
Cape Privacy and the Next Century of Artificial Intelligence

by Reesha Dedhia

The second in a planned series of reports, issued every five years over the next century to track the progress, impact, and evolution of artificial intelligence, was published recently. Led by Stanford University, the ambitious One Hundred Year Study on Artificial Intelligence (AI100) issued its first report in 2016. The 2021 report was written by experts from The Alan Turing Institute, Brown University, California Polytechnic University, Cornell, Duke University, Georgia Institute of Technology, Google, Harvard University, LinkedIn, the London School of Economics, McKinsey & Company, MIT, Ohio State University, Okinawa Institute of Science and Technology Graduate University, Oxford, Portland State University, the Santa Fe Institute, Sony AI, University of Edinburgh, University of Melbourne, University of New South Wales, University of North Carolina, University of Pennsylvania, University of Texas at Austin, and University of Toronto. 

While the 2016 report focused mostly on establishing a baseline for defining artificial intelligence and made some broad policy recommendations, the 2021 report is based on the conclusions from a series of workshops exploring questions on the development of artificial intelligence, its applications in all aspects of society, challenges associated with the use of AI, public sentiment, and more. 

One part we found interesting was in response to the question, “What are the most important advances in AI?” An observation from that section said: 

“AI tools now exist for identifying a variety of eye and skin disorders, detecting cancers, and supporting measurements needed for clinical diagnosis. For financial institutions, uses of AI are going beyond detecting fraud and enhancing cybersecurity to automating legal and compliance documentation and detecting money laundering. Recommender systems now have a dramatic influence on people’s consumption of products, services, and content, but they raise significant ethical concerns.”

This statement summarizes the friction that exists between the potential for artificial intelligence to be a catalyst for doing tremendous good in the world and the means by which that good can be achieved. That’s because, in order for AI to function at its maximum potential, it needs a lot of detailed, sensitive data. In the realm of health sciences that data often consists of either personal health information (PHI) collected from individual patients, or high-value intellectual property (IP) generated by the collective efforts of individual organizations’ research and development teams. 

In both cases, the security of that data is a high priority. PHI is protected by law under regulations such as the Health Insurance Portability and Accountability Act (HIPAA), whereas IP is closely guarded by the organizations that spend thousands of hours and millions of dollars to create it. If that data could be shared safely with others, while also preserving its privacy and integrity, the speed of innovation could be greatly accelerated. 

The rapid development of vaccines against COVID-19 is a taste of how data sharing in health sciences can produce results more quickly. If we are already able to use AI to accurately diagnose certain ailments based on imagery, consider how insights drawn from tens of thousands—even millions—of health records associated with other diseases could be used to inform powerful algorithms for the diagnosis and treatment of a wide range of maladies. It’s an exciting prospect. And it’s possible when that data is secured and modeled using a tool like Cape Privacy’s encrypted learning platform.

Privacy-preserving technologies like encrypted learning combine advanced cryptography with sophisticated machine learning techniques, making it possible to use private data to develop sophisticated data models while keeping that data private. That’s a big advantage over traditional approaches to data modeling that rely on techniques like federated learning, synthetic data, homomorphic encryption, or data clean rooms. Those traditional methods can be expensive and often still carry some risk of exposure.
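To make the idea of computing on data no one can see concrete, here is a deliberately simplified toy (not Cape Privacy’s actual implementation): additive secret sharing, one of the cryptographic building blocks behind secure computation. Two parties split their private values into random-looking shares, compute on the shares locally, and only the combined result is ever reconstructed:

```python
import secrets

# All arithmetic is modulo a large prime, so individual shares look random.
PRIME = 2**61 - 1

def share(value, n_parties=2):
    """Split an integer into n additive shares that sum to value mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares to recover the hidden value."""
    return sum(shares) % PRIME

# Two hospitals each hold a private patient count.
a_shares = share(120)
b_shares = share(340)

# Each party adds the shares it holds locally; no one ever
# sees 120 or 340 in the clear, yet the joint total is exact.
sum_shares = [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 460
```

Real systems layer many such protocols (for multiplication, comparison, and full model training) on top of this primitive, but the core property is the same: each share on its own reveals nothing.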

With encrypted learning, no one can ever see the sensitive data or the models being developed, because neither is ever decrypted. Sensitive PHI and high-value IP remain secure because no other human has access to that data. 

When considered in that context, artificial intelligence will likely play a key role in the future of health sciences and healthcare, supplementing the human element with a multiplying effect that eases the detrimental effects of the global shortage of skilled health professionals. In fact, the AI100 report says that, “Some patients may even express a preference for robotic care in contexts where privacy is an acute concern, as with intimate bodily functions or other activities where a non-judgmental helper may preserve privacy or dignity.”

Beyond health care and health sciences, the AI100 report addresses a number of other industries where the power of artificial intelligence can be applied. In financial services, for example, the report says, “High-frequency trading relies on a combination of models as well as the ability to make fast decisions.”

Indeed, one Cape Privacy customer is benefitting from that very application. Using our platform to analyze 17 years of third-party credit card transaction data that could not have been shared otherwise, that $400B firm was able to tune its data models to identify trends in consumer spending more precisely and improve per-trade performance by a fraction of a percent. At volume, that translates to tens of millions of dollars in profit for the firm.

Cape Privacy is also being used for more than just boosting trade performance. In fact, encrypted learning has many practical business and strategic applications in financial services, including helping banks to automatically detect and prevent financial fraud and money laundering schemes. By flagging unusual transactions and other anomalies, encrypted learning can identify fraud or other suspicious activity with greater accuracy and speed, supporting compliance with the Bank Secrecy Act, investigations into money laundering, and monitoring of employees who may be engaged in improper or illegal practices. 
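The “flagging unusual transactions” step can be sketched in a few lines. This is a deliberately minimal illustration (a simple z-score outlier check, not Cape Privacy’s model); production systems would combine many features and learned models, computed over encrypted inputs:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Flag transactions whose amount deviates strongly from the mean.

    A transaction is flagged when its z-score (distance from the mean,
    in standard deviations) exceeds the threshold.
    """
    mu, sigma = mean(amounts), stdev(amounts)
    return [amt for amt in amounts if abs(amt - mu) / sigma > threshold]

# Mostly routine card transactions, plus one suspicious transfer.
transactions = [23.50, 41.00, 18.75, 36.20, 29.90, 9500.00, 44.10, 31.65]
print(flag_anomalies(transactions))  # [9500.0]
```

Under encrypted learning, the same kind of statistic would be computed without any party seeing the individual transaction amounts in the clear.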

Artificial intelligence and encrypted learning can also play a critical role in risk management in financial services. By enabling a more thorough analysis of historic data, including internal and third-party data, encrypted learning can be used to augment risk management control practices, enhancing lending decisions, loan restructuring and recovery, and loss forecasting. It may also be used in liquidity risk management, for example, to enhance monitoring of market conditions or collateral management. 

And while not a cybersecurity tool in the traditional sense, encrypted learning can complement cyber threat detection and prevention, for example through forensic analysis of malicious files and by tracking activity that may indicate compromised systems or accounts.

Cape Privacy is proud to play a role in the next century of AI, and its use for the benefit of the world we live in. Not just in health sciences and financial services, but in every area of research that affects life and business. For more information about Cape Privacy’s encrypted learning platform, watch this handy video; to be a part of our team, visit our career page; or for more information, get in touch.
