Financial services institutions are in a constant struggle to stay ahead of fraud and other financial crimes. The reasons are simple: financial fraud harms people, costs money, and undermines brand trust. When fraud occurs, the financial institutions involved often bear a disproportionate cost. The most recent LexisNexis True Cost of Fraud study found that every $1 of fraud perpetrated costs financial services organizations $3.64. And those dollars add up. Professional services and consulting firm PwC recently calculated the cost of financial fraud in the U.S. at $42 billion.
Perpetrators of financial fraud crimes are relentless. According to the True Cost of Fraud study, financial services organizations deal with an average of 1,262 attempted fraudulent transactions per month, of which nearly half—625—are successful. Compounding the problem is that financial fraud comes in many guises, making it difficult for financial institutions to detect and prevent. The list of types of financial fraud is long, including identity theft, account takeover, payment card fraud, synthetic identity fraud, and money laundering.
The Bank Administration Institute (BAI) estimates that the financial services industry spends $9.3 billion annually on fraud detection and prevention tools. Those tools detect fraudulent financial transactions by looking for anomalies in typical patterns. Those anomalies can provide clues that something isn't right, pointing analysts toward suspicious transactions that they can then examine more closely to understand the reason for the anomaly and determine whether the activity was legitimate or illicit. Such decisions are based on data. And better predictions require more and better data.
Anomalies can be single events (an attempted login or transaction from a high-risk geography, or an attempted transaction disproportionate to an account's assets or credit limit), or a collection of events (multiple simultaneous transactions from widely differing geographies, or a series of failed login attempts). Traditionally, fraud detection tools operate with prediction models based on known rules that establish parameters for making their determinations. However, many criminals engaged in financial fraud are sophisticated. They understand the "rules" used to detect and prevent them from perpetrating their fraud, and operate in ways that hide their intent long enough to allow them to succeed. Even if a criminal's activities are detected, they can complete the transaction before preventative measures are taken.
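To make the idea concrete, rules-based detection of the single-event anomalies described above can be sketched as a set of static checks. This is a minimal illustration only; the field names (geo, amount, credit_limit, failed_logins) and thresholds are hypothetical, not those of any real fraud system.

```python
# Minimal sketch of static, rules-based fraud flagging.
# All field names and thresholds are illustrative assumptions.

HIGH_RISK_GEOS = {"XX", "YY"}  # placeholder country codes

def score_transaction(txn: dict) -> list[str]:
    """Return the list of static rules a transaction trips."""
    flags = []
    if txn.get("geo") in HIGH_RISK_GEOS:
        flags.append("high-risk geography")
    # Transaction amount disproportionate to the account's credit limit.
    if txn.get("amount", 0) > 0.9 * txn.get("credit_limit", float("inf")):
        flags.append("amount near credit limit")
    # A burst of failed logins preceding the transaction.
    if txn.get("failed_logins", 0) >= 3:
        flags.append("repeated failed logins")
    return flags

txn = {"geo": "XX", "amount": 9500, "credit_limit": 10000, "failed_logins": 1}
print(score_transaction(txn))  # ['high-risk geography', 'amount near credit limit']
```

The weakness the article points to is visible here: the thresholds are fixed and knowable, so a determined fraudster can simply operate just below them.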
That is why rules-based detection based on static prediction models is insufficient. The problem with static models is that legitimate behavior can change (suddenly or over time), and that fraudulent behavior will adapt. The model you create today could be obsolete in a matter of days. Dynamic prediction models, characterized by accuracy plus speed and capable of evolving as new data is generated, are the key to detecting and preventing financial fraud. Machine learning must be part of the equation.
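The difference between a static and a dynamic model can be sketched with a toy example: a baseline of transaction amounts that updates itself as new data arrives, so gradual shifts in legitimate behavior are absorbed while sharp deviations are flagged. The decay rate, threshold, and warmup period below are illustrative assumptions, not a production fraud model.

```python
# Toy dynamic model: an exponentially weighted baseline that adapts
# with every new transaction. Parameters are illustrative only.

class AdaptiveBaseline:
    def __init__(self, alpha: float = 0.1, k: float = 3.0, warmup: int = 10):
        self.alpha = alpha    # how quickly the baseline adapts
        self.k = k            # anomaly threshold, in standard deviations
        self.warmup = warmup  # observations needed before flagging
        self.mean = None
        self.var = 0.0
        self.n = 0

    def update(self, amount: float) -> bool:
        """Flag the amount if anomalous, then fold it into the baseline."""
        self.n += 1
        if self.mean is None:
            self.mean = amount
            return False
        deviation = amount - self.mean
        std = self.var ** 0.5
        anomalous = self.n > self.warmup and abs(deviation) > self.k * std
        # Update the baseline regardless, so legitimate behavior shifts
        # (a new job, a relocation) are gradually absorbed.
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
        return anomalous
```

A stream of amounts around $100 establishes a baseline; a sudden $10,000 transaction trips the threshold, while the baseline itself keeps evolving afterward. Real dynamic models learn far richer patterns, but the principle, a model that is never frozen, is the same.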
For example: if a customer's financial transaction patterns change because of a new job that requires them to relocate overseas, the prediction model in use must be able to immediately recognize that change. But it also needs to be able to distinguish legitimate transactions from those being attempted by offshore criminal elements, and to be able to intervene in the latter case. Or, if an identity thief tries to open a fraudulent account in someone's name after generating a synthetic profile using information harvested online, aggregated, and sold on the dark web, the organization involved needs to be able to analyze all available data to confirm that fraud is being attempted.
Data is vital to creating and training the prediction models organizations use to detect and prevent fraud. And the best data is fresh, accurate, and detailed. However, there are laws in place that dictate how some customer data (data that could play a critical role in the development of highly effective prediction models) can be used. Privacy and data security regulations mean that financial services organizations must be very careful how they collect, secure, and use personally identifiable information (PII) and payment card information (PCI). Those restrictions often mean that data that could otherwise be operationalized to generate more accurate prediction models, fine-tuned to quickly and accurately detect and prevent financial fraud, is instead encrypted and left in storage.
Encryption protects data, but it has traditionally meant that data so protected can't be used by artificial intelligence. And the risks of decrypting data for the purpose of generating predictions, or other types of decision intelligence, are too great. In fact, according to a recent IDC report, 68% of all data collected by organizations goes unused because of security risks.
Cape Privacy solves this problem by enabling AI-based fraud detection predictions on encrypted data for customers who store data in and use the tools available through Snowflake. Using Cape Privacy's secure multiparty computation (MPC) platform, sensitive data can be collected, encrypted at the point of capture, moved into Snowflake, and operationalized to generate powerful AI predictions that detect and prevent financial fraud—without ever being decrypted. Fraud detection models run on consolidated transaction datasets inside the Snowflake environment.
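The core idea behind MPC can be illustrated with one of its simplest building blocks, additive secret sharing: each sensitive value is split into random shares, and computation proceeds on the shares, so no single party ever sees an underlying amount. This is a teaching toy under simplified assumptions, not Cape Privacy's actual protocol.

```python
# Simplified illustration of additive secret sharing, one building
# block of secure multiparty computation (MPC). Not a real protocol.
import secrets

MODULUS = 2**64

def share(value: int, n_parties: int = 3) -> list[int]:
    """Split a value into n random additive shares modulo MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine shares to recover the original value."""
    return sum(shares) % MODULUS

# Each party holds one share per transaction. Summing shares
# column-wise yields shares of the total, so an aggregate can be
# computed without any party seeing an individual amount.
amounts = [2500, 40000, 125]
shared = [share(a) for a in amounts]
per_party_sums = [sum(col) % MODULUS for col in zip(*shared)]
print(reconstruct(per_party_sums))  # 42625
```

Each individual share is a uniformly random number that reveals nothing on its own; only the agreed-upon result of the computation is ever reconstructed.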
That means machine learning tools, run in the Snowflake environment, are able to analyze patterns across transactions using prediction models that are more accurate because they were created, securely, using better data. And having data for fraud detection in a single Snowflake instance enables automated predictions for fraud to scale.
With Cape Privacy and Snowflake, financial services firms can operationalize their high-value, sensitive financial data assets and leverage the full potential of that data, keeping it fully encrypted throughout the entire process. To learn more about Cape Privacy and our platform, visit https://capeprivacy.com/about, or go to https://capeprivacy.com/contact for more information.