Fraud detection is a challenging problem. Fraudulent transactions are rare: they represent a very small fraction of activity within an organization. But even a small percentage of activity can quickly turn into big dollar losses without the right tools and systems in place. Criminals are crafty. As traditional fraud schemes fail to pay off, fraudsters learn to change their tactics. The good news is that with advances in machine learning, systems can learn, adapt and uncover emerging patterns to prevent fraud.
Most organizations still use rule-based systems as their primary tool to detect fraud. Rules can do an excellent job of uncovering known patterns; but rules alone aren’t very effective at uncovering unknown schemes, adapting to new fraud patterns, or handling fraudsters’ increasingly sophisticated techniques. This is where machine learning becomes necessary for fraud detection.
Machine learning is all the rage now. Most vendors claim they have some form of machine learning, especially for fraud detection. SAS has been a pioneer in machine learning since the 1980s, when neural networks were first used to combat credit card fraud. But just because we’ve been doing it so long doesn’t mean we’ve been resting on our laurels. In fact, it’s quite the opposite.
Machine learning is a critical part of the fraud detection toolkit. Here’s what you’ll need to get started.
Data sets are only growing larger, and as the volumes increase, so does the challenge of detecting fraud. In fact, data is key when it comes to building machine learning systems. The adage that more data equals better models is true when it comes to fraud detection. Practitioners need their machine learning platform to scale as data and complexity increase. While academic tools often work well with thousands of records and a few megabytes of data, real-world problems are measured in gigabytes or even terabytes of data.
The advantages of multiplicity
There is no single machine learning algorithm or method that works for every problem. Success comes from the ability to try many different machine learning methods, trying variations on them and testing them with a variety of data sets. The data scientist needs a toolkit with a variety of supervised and unsupervised methods – as well as a variety of feature engineering techniques. Finally, there is a creative aspect or “art” to machine learning for fraud detection: applying machine learning in new and novel ways, like combining a variety of supervised and unsupervised methods in one system so it is more effective than any single method alone.
Integration into operations
It should be obvious, but this one’s a challenge for many organizations. Once you have developed a machine learning model, the challenge becomes integrating it with operations. If your data is in Hadoop, it makes sense that your machine learning model can be applied in Hadoop. Similarly, if your data is streaming through real-time systems, you want a machine learning engine that can run in real time, in stream. Portability of the model and integration of the decision logic within operational systems are paramount to stopping fraud at scale – as it occurs.
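One way to think about portability is to package the trained decision logic as a small, dependency-free scoring function that can be shipped into a streaming job. The sketch below is a hypothetical illustration, not any vendor's API; the feature names and coefficients are invented for the example.

```python
import math

def make_scorer(weights, bias, threshold=0.5):
    """Package trained decision logic as a portable, dependency-free
    function that can run inside a streaming or batch scoring job."""
    def score(txn):
        z = sum(w * txn[name] for name, w in weights.items()) + bias
        return 1 / (1 + math.exp(-z))  # logistic score in [0, 1]

    def decide(txn):
        return "hold" if score(txn) >= threshold else "pass"

    return decide

# Hypothetical coefficients exported from an offline training run.
decide = make_scorer({"amount_k": 1.2, "foreign": 2.0}, bias=-4.0)

# Score transactions as they arrive on a stream.
stream = [
    {"amount_k": 0.3, "foreign": 0},  # small domestic purchase
    {"amount_k": 6.0, "foreign": 1},  # large foreign purchase
]
for txn in stream:
    print(decide(txn))
```

In production the same idea shows up as exporting models to a portable format (score code, PMML and the like) so the exact logic trained offline is what runs in the operational system.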
Explaining what a machine learning system is doing is critical; this is often referred to as “white boxing.” Machine learning methods and models are generally black boxes: it’s very difficult (if not impossible) to explain to analysts why they got the score or decision that they received. There are many approaches, including scorecards based on local linear approximation, generation of textual narratives and generation of graphical visualizations. These are approximations, but they can give users insight into the machine learning model and guide the fraud investigation process.
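The local linear approximation idea mentioned above can be sketched very simply: perturb each input of one scored transaction slightly and measure how the black-box score moves. The stand-in model and feature names below are invented for illustration; real explainers (LIME-style surrogates, scorecard reason codes) are more elaborate, but the intuition is the same.

```python
import math

def model_score(features):
    """Stand-in black-box model: any opaque scoring function works here."""
    amount, hour, distance = features
    z = 0.004 * amount + 0.3 * (1 if hour < 6 else 0) + 0.02 * distance - 3
    return 1 / (1 + math.exp(-z))

def local_explanation(score_fn, point, eps=1e-4):
    """Approximate the model around one transaction with a linear model:
    bump each input a little and record the change in score per unit."""
    base = score_fn(point)
    weights = []
    for i in range(len(point)):
        bumped = list(point)
        bumped[i] += eps
        weights.append((score_fn(bumped) - base) / eps)
    return base, weights

# Explain one high-scoring transaction: $900, 2 a.m., 150 miles from home.
base, weights = local_explanation(model_score, [900.0, 2.0, 150.0])
print(round(base, 3), [round(w, 6) for w in weights])
```

The resulting weights read like a scorecard for that one transaction: which inputs pushed the score up, and by roughly how much.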
All things change, and you must adapt over time. Ongoing monitoring of machine learning fraud detection systems is imperative for success. As populations and the underlying data shift, the distribution of system inputs drifts away from what the model expects, degrading overall performance. This isn’t unique to machine learning systems; rule-based systems have the same challenge. But newer machine learning methods can adapt to new and unidentified patterns as underlying changes occur. This eliminates some, but not all, of the machine learning retraining and evaluation steps.
A good monitoring program proactively looks at the data entering the system, evaluates the machine learning model’s predictions and explanations, and alerts administrators to shifting data trends and statistics before dramatic changes affect operations and the bottom line.
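One common way to watch for the data shifts described above is a population stability index (PSI) on each model input: compare the distribution of recent values against the baseline the model was trained on. This is a minimal sketch; the alert cutoff of 0.25 is a widely used rule of thumb, not a universal standard.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a recent
    sample of the same model input. Larger values mean more drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def histogram(values):
        counts = [0] * bins
        for v in values:
            i = min(max(int((v - lo) / width), 0), bins - 1)
            counts[i] += 1
        total = len(values)
        # Small floor keeps log() finite when a bin is empty.
        return [max(c / total, 1e-6) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Baseline transaction amounts vs. a recent window that shifted upward.
baseline = [10, 12, 11, 13, 9, 10, 12, 11, 10, 13] * 50
recent   = [18, 20, 19, 21, 17, 18, 20, 19, 18, 21] * 50
if psi(baseline, recent) > 0.25:
    print("alert: input distribution has shifted, review the model")
```

Run per input on a schedule, this catches drift before it quietly erodes detection rates.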
What about the impact on your customers?
For one financial institution, fighting fraud was a challenge. It had to identify nefarious transactions while maintaining quality customer service. A vigilant fraud detection effort cannot intrude on the customer by flagging – and declining – legitimate transactions.
This financial institution wanted to modernize its rule-based fraud detection system and strike a balance between oversight and customer service. To do this it worked with SAS to implement a machine learning-based fraud detection solution that takes advantage of an ensemble of neural networks to create two different fraud scores:
- A primary fraud score, evaluating the likelihood that an account is in a fraudulent state.
- A transactional score, evaluating the likelihood that an individual transaction is fraudulent.
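A decision policy built on two scores like these might look something like the following. This is a hypothetical sketch, not the institution's actual logic; the cutoffs and the blending rule are invented for illustration.

```python
def combined_decision(account_score, txn_score,
                      account_cut=0.8, txn_cut=0.9, joint_cut=1.3):
    """Hypothetical policy blending an account-level fraud score with a
    per-transaction score. Either score alone can trigger a decline,
    and two moderately high scores together can trigger a review."""
    if account_score >= account_cut or txn_score >= txn_cut:
        return "decline"
    if account_score + txn_score >= joint_cut:
        return "review"
    return "approve"

print(combined_decision(0.85, 0.20))  # decline: account looks compromised
print(combined_decision(0.40, 0.30))  # approve: both scores low
print(combined_decision(0.70, 0.65))  # review: jointly suspicious
```

The point of the two-score design is exactly this kind of nuance: a clean transaction on a compromised account, or a suspicious transaction on a clean account, can each be handled differently.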
Using this approach, the financial institution could correctly identify close to $1 million in monthly transactions that had been erroneously flagged as fraud, and was able to identify an additional $1.5 million per month in fraud that had previously gone undetected. Besides dramatically improving the company’s ability to detect fraud, the solution significantly increased customer satisfaction by reducing the friction between the company and its customers. How? By significantly improving the transaction approval process while increasing the effectiveness of fraud detection.
Think out of the box
Finally, successful machine learning programs have an element of ongoing experimentation. It isn’t enough to just build a machine learning model and let it crunch. Fraudsters are clever, and technology is changing fast. Having a sandbox where data scientists can freely experiment with a variety of methods, data and techniques to combat fraud has become a critical aspect of top programs. Investments in boosting the capacity of data scientists who combat fraud have an almost immediate payback.
Analytics and the AML Paradigm Switch
Financial organizations are deploying artificial intelligence and machine learning in the fight against financial crimes. In this ISMG report, David Stewart, Director of Pre-Sales for the Global Security Intelligence Practice at SAS, offers tips to help separate fact from market hype when reviewing new data analytics tools.
So what exactly is machine learning?
Simply put, machine learning automates the extraction of known and unknown patterns from data. It expresses those patterns as either a formula or instruction set that can be applied to new and unseen data. The machine learns and adapts as outcomes and new patterns are presented to it, and can be either supervised or unsupervised.
Supervised machine learning is a class of analytic methods that attempt to learn from identified records in data; this is often referred to as labeled data. To train a supervised model, you present it with both fraudulent and nonfraudulent records, and the model attempts to infer a function or instruction set that can predict whether fraud is present when applied to new examples. Common supervised machine learning methods include logistic regression, neural networks, decision trees, gradient boosting machines, random forests, support vector machines and many more.
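To make the supervised case concrete, here is a minimal logistic regression trained from scratch on toy labeled transactions. The features and data are invented for illustration; in practice you would use a library implementation on real labeled history.

```python
import math

def train_logistic(rows, labels, lr=0.1, epochs=200):
    """Fit a tiny logistic regression by stochastic gradient descent.
    rows: list of feature vectors; labels: 1 = fraud, 0 = legitimate."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1 / (1 + math.exp(-z))
            err = p - y  # gradient of log loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that a new, unseen transaction is fraudulent."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Toy labeled data: [amount in $1000s, foreign-merchant flag]
X = [[0.2, 0], [0.4, 0], [0.3, 0], [5.0, 1], [4.2, 1], [6.1, 1]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
print(predict(w, b, [5.5, 1]) > 0.5)   # large foreign purchase scores high
print(predict(w, b, [0.25, 0]) < 0.5)  # small domestic purchase scores low
```

This is the "infer a function from labeled examples" step in miniature: the learned weights are the instruction set applied to new data.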
Unsupervised machine learning is different. Since you don’t know what data is fraudulent, you want the model to create a function that describes the structure of the data. This way the model flags anything that doesn’t fit the model as an anomaly. To train an unsupervised model, you simply present it data and the model attempts to infer a function or instruction set that describes the underlying structure and dimensions of the data. This function or instruction set can then be applied to new and unseen data.
The challenge with unsupervised methods is that it’s often hard to assess the accuracy of the detection scheme until data has been worked and verified by hand. Common unsupervised machine learning methods include self-organizing maps, k-means, DBSCAN, kernel density estimates, one-class support vector machines, principal component analysis and many more.
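A minimal stand-in for these unsupervised methods: learn a per-feature profile of "normal" activity from unlabeled data, then flag anything far from that profile as an anomaly. The features, data and the 3-standard-deviation cutoff are invented for illustration; real systems use the richer methods listed above.

```python
import math

def fit_profile(rows):
    """Learn the 'normal' profile from unlabeled data:
    per-feature mean and spread."""
    n = len(rows)
    means = [sum(col) / n for col in zip(*rows)]
    stds = [max(math.sqrt(sum((v - m) ** 2 for v in col) / n), 1e-9)
            for col, m in zip(zip(*rows), means)]
    return means, stds

def anomaly_score(means, stds, x):
    """Distance of a point from the learned profile,
    measured in standard deviations per feature."""
    return math.sqrt(sum(((v - m) / s) ** 2
                         for v, m, s in zip(x, means, stds)))

# Unlabeled history: [amount, purchases that day] -- no fraud labels needed.
normal = [[50, 1], [60, 1], [55, 2], [52, 1], [58, 2], [54, 1]]
means, stds = fit_profile(normal)
print(anomaly_score(means, stds, [56, 1]) < 3)    # typical: fits the model
print(anomaly_score(means, stds, [900, 14]) > 3)  # flagged as an anomaly
```

Note what’s missing: no labels anywhere. The model only describes structure, which is why flagged anomalies still need hand verification before you can measure accuracy.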
And onto artificial intelligence
We’ve come a long way from statistical analysis to machine learning, but the momentum of machine learning and artificial intelligence is gaining speed. To find out more, check out The Enterprise AI Promise, which describes a phone survey of executives from 100 organizations across Europe in banking, insurance, manufacturing, retail, government and other industries. The SAS study was conducted in 2017 to measure how business leaders felt about AI’s potential, how they use it today and plan to use it in the future, and what challenges they face.