Changes in regulation have the financial industry reeling from the increased costs. According to a recent research report by Veris, 66 percent of those surveyed “saw an increase in their AML and OFAC compliance budgets over the last three years”. This has everyone looking for ways to reduce costs and increase efficiency.
The Veris report included responses from 284 senior executives, board members and division heads from a broad variety of financial services providers – retail, corporate and business banking, private and wholesale banking, broker-dealers and money services businesses. Many of those institutions are multinational.
According to the report, the respondents believe the cost of compliance will continue to increase because of continued change in global regulation and the need for more employees to work on AML and OFAC compliance. Even with increased hiring, almost 70 percent of respondents are augmenting their compliance staff with personnel from other functional departments, increasing personnel costs across the business.
High volume of low-value alerts
These hurdles mean you can’t afford to keep doing things the way you always have. You need to find a new way of monitoring your customers and transactions.
Michael Zeldin, Deloitte Financial Advisory Services, says, “Current regulatory guidelines require that the transaction monitoring solutions become more effective [and] business requirements further demand that they also be more efficient.”
Given the increased cost – in both personnel and technology – it’s important to ensure that high-value alerts are presented for analysis. Producing high volumes of low-value alerts can impair investigation effectiveness and regulatory report quality. For example, an investigator faced with a large number of alerts in the queue may rush to resolve them without the thorough investigation needed to file a high-quality report.
Generating productive alerts
Data quality
This process starts with the initial collection of historical alerts and proceeds to the gathering of supporting alert data such as transactions, profiles and events. Because this data may have to be pulled from disparate systems, more time is often spent on data gathering than on the analytics. The goal is to build a data set that can be used in the tuning and evaluation phases. Generally, a minimum of two to three months of historical alerts is needed; some scenarios and techniques may require as much as six to 12 months of data. If the investigations data is reliable, the models will produce more accurate results.
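As a minimal sketch of this data-gathering step – all table and column names here are illustrative assumptions, not taken from the report – historical alerts from a case-management system might be joined to supporting transaction activity to produce one analysis-ready data set:

```python
import pandas as pd

# Hypothetical historical alerts exported from a case-management system
alerts = pd.DataFrame({
    "alert_id": [1, 2, 3],
    "customer_id": ["C1", "C2", "C1"],
    "alert_date": pd.to_datetime(["2024-01-05", "2024-02-10", "2024-03-01"]),
})

# Hypothetical supporting transactions from a separate core-banking system
transactions = pd.DataFrame({
    "customer_id": ["C1", "C1", "C2"],
    "txn_amount": [9500.0, 400.0, 12000.0],
})

# Summarize each customer's activity, then attach it to every alert so the
# tuning and evaluation phases can work from a single consolidated data set
profile = (transactions.groupby("customer_id")["txn_amount"]
           .agg(total_amount="sum", txn_count="count")
           .reset_index())
dataset = alerts.merge(profile, on="customer_id", how="left")
```

In practice the joins span many more sources (events, KYC profiles, watchlist hits), which is why the data-gathering phase tends to dominate the timeline.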
Properly tuning scenario thresholds
Scenario tuning is a test-and-learn process that requires a detailed understanding of the investigation process as well as good data analysis skills. For smaller institutions, iterative what-if analysis may be all that is required to eliminate redundant work items. For larger institutions with a higher-risk profile, a more sophisticated approach is essential. In applying tuning techniques, it is key to have quality historical data and use rigorous back-testing to review before and after results of the tuning techniques.
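The what-if analysis described above can be sketched as a simple back-test: re-run a scenario at several candidate thresholds against historically dispositioned alerts and compare alert volume with the number of productive alerts each threshold would have caught. The data and threshold values below are fabricated for illustration.

```python
# Hypothetical back-testing data: (transaction_amount, was_productive)
# from alerts that investigators have already dispositioned.
historical = [
    (2000, False), (4500, False), (6000, True), (9000, True), (12000, True),
]

def backtest(threshold):
    """Return (alert_volume, productive_alerts) for a candidate threshold."""
    alerted = [productive for amount, productive in historical
               if amount >= threshold]
    return len(alerted), sum(alerted)

# Compare candidate thresholds: the goal is to cut volume without
# losing productive alerts (here, raising the floor from 1000 to 5000
# drops two unproductive alerts and keeps all three productive ones).
results = {t: backtest(t) for t in (1000, 5000, 8000)}
```

Rigorous before-and-after comparison on the same historical window is what makes this test-and-learn loop safe; changing thresholds without it risks silently suppressing productive alerts.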
Relevant scenarios through predictive analytics
A more effective approach is to apply analytics to predict which alerts are investigation-worthy. For example, a decision tree is a common data mining technique that predicts the value of a target, in our case “good alert,” based on several input variables: the transaction and event profile. The decision tree will produce a risk score; the higher the score, the higher the risk. Alerts that exceed a prescribed risk score will be triaged into the analyst’s work queue.
Incorporating feedback from alert investigations
It is important to structure your disposition nomenclature so that relevant work items include more than SAR or Currency Transaction Report (CTR) filings. Often an event is unusual, meriting further review but not a SAR. The process of bucketing good and bad alerts usually involves applying business logic to prepare the data for analysis. The ability to analyze and learn from case histories should be a consideration during workflow design and enactment.
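The bucketing logic above can be as simple as a mapping from disposition codes to training labels. The disposition names here are hypothetical examples of a nomenclature, not a standard taxonomy; the point is that "productive" covers more than SAR/CTR filings.

```python
# Hypothetical disposition nomenclature mapped to model training labels.
# Note that escalations count as productive even though no SAR was filed.
PRODUCTIVE = {"SAR filed", "CTR filed", "Escalated for review"}
NON_PRODUCTIVE = {"False positive", "Duplicate alert", "System error"}

def label_alert(disposition):
    """Bucket a closed alert: 1 = good alert, 0 = bad alert."""
    if disposition in PRODUCTIVE:
        return 1
    if disposition in NON_PRODUCTIVE:
        return 0
    return None  # unmapped disposition: exclude and review the nomenclature

labels = [label_alert(d)
          for d in ("SAR filed", "Duplicate alert", "Escalated for review")]
```

Returning `None` for unmapped dispositions, rather than guessing, keeps the feedback loop honest: unknown codes are flagged for the nomenclature owner instead of silently polluting the training data.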
High-performance scenario tuning
The tuning of detection rules to make them more efficient and effective is a crucial step in optimizing any transaction monitoring system. There have been dozens of articles written about using standard deviation or range-based outlier detection to set scenario parameters. The traditional approach to rule tuning is to iteratively change parameters one rule at a time until incremental performance improvements diminish.
This approach is risky: changing a rule without understanding the outcomes can have dangerous downstream consequences. A number of rule-based tuning approaches have been proposed; however, the risk from over-tuning scenario parameters can far outweigh the long-term benefits.
Investigators are often challenged with a high volume of alerts, many of which are of low quality. Something as simple as changing the dates of the simulation period can provide invaluable information to an analyst. Scenarios can be optimized and alerts generated in a simulated environment to analyze how the number and quality of work items are affected – without assistance from a programmer or IT. High-performance scenario tuning provides a safe work area that does not affect production environments, helping investigation staff work more efficiently and reducing the risk of inaccurate or late regulatory reports.
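The date-window simulation described above can be sketched as follows. The transaction data, scenario rule and threshold are all fabricated; the point is that re-running the same rule over different periods in a sandbox lets an analyst compare alert volumes without touching production.

```python
from datetime import date

# Hypothetical transaction history loaded into a sandbox environment
txns = [
    (date(2024, 1, 3), 9500), (date(2024, 1, 20), 12000),
    (date(2024, 2, 5), 400), (date(2024, 2, 18), 15000),
]

def simulate(start, end, threshold=10000):
    """Count alerts a large-amount scenario would raise in [start, end]."""
    return sum(1 for d, amount in txns
               if start <= d <= end and amount >= threshold)

# Re-run the same scenario over different date windows to see how
# alert volume shifts period to period
jan_alerts = simulate(date(2024, 1, 1), date(2024, 1, 31))
feb_alerts = simulate(date(2024, 2, 1), date(2024, 2, 29))
```

Because the simulation only reads sandbox data, thresholds and date ranges can be varied freely before any change is promoted to the production monitoring system.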
Read more ways you can improve your alerts – SAS® Security Intelligence: The next generation of fraud, compliance and security solutions.
Additional contributor: Cameron Jones, Principal Consultant, SAS Security Intelligence