Financial institutions are working hard to fight financial crime and bank fraud driven by demands to protect their assets,
as well as by regulatory compliance. One area of specific focus is that of Anti-Money Laundering (AML). For many institutions,
there are several challenges to creating a sustainable AML organization – one that can respond to regulatory reporting mandates
and provide information to support “business as usual” demands – while also finding, developing and retaining the talent needed
to accomplish these critical activities.
While standardization, centralization and optimization may be the ultimate objectives, individual opportunities should be identified, converted and used as a foundation on which to build. Activities such as compliance case management or analytics reporting around risk are often prime targets for beginning the journey toward standardization and/or centralization. Some banks and financial institutions may want to look within a specific line of business, while others may want to consider a broader range of activities; the key is to start with a specific focus and develop a methodology that works and that can be leveraged.
To accomplish this, firms need to get three things right:
1. Communication and culture:
- a. Too many companies wait until they are faced with regulatory enforcement actions before updating their AML communication and systems.
- b. Firms need to enhance their communication with strong messages from leadership and a consistent “tone at the top” to help cascade risk culture across the organization.
2. Technology and tools:
- a. Financial institutions should consider AML and sanctions solutions and screening software that can support regulatory requirements while minimizing time, resources and operational risk.
- b. The platforms in scope should include:
  - i. Visual analytics tools or real-time dashboards for identifying patterns, anomalies and trends.
  - ii. A data warehouse and data feeds to access the right data.
  - iii. An advanced screening solution using efficient name-matching algorithms to monitor and detect alerts.
  - iv. Case management to handle robust workflow and generate reports.
3. Data and monitoring:
- a. Improve the effectiveness of AML monitoring systems by instituting periodic reviews of AML source data.
- b. Data quality and data integrity efforts should focus on data sourcing analysis and data quality analysis, assessing completeness, conformity, consistency, accuracy and duplication.
- c. Know Your Customer (KYC) and sanctions screening information should be integrated to capture the right information up front and should be included in sanctions data feeds.
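The data-quality dimensions above (completeness, conformity, duplication, and so on) can be expressed as simple automated checks. Below is a minimal, illustrative sketch in Python; the record fields (`name`, `country`, `account_id`) and the specific rules are assumptions for demonstration, not part of any particular vendor's solution:

```python
import re

def profile_customer_records(records):
    """Compute simple data-quality metrics over a list of customer dicts.
    The field names here are illustrative placeholders."""
    total = len(records)
    # Completeness: both name and country are populated
    complete = sum(1 for r in records if r.get("name") and r.get("country"))
    # Conformity: country codes should look like ISO 3166 alpha-2
    conform = sum(1 for r in records
                  if re.fullmatch(r"[A-Z]{2}", r.get("country", "")))
    # Duplication: the same account_id appearing more than once
    seen, dupes = set(), 0
    for r in records:
        key = r.get("account_id")
        if key in seen:
            dupes += 1
        seen.add(key)
    return {
        "completeness": complete / total,
        "conformity": conform / total,
        "duplicates": dupes,
    }

records = [
    {"name": "Acme Ltd", "country": "GB", "account_id": "A1"},
    {"name": "", "country": "US", "account_id": "A2"},      # incomplete
    {"name": "Acme Ltd", "country": "gb", "account_id": "A1"},  # duplicate, bad code
]
print(profile_customer_records(records))
```

In practice these checks would run as scheduled jobs against the AML source feeds, with the thresholds and rules agreed with the compliance data governance team.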
The valoores Financial Crime and Compliance suite of applications helps financial institutions tackle today's tactical risk and compliance problems while future-proofing compliance spend across regulatory mandates.
Use Case 1 - Evolution in Transaction Monitoring
Current transaction monitoring technology relies on the development of declarative models/scenarios that are applied to sparse transaction data. Institutions typically maintain over 200 scenarios, which cumulatively generate alerts that are roughly 99.9% false positives, driving up the cost of compliance because each alert must be investigated. The problem with AML transaction monitoring is not that model development teams lack the skill or expertise to develop sophisticated models; rather, it lies in the technical limitations imposed on these teams by rigid and outdated transaction monitoring tools.
The Big Data Solution
With Big Data technologies, model development teams will have much more sophisticated capabilities available to them for constructing AML transaction monitoring scenarios. Using Big Data technology for transaction monitoring provides the following advantages over traditional methods:
- Ability to create more sophisticated models without the limitations of legacy technology - complete freedom to develop complex detection models without having to worry about your transaction monitoring tool's ability to handle the code.
- Ability to leverage additional transaction data and metadata - you will no longer be restricted to using only a subset of fields from your transaction source data. Big Data technologies allow you to leverage structured, semi-structured, and unstructured data in the development of your models. This becomes particularly helpful when processing wire transfer data, which may follow inconsistent formats and contain semi-structured data.
- Apply machine learning techniques to self-tune models over time - loop investigation disposition data back into the models to provide supervised training and self-tuning of detection models. Leverage unsupervised machine learning techniques to discover new attributes that may be strongly correlated with AML risk.
- Apply behavioral modeling analytics to support anomaly-based alert generation - create models that alert on anomalous behavior. Create natural clusters of peer groups and detect activity that deviates from expected behavior.
- Ability to test and iterate on new models by leveraging the power of distributed computing - run and test models across the data with much reduced processing time.
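The anomaly-based alerting idea can be sketched as a peer-deviation score: estimate each customer's peer-group distribution without that customer, then alert when the customer's activity sits far above it. This is a deliberately simplified Python toy, not a production model; the peer groups, volumes, and z-score threshold are invented for illustration:

```python
from statistics import mean, stdev

def anomaly_alerts(volumes_by_peer_group, z_threshold=3.0):
    """Flag customers whose activity deviates from their peer group.
    volumes_by_peer_group: {peer_group: {customer_id: monthly_volume}}"""
    alerts = []
    for group, volumes in volumes_by_peer_group.items():
        for customer, v in volumes.items():
            # Leave-one-out: estimate the peer distribution without this customer,
            # so a single extreme outlier does not mask itself
            peers = [x for c, x in volumes.items() if c != customer]
            if len(peers) < 3:
                continue  # too few peers to estimate a distribution
            mu, sigma = mean(peers), stdev(peers)
            if sigma > 0 and (v - mu) / sigma > z_threshold:
                alerts.append((group, customer))
    return alerts

data = {"gas_stations": {"c1": 10_000, "c2": 12_000, "c3": 11_000,
                         "c4": 9_500, "c5": 250_000}}  # c5 is far above its peers
print(anomaly_alerts(data))  # [('gas_stations', 'c5')]
```

A real implementation would run distributed across the full transaction history, use many behavioral features rather than a single volume figure, and tune the threshold from investigation disposition data.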
Use Case 2 - Enhanced Reporting Capabilities
At this point, most institutions have implemented large-scale enterprise data warehouses, which they use as a central archive for bank data. These data warehouses have required multi-million-dollar investments to implement and are also expensive to maintain. Additionally, these environments rely on rigid data structures, which makes it hard to be nimble and responsive to analytics needs. Finally, these environments make it more expensive to bring in new data sources and integrate them with existing data.
The Big Data Solution
The Big Data approach is fundamentally different from the data warehouse approach. Rather than spending millions of dollars extracting, transforming, and loading data into a predetermined schema, the best practice is to extract data in its native format and load it into the “Data Lake” residing in the Big Data environment. This approach substantially reduces the cost and time it takes to bring in and combine large data sets in a common environment. With this paradigm, organizations are no longer penalized (by high price tags and long lead times) for bringing in “all of the data” and therefore do not need to be overly careful about which data they bring over and which data they leave behind. Additionally, storing data in the Big Data environment is relatively less expensive than in traditional data stores, as Big Data typically runs on commodity hardware.
Once the data resides in the Data Lake, data scientists can execute any transformations they require at the time they query the data. This “query on read” approach provides much more flexibility to run reports, run analytics, and gain insights from your AML data. Whereas the traditional data storage and “query on write” approach assumes that you know all of the questions you will ever want to ask of your data, the “query on read” / Data Lake approach assumes that you will always have new questions and provides a platform on which you can quickly create question-focused datasets.
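The “query on read” pattern can be illustrated without any particular platform: records are landed as-is (here, JSON lines with inconsistent shapes), and each new question applies its own transformation at read time. In a real Big Data environment the same pattern would typically be expressed in a tool such as Spark or Hive; the field names below are illustrative assumptions:

```python
import json
from io import StringIO

# Raw wire records landed in the "lake" in their native, inconsistent shapes
raw_lake = StringIO("\n".join([
    json.dumps({"amt": "1500.00", "ccy": "USD", "orig_country": "US"}),
    json.dumps({"amount": 9800, "currency": "EUR", "origin": "DE"}),
]))

def query_on_read(lake, question_fields):
    """Apply a question-specific mapping while reading the raw records.
    question_fields maps each canonical name to candidate raw field names."""
    for line in lake:
        rec = json.loads(line)
        yield {canon: next((rec[f] for f in candidates if f in rec), None)
               for canon, candidates in question_fields.items()}

# The schema is decided by today's question, not by yesterday's ETL job
fields = {"amount": ["amt", "amount"], "country": ["orig_country", "origin"]}
rows = list(query_on_read(raw_lake, fields))
print(rows)
```

The point of the sketch is the ordering: nothing was normalized on the way in, so tomorrow's question can map the same raw records to an entirely different canonical schema.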
Let’s consider a practical example: regulators usually ask standard questions that can be answered by the traditional reports you have already generated. Sometimes, however, they ask a very specific question that is not satisfied by any of your current reports. Let’s analyze how this scenario would play out with the traditional approach to data management vs. the Big Data / Data Lake approach.
Traditional Approach: As you analyze the data available to you in your standardized data table, you realize you are missing a key field that you had not previously thought you would need from the source system. In this case you put in a request to your technology team, which then has to go back to the source systems, identify the corresponding fields, write new extraction scripts to bring the new data into your data warehouse, write transform scripts to get it into the proper format, and then write new queries to support the regulator’s question. While your institution may have streamlined processes and procedures for handling these scenarios, they often result in long delays, increased costs, and a lot of unnecessary paperwork.
Big Data Approach: In this approach, you would determine the data fields that will be required to answer the regulator’s question and then go to your data lake to locate the corresponding fields. Once you identify these fields, you will write a query which retrieves the data and transforms it into the format that you need in order to answer the regulator’s question. As you can see from this example, the value in the Data Lake approach is that you always have access to the data that you might need without having to return to the source.
Use Case 3 - Platform for Data Exploration and Intelligence Analysis
Combating financial crime often involves searching for patterns, trends, or anomalies across millions of customers and/or billions of transactions. The questions you are seeking to answer across this data set change frequently. This week you may be looking for signals of human trafficking among your customers, and next week you may be looking for customers who are really money service businesses. Not only do these questions change frequently, but they and their answers are often complex; if you are limited to traditional data exploration tools, relational databases, and structured data, the answers may be impossible to find.
The Big Data Solution
The goal of implementing a Big Data platform is to provide flexibility and robust tools with which AML teams can openly explore the data to find the answers they are looking for. Big Data not only allows AML programs to aggregate large volumes of data from a large variety of sources, but also provides a large ecosystem of tools that can be leveraged depending on the question you are trying to answer. Capabilities such as natural language processing, similarity clustering across massive data sets, and machine learning, which are very difficult (if not impossible) to achieve with traditional relational databases and tools, give your AML analytics team a new toolbox for solving these problems.
A few examples of the types of data exploration that Big Data may enable include:
- SAR/STR Clustering - generate clusters based on attributes within STR/SAR filings to uncover common attributes and unexpected relationships.
- Peer Group Clustering - cluster large volumes of businesses into peer groups to identify deviations from expected behavior. For example, find a business that is supposed to be a gas station but clusters closer to an MSB.
- Negative News Analysis - apply NLP and sentiment analysis to AML negative news feeds to identify common AML trends and risks as they emerge.
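The peer-group example above can be reduced to a toy sketch: represent each business by a couple of behavioral features and check whether it sits closer to its registered peer group's centroid or to a different one. The centroids, features, and the gas-station-versus-MSB framing are invented for illustration; real peer-group clustering would learn the groups from the data rather than hard-code them:

```python
from math import dist

# Illustrative behavioral centroids per peer group:
# (average transaction size, share of cash activity)
peer_centroids = {
    "gas_station": (45.0, 0.30),
    "msb":         (800.0, 0.85),
}

def nearest_peer_group(features):
    """Return the peer group whose behavioral centroid is closest."""
    return min(peer_centroids, key=lambda g: dist(features, peer_centroids[g]))

# A customer registered as a gas station whose behavior looks like an MSB
registered = "gas_station"
observed = nearest_peer_group((720.0, 0.80))
if observed != registered:
    print(f"anomaly: registered as {registered}, behaves like {observed}")
```

At scale, the clustering itself (e.g. k-means over millions of businesses) is where the distributed Big Data tooling earns its keep; the distance-to-centroid comparison shown here is the cheap final step.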
Use Case 4 - Large Scale Processing of New & Unstructured Data Sources
Much of what financial institutions are looking for when it comes to AML risk cannot be found with conventional if/then statements running against transaction data. Many times, what we want to know is whether one of our customers is the subject of some kind of negative news. Other times, we may want to understand a trend in the narratives that our AML analyst teams write as an output of their investigations; perhaps there is a trend of MSBs disguising themselves as retail stores, or of marijuana dispensaries disguising themselves as pharmacies. These questions cannot be answered strictly by looking at the structured data in our transaction monitoring tools, and therefore we ask humans to collect the data and analyze it for these trends.
The Big Data Solution
One of the largest benefits of using Big Data technology is the capability to process unstructured data. With Big Data tools, the machine can store, process, and extract intelligence from large blocks of text. This is ideal for AML as we can use these tools to:
- Process negative news - archive negative news feeds from common AML data vendors and process this data looking for names, business names, keywords, trends, and sentiment. Let the machine take the first pass at identifying whether a negative news article truly relates to your customer. Look for trends in negative news; for example, detect an increase in wire fraud out of East Asia among your customers.
- Process case file narratives - extract intelligence and trends from the narratives that your AML analyst team generates. Find common AML risks that your analysts are reporting.
- Extract intelligence from SARs/STRs - extract intelligence and trends from the SARs that your team files. Are there common themes among certain types of customers and specific AML risks?
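A first pass at the narrative-mining idea above needs nothing more exotic than term counting: surface the content words that recur across investigation narratives as candidate themes for an analyst to review. The sample narratives and stopword list below are invented for illustration; a real pipeline would use a proper NLP library and run over thousands of documents:

```python
import re
from collections import Counter

narratives = [
    "Customer operates as retail store but transaction patterns resemble an MSB.",
    "Possible MSB activity disguised as a retail storefront; frequent wire transfers.",
    "Dispensary transactions coded as pharmacy purchases; structuring suspected.",
]

STOPWORDS = {"as", "a", "an", "the", "but", "and", "or", "of"}

def trending_terms(texts, top_n=3):
    """Count content words across narratives to surface recurring risk themes."""
    words = []
    for text in texts:
        words += [w for w in re.findall(r"[a-z]+", text.lower())
                  if w not in STOPWORDS and len(w) > 2]
    return Counter(words).most_common(top_n)

print(trending_terms(narratives))
```

Even on this tiny sample, "retail" and "msb" surface together, hinting at the MSB-disguised-as-retail trend described above; at scale, the same counting step would run distributed across the full case archive.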
Use Case 5 - Next Generation Case Management
Case management tools are at the center of every AML program and serve as the system of record for all AML investigations, including transaction monitoring, high-risk customer, and watchlist management investigations. Unfortunately, most case management systems fail to empower AML programs with added efficiency; instead, they become cumbersome systems that analysts have to work around in order to do their jobs. Most analysts find ways of minimizing the time they spend in their case management system. They open a case assigned to them in the system, quickly toggle out of the application to conduct their investigation, and then return to the system at the very end to upload the output of their investigation and mark it complete. As institutions continue to load these systems with more and more records and data, the systems slow down and become even less usable.
The Big Data Solution
The next generation of case management solutions, based on Big Data technologies, will unlock automation and efficiency in AML investigations. These next-generation systems will take advantage of Big Data and other contemporary technologies to:
- Aggregate investigation data - rather than logging into a dozen different applications outside the case management tool to hunt down the transaction, alert, and customer data needed for an investigation, next-generation systems will present all of the relevant data for a case to the analyst in a single interface. This aggregation is possible due to the Big Data environment’s strength in handling variety in data formats. Using traditional relational databases, aggregating data can be very expensive and difficult because it requires transforming the data into a single canonical schema. This can be nearly impossible when the source systems are drastically different, as with wire transfer transaction data vs. checking account transactions. The problem is more easily addressed in the Big Data environment because you are not required to execute these transformations ahead of time.
- Link/Network Analysis - another value proposition of Big Data case management tools is the ability to leverage Big Data graph capabilities for link/network analysis. Many tools in the Big Data ecosystem can map nth-degree network connections and provide an interface that lets analysts shuffle through these network graphs to find risky connections.
- Entity Resolution / Single View of the Customer - one of the most persistent challenges faced by AML programs is drilling down to a single view of their customers. The complexity of institutions that have retail, wealth management, capital markets, and other siloed lines of business makes it very difficult for AML programs to understand the comprehensive relationship of a single customer with the institution. This is, of course, even more difficult for institutions with a global footprint. Beyond just bringing the data together, banks must have a way of merging customer records that probably represent the same individual. Can you imagine how many “John Smiths” there may be at a financial institution with tens or hundreds of millions of customers? The Big Data environment provides an ecosystem and analytic technologies that allow data scientists to construct more advanced matching algorithms. Using a larger set of matching attributes allows us to snap together similar entities with greater confidence. By incorporating these matching algorithms into a case management tool, institutions can merge probable matches and provide an interface in which analysts can review the probable matches and affirm or deny each match.
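The matching step at the heart of entity resolution can be sketched with nothing more than normalized string similarity. The sketch below uses Python's standard-library `difflib` and a pairwise comparison for clarity; a production system would also compare dates of birth, addresses, and identifiers, use phonetic matching, and apply blocking to avoid the O(n²) scan:

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Normalized similarity between two names, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def probable_matches(customers, threshold=0.85):
    """Pairwise comparison for clarity; real systems block first to cut the
    quadratic cost. The threshold is an illustrative assumption."""
    pairs = []
    for i in range(len(customers)):
        for j in range(i + 1, len(customers)):
            score = name_similarity(customers[i], customers[j])
            if score >= threshold:
                pairs.append((customers[i], customers[j], round(score, 2)))
    return pairs

customers = ["John A. Smith", "Smith, John", "john a smith", "Jane Doe"]
print(probable_matches(customers))
```

Note what the toy gets wrong: it pairs "John A. Smith" with "john a smith" but misses "Smith, John", because raw string similarity knows nothing about name order. That gap is exactly why the richer matching attributes and algorithms described above matter.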