With an estimated 2% to 3% of world GDP laundered each year, anti-money laundering and countering the financing of terrorism (AML-CFT) compliance has become a major objective for regulators and financial institutions worldwide. In Europe alone, banks spend more than €20 billion annually on AML-CFT compliance. Yet these efforts are largely ineffective: less than 1% of global illicit financial flows is ever seized.
The study of the interactions between the criminal economy and financial markets is a burgeoning field of research in economics, law, statistics and data science. Given the massive investments in AML-CFT compliance, one would expect regulatory action to be based on scientific studies and empirical research weighing the costs and benefits of regulations and enforcement actions. Yet few, if any, such studies exist. Determining the optimal level of banking regulation necessarily requires some form of cost-benefit analysis: the benefits of regulation must outweigh its costs to society.
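To make this condition explicit, a stylized formulation (our own illustrative notation, not drawn from any cited study) states it as an optimization over the level of regulation r:

```latex
% Stylized cost-benefit condition for a regulation level r (illustrative notation)
\[
  r^{*} = \arg\max_{r} \big[\, B(r) - C(r) \,\big],
  \qquad \text{with } B(r^{*}) \geq C(r^{*}),
\]
% where B(r) aggregates the social benefits of enforcement (illicit flows
% deterred or seized) and C(r) the compliance and enforcement costs borne
% by banks, regulators and society at regulation level r.
```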
Current AML-CFT approaches are mostly deterministic and rule-based, and meet only limited success (false positive rates of 95% in some organizations). They generate too many false positives while simultaneously missing large numbers of truly suspicious transactions, i.e. false negatives. AI can reduce false positives and bring greater effectiveness and accuracy to the way AML-CFT risks are monitored. Machine learning models can identify otherwise invisible patterns across large data sets while reducing the volume of false alerts generated, making them well suited to AML-CFT enforcement, as the sketch below illustrates.
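A minimal sketch of that contrast on synthetic data (the feature set, the €10,000 threshold and the choice of an isolation forest are our assumptions, not a description of any deployed system):

```python
# Illustrative sketch only: contrasts a deterministic threshold rule with an
# unsupervised ML detector on synthetic transactions. The features, the
# EUR 10,000 threshold and IsolationForest are our assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic transactions: columns = [amount_eur, tx_per_day, share_cross_border]
normal = rng.normal(loc=[800, 3, 0.10], scale=[400, 1, 0.05], size=(9_900, 3))
# "Structured" suspicious activity: amounts kept just below the threshold,
# but with unusual frequency and cross-border share.
suspicious = rng.normal(loc=[9_500, 12, 0.70], scale=[300, 3, 0.10], size=(100, 3))
X = np.vstack([normal, suspicious])

# Rule-based monitoring: flag any amount above a fixed threshold.
# Auditable, but blind to structuring just below the line and to
# multivariate patterns.
rule_alerts = X[:, 0] > 10_000

# ML-based monitoring: score transactions against the joint distribution of
# all features; IsolationForest labels outliers as -1.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
ml_alerts = model.predict(X) == -1

print(f"rule-based alerts: {rule_alerts.sum()}")
print(f"ML alerts:         {ml_alerts.sum()}")
```

On this synthetic data the fixed threshold misses almost all of the structured transactions, while the multivariate detector flags them from their joint profile.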
However, problems of explainability, together with regulatory uncertainty, are cited as two of the main barriers to implementing AI in AML-CFT systems. The size of recent fines for AML-CFT compliance failures has fostered a culture of conservatism among banks. Financial institutions hesitate to introduce new technologies into AML-CFT processes until they have been fully approved by regulators. AI explainability, which refers to an understanding of a system's operation and/or its decisions, has emerged as a critical requirement. The Villani Report (2018), the European Commission's AI Programme (2018) and its expert group's report (2019), as well as the recent OECD (2019) recommendations on AI, all emphasize that transparency and explainability are essential preconditions to AI deployment. While there is general agreement on the principle, there is little understanding of what explainability means in practice for particular use cases such as AML-CFT. For our research group, these issues represent a set of stimulating challenges whose solutions will be key to the deployment of AI tools in financial regulation.
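One concrete reading of such an "understanding of decisions" is post-hoc feature attribution. The sketch below uses permutation importance on hypothetical labeled alerts (feature names, labels and model are our assumptions, shown only to make the notion tangible):

```python
# Illustrative sketch of one post-hoc explainability technique (permutation
# feature importance); the data, labels and feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
feature_names = ["amount_eur", "tx_per_day", "share_cross_border"]

# Hypothetical labeled alerts: 1 = confirmed suspicious, 0 = false positive.
X = rng.normal(size=(2_000, 3))
y = (0.8 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(scale=0.5, size=2_000)) > 1.0

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffling one feature at a time and measuring the accuracy drop gives a
# model-agnostic account of which inputs drive the model's alerts.
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Whether such an attribution satisfies a regulator's notion of explainability for AML-CFT is precisely the kind of open question this research addresses.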
The research program is divided into three interrelated parts. Part I will develop an economic model of the costs and benefits for the different stakeholders of using AI for AML-CFT enforcement. Part II will identify different explanation scenarios and introduce explainability constraints into the cost-benefit model. Part III will measure the costs and benefits of each scenario using actual data, in order to define an optimal level of regulation for explainability.
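In the same illustrative notation as above (again our own, anticipating rather than reporting the models of Parts II and III), the program amounts to re-solving the cost-benefit problem under an explainability constraint:

```latex
% Illustrative extension of the stylized model with an explainability level e
\[
  (r^{*}, e^{*}) = \arg\max_{r,\, e} \big[\, B(r, e) - C(r, e) \,\big]
  \quad \text{subject to } e \geq \underline{e},
\]
% where \underline{e} is the minimum explainability required by regulators
% (Part II), and B and C are estimated from actual data for each explanation
% scenario (Part III).
```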