Publications and working papers
09/09/20
- blog entry
  • Astrid Bertrand
The ACPR's Guidelines on Explainability: Clarifications and Ambiguities
The French banking regulator, the ACPR (Autorité de contrôle prudentiel et de résolution), recently published a report presenting guidelines on the governance of algorithms in the financial services sector. In this blog post, we analyze these new requirements on AI, focusing on explainability, and review the main clarifications the report brings along with its limits.
06/08/20
- blog entry
  • Winston Maxwell
  • Astrid Bertrand
  • Xavier Vamparys
Current Anti-Money Laundering (AML) Techniques Violate Fundamental Rights and AI Would Make Things Worse
In a new paper, we analyze current AML systems as well as new AI techniques to determine whether they can satisfy the European fundamental rights principle of proportionality, a principle that has taken on new meaning as a result of the European Court of Justice’s Digital Rights Ireland and Tele2 Sverige – Watson cases. The question we address is whether proportionality requirements can be satisfied by AI-powered AML systems. To conduct our analysis, we broke the proportionality test down into its various components and then systematically applied each step of the test, first to current rule-based AML systems and then to AI-enhanced systems. We find that current AML systems fail the proportionality test in five respects. AI makes the failures more acute but does not fundamentally change the reasons for the underlying problems. The one area where AI adds a new problem compared to current systems is algorithmic explainability. Our paper has been selected for presentation at the July 17 ICML2020 Law and Machine Learning workshop.
06/08/20
- conference paper
  • Winston Maxwell
  • Astrid Bertrand
  • Xavier Vamparys
Are AI-Based Anti-Money Laundering Systems Compatible with Fundamental Rights?
In a new paper presented at the July 17 ICML2020 Law and Machine Learning workshop, we analyze current AML systems as well as new AI techniques to determine whether they can satisfy the European fundamental rights principle of proportionality. Here is the abstract:

Anti-money laundering and countering the financing of terrorism (AML) laws require banks to deploy transaction monitoring systems (TMSs) to detect suspicious activity of bank customers and report the activity to law enforcement authorities. Because the monitoring of customer data to detect money laundering interferes with fundamental rights, AML systems must comply with the proportionality test under European fundamental rights law, as most recently expressed by the Court of Justice of the European Union (CJEU) in the Digital Rights Ireland and Tele2 Sverige – Watson cases. To our knowledge, there has been no analysis as to whether AML systems are compliant with the proportionality test as expressed in these latest cases. Understanding how the proportionality test applies to current AML systems is all the more important as banks and regulators consider moving to AI-based tools to detect suspicious transactions. The objective of this paper is twofold: to study whether current AML systems are compliant with the proportionality test, and to study whether a move towards AI in AML systems could exacerbate the proportionality problems. Where possible, we suggest cures to the proportionality problems identified.
02/04/20
- blog entry
  • Winston Maxwell
  • Xavier Vamparys
Netherlands Welfare Case Sheds Light on Explainable AI for AML-CFT
The District Court of The Hague, Netherlands, found that the government’s use of artificial intelligence (AI) to identify welfare fraud violated European human rights because the system lacked sufficient transparency and explainability. As we discuss below, the court applied the EU principle of proportionality to the anti-fraud system and found the system lacking in adequate human rights safeguards. Anti-money laundering/countering the financing of terrorism (AML-CFT) measures must also satisfy the EU principle of proportionality. The Hague court’s reasoning in the welfare fraud case suggests that the use of opaque algorithms in AML-CFT systems could compromise their legality under human rights principles as well as under Europe’s General Data Protection Regulation (GDPR).
02/09/19
- article
  • David Bounie
  • Winston Maxwell
Is Explainability of Algorithms a Fundamental Right?
In a column for “Le Monde”, researchers David Bounie and Winston Maxwell argue that “the demand for transparency on the functioning of algorithms must be addressed with discernment”.