News
23/03/21
- Chair news
Claire Levallois-Barth And Winston Maxwell Are Part Of The Data And AI Ethics Council Created By Orange (La Tribune/Cerclefinance)
Orange announces the creation of its Data and AI Ethics Council, chaired by CEO Stéphane Richard. The mission of this independent advisory body is to support the company's implementation of ethical principles governing the use of data and artificial intelligence technologies.
08/02/21
- Chair news
AI Ethics News: An Interdisciplinary Approach To Ethical AI
The Operational AI Ethics research program at Télécom Paris is launching a newsletter dedicated to the theme of ethical artificial intelligence.
06/08/20
- international conference
New Article Presented At The 2020 ICML Conference Workshop On Law And Machine Learning
During the 2020 International Conference on Machine Learning (ICML), our team was selected to present a new paper, "Are AI-Based Anti-Money Laundering Systems Compatible with Fundamental Rights?", at the Law and Machine Learning Workshop. The article analyzes current AML systems as well as new AI techniques to determine whether they can satisfy the European fundamental-rights principle of proportionality, a principle that has taken on new meaning as a result of the European Court of Justice’s Digital Rights Ireland and Tele2 Sverige – Watson cases. You can read the corresponding blog post via the link below, which will also take you to the full paper available on SSRN.
01/07/20
- Chair news
Xavier Vamparys Appointed Head Of Artificial Intelligence Ethics At CNP Assurances
A visiting researcher at Télécom Paris, Xavier Vamparys has just been appointed Head of Artificial Intelligence Ethics at CNP Assurances. He will lead the group’s multidisciplinary AI ethics committee, responsible for guiding the use of AI within the group, including for fraud detection and anti-money laundering. At Télécom Paris, his research focuses on how AI affects the insurance industry, and in particular the industry’s public-interest missions.
02/04/20
- publications
Netherlands Welfare Case Sheds Light On Explainable AI For AML-CFT
The District Court of The Hague, in the Netherlands, found that the government’s use of artificial intelligence (AI) to identify welfare fraud violated European human rights law because the system lacked sufficient transparency and explainability. As we discuss below, the court applied the EU principle of proportionality to the anti-fraud system and found that it lacked adequate human-rights safeguards. Anti-money laundering/countering the financing of terrorism (AML-CFT) measures must also satisfy the EU principle of proportionality. The Hague court’s reasoning in the welfare fraud case suggests that the use of opaque algorithms in AML-CFT systems could compromise their legality under human rights principles as well as under Europe’s General Data Protection Regulation (GDPR).
21/03/20
- publications
Algorithms: Bias Control - A Report By The Institut Montaigne
Calculating the shortest route on your phone, automatically creating a playlist of your favorite songs, finding the most relevant result via a search engine, selecting the CVs that match a job offer: algorithms help you throughout the day. But what would happen if a recruitment algorithm discriminated, systematically leaving aside women or ethnic minorities? How do we make sure such errors are detected and corrected? Drawing on more than forty interviews, the Institut Montaigne aims to provide tangible solutions to limit potential abuses and restore confidence in algorithms. The report offers a French perspective on an issue that is today mainly treated through an American lens. It extends the paper published in 2019 by Télécom Paris and the Abeona Foundation, “Algorithms: bias, discrimination and equity”.
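
To make the detection question concrete, here is a minimal, hypothetical Python sketch of the kind of audit the report calls for: comparing selection rates across a protected attribute on simulated recruitment decisions. The data, the group labels and the 0.8 "four-fifths" rule-of-thumb threshold are illustrative assumptions, not taken from the Institut Montaigne report.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated recruitment decisions; the sample and the selection
# probabilities are invented for illustration only.
gender = rng.choice(["F", "M"], size=5_000)
selected = rng.random(5_000) < np.where(gender == "F", 0.10, 0.14)

# Selection rate per group, then the disparate impact ratio.
rates = {g: float(selected[gender == g].mean()) for g in ("F", "M")}
ratio = min(rates.values()) / max(rates.values())
print(f"selection rates: {rates}, disparate impact ratio: {ratio:.2f}")

# The 0.8 cutoff is the US "four-fifths" rule of thumb, used here
# only as an example of a concrete, checkable audit criterion.
if ratio < 0.8:
    print("Potential adverse impact: flag the model for review.")
```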
07/12/19
- publications
"Integrating Ethics Into Algorithms Raises Titanic Challenges" (Le Monde)
Two researchers from Télécom Paris, David Bounie and Winston Maxwell, describe for “Le Monde” tangible solutions to address the risks of discrimination that platform algorithms can generate. A growing number of examples in justice, health, education and finance show that artificial intelligence (AI) tools cannot be deployed without controls in security systems or in access to essential resources. Without such safeguards, they could generate potentially discriminatory biases that are difficult to interpret and for which no explanation is provided to end users. The conclusion is becoming increasingly clear: AI must integrate ethics from the design of the algorithms onwards. The ethical performance of an algorithm (absence of discrimination, respect for individuals, etc.) must be included among its performance criteria, alongside the accuracy of its predictions. But integrating ethics into algorithms raises titanic challenges, for five reasons: first, ethical and legal standards are often unclear and do not lend themselves to mathematical formulation […] second, ethics is not universal […] third, ethics is political […] fourth, ethics is economic […] fifth, ethics is temporal.
02/09/19
- publications
Is Explainability Of Algorithms A Fundamental Right?
“The demand for transparency about the functioning of algorithms must be addressed with discernment,” argue researchers David Bounie and Winston Maxwell in a column for “Le Monde”.
14/02/19
- publications
Algorithms: Biases, Discrimination And Equity
Algorithms play an ever-greater role in our daily lives, whether as decision-support algorithms (recommendation or scoring algorithms) or as autonomous algorithms embedded in intelligent machines (autonomous vehicles). Deployed in many sectors and industries for their efficiency, they produce results that are increasingly discussed and disputed. In particular, they are accused of being black boxes and of leading to discriminatory practices linked to gender or ethnic origin. This article aims to describe how biases arise in algorithms and to outline ways to address them. We are particularly interested in how the outputs of algorithms measure up against equity objectives, and in their consequences in terms of discrimination. Three questions motivate this article: through which mechanisms can algorithmic biases occur? Can we avoid them? And, finally, can we correct or limit them? In the first part, we describe how a statistical learning algorithm works. In the second part, we examine the origin of these biases, which can be cognitive, statistical or economic in nature. In the third part, we present some promising statistical and algorithmic approaches that can correct biases. We conclude by discussing the main societal issues raised by statistical learning algorithms, such as interpretability, explainability, transparency and responsibility.
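
As a rough illustration of the kind of equity measurement and correction discussed above (not the specific methods of the paper), the following Python sketch measures a demographic parity gap on simulated scores and applies a naive per-group threshold correction; all data and numbers are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scores for two groups; group 1 is systematically scored lower,
# an invented stand-in for the statistical biases discussed above.
n = 10_000
group = rng.integers(0, 2, size=n)
scores = rng.normal(loc=np.where(group == 1, 0.45, 0.55), scale=0.15)

def selection_rate(scores, group, g, threshold):
    """Share of group g whose score clears the decision threshold."""
    return float((scores[group == g] >= threshold).mean())

# A single global threshold produces unequal selection rates.
t = 0.5
gap = abs(selection_rate(scores, group, 0, t)
          - selection_rate(scores, group, 1, t))
print(f"demographic parity gap at t={t}: {gap:.3f}")

# Naive correction: give group 1 its own threshold so both groups are
# selected at (approximately) the same rate.
target = selection_rate(scores, group, 0, t)
t1 = float(np.quantile(scores[group == 1], 1 - target))
gap_fixed = abs(target - selection_rate(scores, group, 1, t1))
print(f"gap with per-group thresholds: {gap_fixed:.3f}")
```

Per-group thresholding is only one simple post-processing option; pre-processing approaches (reweighing or repairing the training data) and in-processing approaches (training under fairness constraints) are the other usual families of corrections.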