As artificial intelligence becomes ubiquitous in our societies, it is imperative to establish an ethical framework for it. During the “Data Science, bias and ethics” webinar organized by Paris Dauphine-PSL University, Marie Langé presented examples of the biases plaguing AI, explained their causes, and suggested solutions…
Artificial intelligence is the most revolutionary technology of the turn of the millennium. By transforming every industry and weaving itself into our daily lives, AI will disrupt every facet of society.
Work automation, mass surveillance, facial recognition, decision-making support… these are just some of the possibilities offered by the rise of Machine Learning and Data Science. But precisely because artificial intelligence is becoming ubiquitous in our lives, it is imperative to establish an ethical framework for it.
Without this framework, the risks of abuse and the dangers are countless. As sci-fi writers have feared for decades, unchecked AI could increase social inequalities, lead to immoral killings on the battlefield, and even jeopardize the future of humanity.
In order to address these crucial issues for the future, and to discuss possible solutions, Marie Langé spoke during the “Data Science, bias and ethics” webinar organized by Paris Dauphine-PSL University.
Founder of the Data & AI consulting firm AMASAI, Marie Langé is currently Head of the Data offering at Adone Conseil. Her background is in IT and e-commerce, particularly in the luxury sector. The ethics of data science is at the heart of her concerns.
Some edifying examples of bias in AI
In order to set the scene and introduce the subject to the uninitiated, the expert began her talk with a few examples of bias in the field of AI. The first is that of Joy Buolamwini, a researcher at MIT, who realized that facial recognition software did not recognize her. The reason? Her skin color. As soon as she donned a white mask, the AI no longer had any trouble identifying her.
Still illustrating this problem of discrimination, Marie also cites a study carried out on the facial recognition systems of Microsoft and IBM. This study shows that these AIs misgender people of color far more often than white people: while the error rate is just 1% for white men, it reaches 35% for women of color.
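The kind of audit behind such figures can be sketched in a few lines: predictions are broken down per demographic group and the error rate is computed for each. The function and data below are purely illustrative, not taken from the actual study.

```python
# Hypothetical audit sketch: per-group error rates for a classifier.
# Group names and records are invented for illustration only.

def error_rate_by_group(records):
    """records: list of (group, predicted, actual) tuples."""
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Toy predictions from a fictitious gender classifier:
records = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "male", "female"),   # misgendered
    ("darker-skinned women", "female", "female"),
]
rates = error_rate_by_group(records)
# A fair model would show similar rates across groups; a large gap,
# like the 1% vs 35% cited above, signals bias.
```

On this toy data the gap is immediately visible: 0% error for one group, 50% for the other.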
Other examples are mentioned, such as the FaceApp application whitening Barack Obama's skin to make him “sexier”, Google Photos labeling two Black people as gorillas, Samsung's Bixby voice assistant failing to understand female voices, or Google Translate assuming that a woman cannot be a doctor.
Even more worryingly, the COMPAS software, used in the United States to inform prison sentencing, appears convinced that Black people are more likely to reoffend. The observation is clear: at present, artificial intelligence suffers from a real problem of sexist and racial discrimination.
What are the causes of the problem?
To explain this phenomenon of bias, Marie Langé goes back to its origin: the development of the Machine Learning models on which AI systems are based. This is an opportunity to better understand how the technology works, and how a lack of data on minorities during the training phase can have dire consequences in production…
Other sources of the problem are also illustrated with concrete examples, such as a lack of context, societal bias, or poor data selection in the machine learning process. In general, although errors in judgment can have different causes, these varied examples demonstrate that AI currently perpetuates human cognitive biases.
Through the data, or through the people who design them, artificial intelligences inherit the biases already poisoning human relationships in many countries. Worse still, they tend to amplify these biases, and can therefore further tear the social fabric.
Solutions for a more ethical AI
Faced with this gloomy picture, Marie Langé outlines possible solutions to this problem plaguing AI. First and foremost, it is essential to make algorithms explainable and avoid black-box operation, a challenge that industry giants such as IBM and Google Cloud are tackling.
The specialist mentions other avenues, such as procedures for qualifying data sets, fighting the lack of diversity among AI designers, educating users, calculating and subtracting bias from the data, and using bias-detection tools such as IBM AI Fairness 360.
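As an illustration of what such tools measure, here is a minimal sketch of one common fairness metric, the statistical parity difference: the gap in favorable-outcome rates between an unprivileged and a privileged group. The plain-Python implementation and the data below are invented for illustration; toolkits like AI Fairness 360 provide this metric among many others, with a much richer API.

```python
# Minimal sketch of the statistical parity difference metric.
# Group labels "A" and "B" and the outcomes are invented examples.

def statistical_parity_difference(outcomes, groups, unprivileged, privileged):
    """outcomes: 1 = favorable decision (e.g. loan granted), 0 = unfavorable."""
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(unprivileged) - rate(privileged)

outcomes = [1, 0, 0, 1, 1, 1]
groups   = ["A", "A", "A", "B", "B", "B"]
spd = statistical_parity_difference(outcomes, groups, "A", "B")
# A value near 0 suggests parity; here group A receives the favorable
# outcome 1/3 of the time versus 3/3 for group B, a clear disparity.
```

A strongly negative value flags that the unprivileged group receives the favorable outcome far less often, which is exactly the kind of signal these audit tools surface.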
Awareness is already growing, and these different avenues for improvement are starting to be explored. The GAFAM have formed a consortium on AI ethics, universities offer courses on the subject, and associations are calling for greater transparency. The European Commission is also preparing a legal framework in line with the GDPR.
However, faced with the announced omnipresence of artificial intelligence, it is essential that these measures be adopted in a comprehensive and systematic manner. Yet China and the United States seem far less concerned by the ethical problems of AI than the European Union…
To learn everything about the subject, you can watch the “Data Science, bias and ethics” webinar at this address. After her presentation, Marie Langé also answered highly relevant questions from the audience.
Even before working as a consultant, Marie Langé began questioning the ethics of AI during her training at Paris Dauphine-PSL University. She is an alumna of the Executive Master Statistics & Big Data at Dauphine Executive Education, class of 2019. This continuing-education program trains Data Science experts while making ethics a point of honor. For more information, visit the official Paris Dauphine-PSL website at this address.