Fighting the sexist discourse of algorithmic biases

28 February 2022
in Tech
The startup Unbias has given itself the mission of equipping the designers of ethical artificial intelligence. © Unbias

On March 3, the Village By CA Provence Côte d'Azur is organizing a conference and round table on "Natural language, artificial intelligence and the fight against sexist discourse", in partnership with Futura-Sciences and the ClusterIA Côte d'Azur. Among the participants is Unbias, a startup that created the first model to "debias" ordinary sexism in French-language artificial intelligence (AI). Daphné Marnat, co-founder and director of Unbias, tells us about the issues at stake and her methods.

Futura: What is algorithmic bias?

Daphné Marnat: It is true that what we call bias is a bit of a catch-all concept covering many different definitions and meanings. Speaking as an anthropologist, I would say they are shortcuts taken by the brain that lead to conclusions or decisions skewed from reality. They sit at the heart of our representations, often implicitly, so it is often difficult to be aware of them and to avoid them. They materialize in our language production, text in particular, and are mechanically reproduced by artificial intelligence. Anthropology and ethnology have always questioned and analyzed these biases, because they can influence the observation of behavior and social practices and give a subjective reading of what should be an objectified reality. From the point of view of information science, bias is fundamental: without it, neither the brain nor machines could tell the difference between two pieces of information. But on the question of discrimination, I retain the anthropological definition. In machine learning, these biases can be cognitive, coming from the data scientists themselves, that is, a mechanism of thought that alters judgment; or statistical, because they lodge in the training data, and the machine-learning process tends to amplify them.
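
To make the idea of a statistical bias lodged in training data concrete, here is a minimal sketch, not Unbias's tooling: it counts how often occupation words co-occur with masculine versus feminine pronouns in a corpus. The toy French sentences are invented for illustration; a real audit would run over the actual training corpus.

```python
import re
from collections import Counter

# Invented toy corpus standing in for real training data.
corpus = [
    "Le médecin a dit qu'il reviendrait demain.",
    "Le médecin pense qu'il a fait le bon choix.",
    "L'infirmière a dit qu'elle reviendrait demain.",
]

MASCULINE = {"il", "lui"}
FEMININE = {"elle"}
counts = Counter()

for sentence in corpus:
    tokens = set(re.findall(r"\w+", sentence.lower()))
    for occupation in ("médecin", "infirmière"):
        if occupation in tokens:
            counts[(occupation, "masc")] += len(tokens & MASCULINE)
            counts[(occupation, "fem")] += len(tokens & FEMININE)

# In this toy corpus "médecin" only ever appears with masculine pronouns:
# a model trained on it will learn, and likely amplify, that association.
print(counts)
```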

Futura: What harmful impacts can these biases have in artificial intelligence?

Daphné Marnat: The danger of algorithmic bias is that it can lead to distorted results and generate forms of discrimination. Tay, the chatbot created by Microsoft in 2016 and trained on text corpora from Twitter, was for example suspended after barely a day for having generated misogynistic, racist, or anti-Semitic messages, calling feminism a cancer, among other things. Some recruitment platforms over-represent technical positions for men and, conversely, advertisements for care professions for women. There are also the racial biases of facial-recognition algorithms which, having mostly been trained on light-skinned faces, struggle to recognize dark-skinned ones. Another example: online translation tools systematically render "nurse" in the feminine ("infirmière"), even when the sentence makes clear that it refers to a man.
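
The translation example above lends itself to a simple probe. The sketch below is hypothetical: `translate` is a placeholder for whatever machine-translation system is under test, and the stub at the bottom merely simulates a biased one.

```python
from typing import Callable

def probe_gendered_translation(translate: Callable[[str], str]) -> None:
    # English sentences whose subject's gender is explicit, paired with the
    # French form of "nurse" a faithful translation should contain.
    cases = [
        ("My brother works as a nurse.", "infirmier"),
        ("My sister works as a nurse.", "infirmière"),
    ]
    for sentence, expected in cases:
        output = translate(sentence)
        verdict = "ok" if expected in output.lower() else "possible bias"
        print(f"[{verdict}] {sentence!r} -> {output!r}")

if __name__ == "__main__":
    # Stub standing in for a real MT system, always producing the feminine.
    def biased_mt(sentence: str) -> str:
        return "Ma sœur travaille comme infirmière."
    probe_gendered_translation(biased_mt)  # flags the first case
```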

Futura: So why did you choose the subject of sexism?

Daphné Marnat: Declarations of intent on artificial intelligence, such as the Villani report, are multiplying, but few technological tools are being produced to let data scientists take concrete action. Unbias was therefore born from the desire to act and to provide the technical means for developing an ethical AI. That is why I, an anthropologist trained in innovation, joined forces with Benoît dal Ferro, a data scientist. We had to prove ourselves, so we might as well tackle the hardest problem! To begin with, we chose to produce a proof of concept on ordinary sexism in the French language. While hostile sexism is easy to identify because it is expressed through words and insults, ordinary sexism is harder to deal with because it is lodged in representations, in the symbolic, such as associating a skill with a gender. To process it, machines must therefore be able to decode concepts, this symbolism hidden in language. We then chose French which, like any Romance language, is heavily gendered, with a debated but established grammatical rule: the masculine prevails over the feminine. The aim is to achieve an epicene language, that is, one that does not discriminate by gender.

Futura: How can it be fixed?

Daphné Marnat: Fighting sexism in AI obviously means diversifying teams, both in gender and in skills. According to UNESCO, 22% of AI professionals worldwide are women. And I would say that artificial intelligence is not just a matter of data science and algorithms; other sciences have a lot to bring to it. The solution can also be technical. We created an algorithm to measure feminine and masculine representations in models or corpora, and to watch how they evolve as learning progresses. This lets data scientists assess whether their model tends to favor feminine or masculine representations, and therefore to make potentially sexist decisions. It makes particular sense for models that score customer profiles, match job ads with candidates, or perform machine translation. We are also training a machine to write in epicene language. The ambition is high, because even humans have difficulty doing this; we have no examples to show the machine. Our only means is algorithmic power, mathematics in a way. Understanding sexism is not so simple, which is why, in building our algorithms, we give great importance to the human sciences: linguists, philologists, and sociologists who have been working on these issues for a long time and can guide us down the right paths.
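
Unbias does not publish its algorithm, but a minimal sketch in the spirit of published embedding-bias metrics (such as WEAT) gives the flavor of such a measurement: project occupation vectors onto a gender axis built from pronoun anchors. The random vectors below are stand-ins; a real audit would load the embeddings of the model being measured, and repeating the measurement at training checkpoints would show its evolution as learning progresses.

```python
import numpy as np

rng = np.random.default_rng(42)
DIM = 50
WORDS = ["il", "elle", "ingénieur", "infirmière", "médecin", "secrétaire"]
# Hypothetical embedding table keyed by word (random stand-in vectors).
emb = {w: rng.normal(size=DIM) for w in WORDS}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Gender axis: masculine anchor minus feminine anchor.
gender_axis = emb["il"] - emb["elle"]

for word in ["ingénieur", "infirmière", "médecin", "secrétaire"]:
    lean = cosine(emb[word], gender_axis)  # > 0 leans masculine, < 0 feminine
    print(f"{word:12s} lean: {lean:+.3f}")
```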

Futura: How is your solution perceived by companies and administrations that use artificial intelligence?

Daphné Marnat: The machine can help implement the change that most major institutional and economic players claim to be initiating, particularly through their CSR policies. We are also always well received by companies and administrations, especially since our product can be deployed as an add-on, a software component added to an existing system, very light in cost and without risk for the models tested. But while the intention is there, implementation lags behind. I think that without a legal obligation we will remain in a state of inertia because, faced with their industrial and development realities, fighting these forms of discrimination at the algorithmic level is not a priority. We are told it would be an additional constraint, a brake on innovation. We experienced quite the opposite, having developed under constraint: few corpora available, a tight computing budget, little computing power, off-the-shelf hardware, and strong ecological requirements. Far from holding us back, constraint stimulated our innovation process; by being more model-centric than data-centric, in particular, it forced us to think outside the box. We have clearly seen how the GDPR finally cleaned up the market for the use of personal data. Admittedly, it limited a lot of business models, but others were born, more virtuous and just as stimulating in terms of innovation and performance.

Futura: Can your approach ultimately prove to be an opportunity to change attitudes toward discrimination?

Daphné Marnat: Our solution provides a technical answer, but the structural problem remains. Mathematics is not in itself discriminatory. With this measurement and correction tool, artificial intelligence actors can reveal and better understand the social problem, a bit like a spell checker that highlights our errors. It could genuinely change mindsets, especially among data scientists and developers. We do not claim to solve the problem, but our tools can help uncover leads we would not have thought of and thus move the issue forward.

For those in the PACA region, come and take part in the debate: "Natural language, artificial intelligence and the fight against sexist discourse".
