AI mirrors the way its creators understand the world, and can inherit their biases. Photo: Unsplash/Andy Kelly
Artificial intelligence suffers from some human flaws

Last month, Facebook’s parent company, Meta, unveiled its most advanced AI chatbot to date. BlenderBot 3, as the bot is known, can search the internet to talk to people about almost anything, and it has capabilities related to personality, empathy, knowledge, and long-term memory.
BlenderBot 3 is also good at promoting anti-Semitic conspiracy theories, claiming that former US President Donald Trump won the 2020 election, and calling Meta CEO and Facebook co-founder Mark Zuckerberg “creepy”.
This isn’t the first time an AI has gone rogue. In 2016, Microsoft’s chatbot Tay took less than 24 hours to turn into a right-wing fanatic on Twitter, posting racist and misogynistic tweets and praising Adolf Hitler.
Both experiments illustrate the fact that technologies like AI are just as vulnerable to corrosive biases as the humans who build them and interact with them. This is an issue of particular interest to Carlien Scheele, director of the European Institute for Gender Equality, who says AI may pose new challenges to gender equality.
Scheele says women make up more than half of Europe’s population but only 16 per cent of its AI workers. If AI does not reflect society’s diversity, she says, it “will cause more problems than it solves”, adding that limited representation in AI creates data sets with built-in biases that can perpetuate gender stereotypes.
A recent experiment in which robots were trained on popular artificial-intelligence algorithms underscores this point. The robots consistently associated terms such as “janitor” and “housewife” with images of people of color and women, according to a report by The Washington Post.
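How a skewed data set produces the kind of biased associations described above can be shown with a toy sketch. The corpus, words, and counting function below are entirely hypothetical, a minimal illustration of the mechanism rather than a reconstruction of the experiment: if occupation words co-occur overwhelmingly with one gender in the training data, any model trained on that data inherits the skew.

```python
from collections import Counter

# A deliberately skewed toy corpus: hypothetical training sentences in which
# occupation words co-occur far more often with one gender pronoun than the other.
corpus = [
    "the engineer fixed the server he was praised",
    "the engineer wrote the patch he shipped it",
    "the housewife cleaned the kitchen she was tired",
    "the housewife cooked dinner she was busy",
    "the engineer debugged the code she was praised",
]

def gender_association(word: str) -> Counter:
    """Count which gender pronouns appear in sentences containing `word`."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        if word in tokens:
            counts["he"] += tokens.count("he")
            counts["she"] += tokens.count("she")
    return counts

# A model trained on this corpus inherits the imbalance in the data:
# "engineer" leans male, "housewife" co-occurs only with "she",
# purely because of how the corpus was assembled.
print(gender_association("engineer"))   # he: 2, she: 1
print(gender_association("housewife"))  # she: 2
```

Nothing in the counting logic is biased; the bias lives entirely in the data, which is why limited representation among the people assembling data sets matters.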
Scheele says two challenges must be addressed: the immediate task of reducing the biases that can be incorporated into AI, and the longer-term issue of how to increase the diversity of the AI workforce.
To counter AI bias, the European Union has proposed new legislation in the form of the Artificial Intelligence Act. One of its provisions suggests that AI systems used to help recruit, promote, or assess workers should be classified as “high risk” and subject to third-party evaluation.
The reasoning behind the clause is that AI can perpetuate “historical patterns of discrimination” while workers’ career prospects hang in the balance. Scheele supports the legislation, saying it could help women pursue their career aspirations by reducing discrimination in AI.
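One concrete check a third-party evaluator might run on a hiring system is a disparate-impact test. The sketch below is an assumption about what such an audit could look like, not a procedure taken from the Act: it computes the ratio of selection rates between two groups and flags the model when the ratio falls below the “four-fifths” rule of thumb commonly used in employment-discrimination analysis. The model outputs shown are invented.

```python
def selection_rate(outcomes):
    """Fraction of candidates a screening model marked as selected (1)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Under the common 'four-fifths' rule of thumb, a ratio below 0.8
    is treated as evidence of adverse impact."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical outputs of a hiring model (1 = advanced to interview).
women = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # 20% selected
men   = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]   # 50% selected

ratio = disparate_impact_ratio(women, men)
print(f"impact ratio: {ratio:.2f}")
print("flag for review" if ratio < 0.8 else "no adverse impact detected")
```

A model with this profile would fail the check even if every individual decision looked defensible in isolation, which is the point of requiring an external, aggregate evaluation.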
She says that measures like the law can address biases and discrimination in the short term, but that boosting women’s representation in AI over the long term is just as important. A first step in this direction, Scheele says, will be to support women’s pursuit of STEM education by challenging lazy and counterproductive stereotypes. Without deliberate efforts on the gender-integration front, she says, “male-dominated domains will remain male-dominated.”
She also says that companies and other entities using AI should encourage greater representation of women to ensure a “full range of perspectives”, because a more inclusive outlook fosters the skills, ideas, and innovations that measurably benefit their performance.
Increasing the proportion of women working in AI is critical, says Abby Seneor, chief technology officer of the Spanish social-data platform Citibeats, because when AI systems are developed, it is humans who “decide whether the output of this algorithm is true or false”. She says that engaging people who not only have the right qualifications but can also identify biases is crucial.
The Open-Source Community
Another way to address bias in AI is to share AI models with others, Seneor says, referring to the “ethical AI community” of like-minded organizations that Citibeats works with.
Citibeats provides input to governments by measuring public sentiment on various issues, monitoring social-media content with natural language processing and machine learning. It shares data with other organizations so that it and its collaborators can test AI models and report potential biases or errors to developers.
For example, a team that develops an AI model to scan images and determine people’s gender may find that it works well only for one part of the world. By sharing the model with organizations elsewhere, the team can test it on images of a broader group of people, making the model more effective.
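The value of that kind of pooled testing is that a single aggregate accuracy number can hide a model that fails for everyone outside its home region. The sketch below is a minimal, invented illustration of disaggregated evaluation; the labels, predictions, and group names are placeholders, not Citibeats data.

```python
def accuracy(preds, labels):
    """Overall fraction of correct predictions."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def accuracy_by_group(preds, labels, groups):
    """Break accuracy down by group, so a model that performs well for
    only one population cannot hide behind a single aggregate number."""
    by_group = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        by_group[g] = accuracy([preds[i] for i in idx], [labels[i] for i in idx])
    return by_group

# Hypothetical predictions from an image model, evaluated on a pooled
# test set contributed by partner organizations in two regions.
labels = ["f", "m", "f", "m", "f", "m", "f", "m"]
preds  = ["f", "m", "f", "m", "m", "f", "m", "f"]
groups = ["home", "home", "home", "home", "other", "other", "other", "other"]

print(accuracy(preds, labels))                   # 0.5 overall
print(accuracy_by_group(preds, labels, groups))  # home: 1.0, other: 0.0
```

The 50 per cent headline figure looks like a mediocre model; the per-group breakdown reveals something worse, a model that is perfect at home and useless everywhere else, which only becomes visible once partners elsewhere contribute test images.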
Seneor says creating unbiased AI is not just a job for practitioners but also for policymakers, who she says need to keep pace with the technology and would benefit from more engagement with people involved in AI at a hands-on level.
Stanford University seeks to promote this kind of engagement: last month it invited staff from the US Senate and House of Representatives to an “Artificial Intelligence Training Camp”, where AI experts explained how the technology will affect security, healthcare, and the future of work.
Seneor also supports more regulation of big tech companies involved in AI, such as DeepMind, owned by Google parent Alphabet, because the algorithms they create affect millions of people. “With great power comes great responsibility,” she says.
Regulation could force big tech companies to be open about how their AI works and how it might change. It could also require AI models to be tested with greater transparency, a significant departure from the secretive way companies in the field currently operate. Seneor says companies are incorporating AI into products everyone uses, yet users “have no idea what’s going on inside”.
Artificial Intelligence in the Gig Economy
The European Institute for Gender Equality says the gig economy is one area where AI can lead to unfair outcomes for women. On platforms such as Uber and Deliveroo, AI algorithms often determine workers’ schedules, according to a report the institute published earlier this year. The algorithms use data such as employment history, shift changes, absences, and sick leave to allocate new tasks and evaluate performance, which can lead to unequal treatment of women, whose work histories may be interrupted by maternity and other obligations. In a survey of 5,000 gig workers, the institute found that one in three balances the work with family and domestic responsibilities.
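The mechanism the report describes can be made concrete with a deliberately naive sketch. The scoring function and worker records below are invented for illustration, not taken from any platform: an allocation score that rewards continuous availability and penalizes absences, without asking why the absences occurred, will systematically rank a worker returning from maternity leave below an otherwise comparable colleague.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    shifts_last_year: int
    absences: int  # in this naive model, maternity and carer's leave count the same as no-shows

def naive_priority(w: Worker) -> int:
    """A deliberately naive allocation score of the kind a gig platform's
    scheduling algorithm might use: it rewards continuous availability and
    penalizes absences regardless of their cause."""
    return w.shifts_last_year - 2 * w.absences

workers = [
    Worker("A", shifts_last_year=200, absences=2),
    Worker("B", shifts_last_year=160, absences=40),  # e.g. returning from maternity leave
]

# Tasks are offered in priority order, so worker B is pushed to the back
# of the queue for reasons unrelated to job performance.
ranked = sorted(workers, key=naive_priority, reverse=True)
print([w.name for w in ranked])
```

Nothing in this code is explicitly gendered, which is exactly the report’s point: facially neutral features like “absences” encode historical patterns that fall more heavily on women.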
Scheele says that while addressing unfair AI is key, governments can play a role in creating a gig economy that works for women by ensuring workers have access to a robust social-security system. Providing health and life insurance, pensions, and maternity support, she says, can give women the “psychological safety of knowing there is a net to catch them” if something unexpected happens in the course of gig work.
As the world continues its digital transformation, breakthrough developments in technology are looming on the horizon, offering great potential to improve people’s lives. But it is important to realize that technology is never completely neutral, and that biases and discrimination can be woven into it as deeply as into any other human creation.
This is why it is so important that technological development, of which artificial intelligence is an increasingly significant part, be guided by considerations of non-discrimination and fairness if the growing momentum behind digital innovation is to pay off equitably.