Many people assume that because computers and artificial intelligence (AI) involve impersonal machines, they could not possibly contain any hidden bias. This is not the case at all, and these AI algorithm biases have crucial, life-altering ramifications in a number of areas. This article will identify the sources of this bias, explain why it occurs, discuss its consequences, and suggest ways to eliminate it, or at least to limit it severely.
In an article entitled “5 unexpected sources of bias in artificial intelligence,” Kristian Hammond delineates the following bias sources:
· Data-driven bias: For any system that “learns,” the output is determined by the data it receives. Many assume that a massive volume of examples will wash out human bias, but that advantage disappears if the training set itself is skewed (see the sketch after this list).
· Bias through interaction: Some systems learn through interaction with users, so the biases of those users can enter the system.
· Emergent bias: The decisions made by systems that specialize in personalization, such as Facebook, can create bias “bubbles”: the system ends up automatically shielding users from information that conflicts with their existing beliefs.
· Similarity bias: Another type of bubble can emerge when a system produces a set of stories that all confirm one another, eliminating the creativity and innovation that conflicting points of view can spark.
· Conflicting-goals bias: These biases come into play in systems whose algorithms serve up whatever information yields the highest number of clicks. Hammond gives the example of job descriptions that match existing stereotypes but that may not be optimal for the job seeker.
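To make the data-driven point concrete, here is a minimal sketch, using invented toy records rather than anything from the articles cited above, of how a system that simply “learns” hiring decisions from historical data reproduces whatever skew that data contains:

```python
# A minimal sketch of data-driven bias, using invented toy records.
# The learning rule is neutral; the skew comes entirely from the data.
from collections import Counter

# Hypothetical historical hiring records: (group, qualified, hired).
# The set is skewed: equally qualified candidates from group B were hired less often.
history = (
    [("A", True, True)] * 90 + [("A", True, False)] * 10 +
    [("B", True, True)] * 40 + [("B", True, False)] * 60
)

# "Learning" here is just estimating P(hired | group, qualified) from the records.
totals = Counter((g, q) for g, q, _ in history)
hires = Counter((g, q) for g, q, h in history if h)
model = {key: hires[key] / totals[key] for key in totals}

def predict(group: str, qualified: bool) -> float:
    """Hiring probability learned purely from the historical records."""
    return model[(group, qualified)]

# Equally qualified candidates receive very different scores,
# because the model faithfully reproduces the skew in its training set.
print(predict("A", True))  # 0.9
print(predict("B", True))  # 0.4
```

The arithmetic itself is neutral; the bias lives entirely in the skewed examples the system was given to learn from.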
As to why these biases exist, Nanette Byrnes, in an article entitled “Why We Should Expect Algorithms to Be Biased,” quotes Fred Benenson, Kickstarter’s former data chief: “Algorithm and data-driven products will always reflect the design choices of the humans who built them, and it’s irresponsible to assume otherwise.” When algorithms attempt to understand word meaning by reading masses of human-written text, they tend to adopt stereotypes very similar to our own. Another false assumption is what Byrnes describes as “mathwashing”: our tendency to glorify programs like Facebook’s as completely objective simply because they have mathematics at their core. Still other biases can emerge from the fact that the majority of the programmers creating these systems are male.
Put simply, Joanna Bryson, a computer scientist at the University of Bath and Princeton, states that “AI is just an extension of our existing culture” and, further, that “algorithms may be unequipped to consciously counteract learned biases.” Complicating matters, systems make choices on the basis of underlying assumptions that are not clear even to the systems’ creators, so it is not always possible to determine which algorithms are biased and which are not.
What, then, are the consequences of these AI biases, and how important are they? The hidden biases involve factors such as race, gender, and age. In the Word-Embedding Association Test (WEAT) that Bryson and her colleagues developed, in which words are represented as numeric vectors, white names were associated with pleasant terms and African-American names with unpleasant ones; men were associated with math, science, and work, while women were linked with family and the arts. AI algorithm biases can affect decisions on hiring, receiving a mortgage, being issued a credit card, obtaining advantageous deals on items such as cell phones, ranking teachers, predicting recidivism rates for parolees, and even identifying terrorists.
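As a rough illustration of how an association test of this kind works, the sketch below computes a WEAT-style effect size from cosine similarities between word vectors. The two-dimensional vectors and tiny word sets are invented for illustration; they are not the actual embeddings or word lists used in the published test.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """Mean similarity of word w to attribute set A minus its mean similarity to set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT-style effect size: how differently target sets X and Y associate
    with attribute sets A (e.g. pleasant words) and B (e.g. unpleasant words)."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

# Invented two-dimensional "embeddings" for illustration only.
pleasant   = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]
unpleasant = [np.array([0.1, 1.0]), np.array([0.2, 0.9])]
names_x    = [np.array([0.8, 0.3]), np.array([0.9, 0.1])]  # vectors near the "pleasant" region
names_y    = [np.array([0.3, 0.8]), np.array([0.1, 0.9])]  # vectors near the "unpleasant" region

print(weat_effect_size(names_x, names_y, pleasant, unpleasant))  # large positive effect size
```

A positive effect size means the first set of target words sits closer to the “pleasant” attribute words than the second set does, which is the pattern Bryson and her colleagues reported for white versus African-American names.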
Given the wide societal ramifications of AI biases, questions have to be asked and solutions offered. Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, has suggested that an AI watchdog be established. Bryson recommends adding an extra layer of human judgment to decide how to act on such biases. In “Fighting Algorithm Bias and Homogenous Thinking in A.I.,” Mariya Yao mentions Timnit Gebru, who co-founded the Black in AI community. Gebru cites negative factors such as the apolitical and anti-immigrant tendencies of the AI industry, and the need to expand the parameters of what counts as diversity.
Aware of the many, often subtle challenges of AI bias and the trickiness of language, we must embark on a campaign to identify and limit hidden biases, knowing what is at stake. As with anything else, knowledge is power.
—
Originally Published on LinkedIn
—
Photo Credit: Pixabay