On May 25th, 2020, an African American man named George Floyd was killed in Minneapolis, Minnesota, sparking widespread condemnation and civil unrest not just in the US but in many other parts of the world. Racial inequality is nothing new, but technology has completely transformed the way such incidents are reported and the way we respond to them day to day.
For many people, watching another human being lose his life in such an inhumane way was deeply traumatic. That trauma was exacerbated by the global COVID-19 pandemic, which had put many countries into lockdown. It felt as though, for the first time, the whole world was witnessing the ugliest parts of humanity at the same moment. There was outrage.
There were calls for action. Many people took it upon themselves to be proactive, to educate themselves, and to march and protest against inequality. But just as technology and AI played a central role in galvanizing the #BlackLivesMatter movement, the practical and ethical merits of digital tools were once again brought into question. Here are three things we learned:
1. Algorithms can perpetuate trauma
On the day of the incident, Floyd was reportedly placed under arrest on suspicion of using a counterfeit note at a local convenience store. It emerged that a team of four officers was involved in restraining him, even though nearby CCTV footage did not suggest that Floyd was non-compliant or aggressive at any point.
For the next few days, we were bombarded by the image of Derek Chauvin, a white policeman, kneeling on the neck of a gasping George Floyd. The team of officers at the scene also included a man of Asian descent and a man of Afro-Caribbean descent.
So what part did AI play in all of this? Social media algorithms are designed to prioritize what we see based on relevancy as opposed to chronology. This type of sorting is typically helpful because it delivers more of the content that you engage with, rather than random posts that may or may not be of interest.
But could it be argued that in this instance there may have been a downside to the algorithm?
Many people complained of being overwhelmed by over-exposure to the graphic images that told the story of Floyd's final moments. These images clogged our timelines and newsfeeds, and there was simply no escape. Twitter rage evolved into street protests, and then into violence, in major cities including London, Paris, and New York.
Activism and social justice have played an important role in creating a fairer, more equal society for all. And whilst technology remains a vital tool in these efforts, many people are becoming increasingly aware of the need to safeguard against overexposure that can have a lasting impact on mental health and wellness.
2. Our human rights can be impacted by AI
On August 7th, the New York Police Department sent a large team of officers, including some in riot gear, to the home of 28-year-old activist Derrick Ingram. He had been accused of assault after allegedly shouting into a police officer's ear with a bullhorn. A standoff ensued, live-streamed on Instagram, during which Ingram repeatedly asked officers to produce a search warrant, a request they were unable or unwilling to grant.
When a crowd of Ingram's supporters began to congregate at his address, the NYPD stood down, and Ingram turned himself in to police custody the following day. It later emerged that the NYPD had used facial recognition software to track down the Black Lives Matter activist from an Instagram post he had previously shared.
The right to come together and peacefully express our views is a basic constitutional and human rights provision in modern society. The state should not interfere with this right simply because it disagrees with a particular stance.
With images from the protests being widely shared on social media to raise awareness, some police departments took the opportunity to add the people featured to their facial recognition databases. Automatic identification of individuals involved in the #BlackLivesMatter campaign led to subsequent arrests in an effort to suppress protest activity in several US cities.
In a further development, researchers from Stanford created an AI-powered bot that automatically covers up the faces of #BlackLivesMatter protesters in photos. This approach replaces the faces of protestors with a black fist emoji that has become a symbol of the #BlackLivesMatter movement. The hope is that such a solution will be built into social media platforms, but currently, there has been no indication from the tech giants that this type of feature is on the horizon.
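The idea behind such a tool can be sketched in a few lines. The helper below is a hypothetical simplification: it assumes face detection has already been run by an upstream, off-the-shelf detector, and it simply blanks out each detected region, whereas the actual tool overlays a fist-emoji graphic using a trained model:

```python
def cover_faces(image, face_boxes, fill=(0, 0, 0)):
    """Cover detected face regions before an image is shared.

    image: a list of rows, each row a list of (r, g, b) pixel tuples.
    face_boxes: (x, y, width, height) rectangles from an upstream face
    detector (the detection step itself is out of scope for this sketch).
    fill: the color painted over each face; the real tool pastes a
    fist-emoji graphic here instead of a solid block.
    """
    height, width = len(image), len(image[0])
    for x, y, w, h in face_boxes:
        for row in range(y, min(y + h, height)):
            for col in range(x, min(x + w, width)):
                image[row][col] = fill
    return image
```

Running anonymization like this at upload time, rather than after the fact, would stop the original faces from ever reaching a platform's servers, which is one reason advocates want it built into the social media apps themselves.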
3. Machines can learn to be racist
In 2017, a video went viral of an automatic soap dispenser that would only release soap onto white hands. The flaw occurred because the product had not been tested on darker skin tones. A study in March 2019 found that the pedestrian-detection systems used in driverless cars are less accurate at recognizing people with darker skin, making those cars more likely to drive into Black pedestrians, again because the technology is trained and tested primarily on lighter skin tones.
Artificial intelligence can only be founded on human intelligence. Humans program the machines to behave in a certain way which means they may be passing on their unconscious biases to the software.
The tech and computer industry is still overwhelmingly dominated by white men. In 2016, there were ten large tech companies in Silicon Valley that did not employ a single black woman.
When there is a lack of diversity in the room, the machines end up learning the same biases and internal prejudices as the majority who develop them.
The social media algorithms, facial recognition, and digital tools utilized in the aftermath of #JusticeForGeorgeFloyd and #BlackLivesMatter have highlighted the need to address machine bias. One emerging initiative is the field of FAT/ML (Fairness, Accountability, and Transparency in Machine Learning).
Its aim is to create standards for building better algorithms: standards that can help regulators and others uncover the undesirable and unintended consequences of machine learning, and contribute to the kind of fair and equal society we would all like to be part of.
—
This article originally appeared on unleashgroup.io and is republished on Medium.