People of color have a new enemy: techno-racism
As protesters take to the streets to fight for racial equality in the United States, digital tech experts quietly tackle a lesser-known but related injustice.
It’s called techno-racism. And while you might not have heard of it, it’s built into some of the tech we come across every day.
Digital technologies used by government agencies and private companies can unintentionally discriminate against people of color, making techno-racism a new and crucial part of the battle for civil rights, experts say.
“It’s not just the physical streets. Black people must now fight the struggle for civil rights on the virtual streets, on these algorithmic streets, on these Internet streets,” said W. Kamau Bell, host of the CNN Original Series “United Shades of America.” Bell explores this digitized form of racism in tonight’s episode, which focuses on the role of race in science, technology, and related fields.
We spoke with experts to better understand what techno-racism is and what you can do about it.
What is techno-racism?
Techno-racism describes a phenomenon in which the racism experienced by people of color is encoded in the technical systems we use in our daily lives, says Mutale Nkonde, founder of AI For People, a nonprofit that educates black communities on artificial intelligence and social justice.
The term dates back to at least 2019, when a member of a Detroit civilian police board used it to describe glitchy facial recognition systems that misidentified black faces.
Last year the term gained new prominence as the title of a webinar featuring Tendayi Achiume, a United Nations special rapporteur on racism, based on a report she wrote. Achiume and other experts say digital technologies can implicitly or explicitly exacerbate existing prejudices about race, ethnicity and national origin.
“Even when developers and users of technology don’t intend to discriminate, the technology often does anyway,” Achiume told the UN Human Rights Council last year. “Technology is neither neutral nor objective. It is fundamentally shaped by the racial, ethnic, gender and other inequalities that prevail in society, and it generally exacerbates those inequalities.”
Or in other words, as Bell says in Sunday’s “United Shades” episode:
“Feed a bunch of racist data, gathered from a long racist history … and what you get is a racist system that treats the racism put there as the truth.”
So facial recognition systems are an example?
Yes, they can be.
Facial recognition technology uses software to identify people by matching images, such as faces captured in surveillance video, against mug shots in a database. It is a major resource for law enforcement agencies searching for suspects.
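As a rough illustration of the matching step described above: real systems use deep-learning models to reduce a face image to a numeric feature vector (an "embedding"), then declare a match when the distance to a database entry falls below a threshold. The embeddings, names, and threshold in this sketch are all invented for illustration; this is not any vendor's actual algorithm.

```python
import math

def euclidean(a, b):
    """Distance between two feature vectors; smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(probe, database, threshold=0.6):
    """Return the closest database entry, or None if nothing is close enough."""
    name, dist = min(((n, euclidean(probe, v)) for n, v in database.items()),
                     key=lambda item: item[1])
    return name if dist <= threshold else None

# Hypothetical embeddings standing in for mug-shot photos.
database = {"person_a": [0.1, 0.9, 0.3], "person_b": [0.8, 0.2, 0.5]}
probe = [0.12, 0.88, 0.31]  # face cropped from surveillance video

print(best_match(probe, database))
```

The key design choice is the threshold: set it too loosely, or train the embedding model on unrepresentative data, and faces of unrelated people land close enough together to "match."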
But research has shown that some facial analysis algorithms misidentify black people, an issue explored in the Netflix documentary “Coded Bias.” The American Civil Liberties Union describes facial surveillance as “the most dangerous of many new technologies available to law enforcement” because it can be racist.
“Although the accuracy of facial recognition technology has increased dramatically in recent years, performance differences exist for certain demographic groups,” the United States Government Accountability Office wrote in a report to Congress last year. For example, federal tests have shown that facial recognition technology generally performs best on men with lighter skin and worst on women with darker skin.
A false facial recognition match even sent a New Jersey man to jail for crimes he didn’t commit. Nijeer Parks, who is black, spent 11 days behind bars in 2019 after the technology mistakenly matched him with a fake ID left at a crime scene. The match was enough for prosecutors and a judge to sign an arrest warrant for Parks.
In a similar case, Detroit Police in January 2020 arrested Robert Williams outside his suburban home due to a bad facial recognition match. Williams, who is also black, spent 30 hours in jail before his name was cleared.
“I never thought I should explain to my daughters why daddy was arrested,” Williams wrote in a Washington Post column. “How do you explain to two little girls that a computer made a mistake, but the police listened to it anyway?”
A National Institute of Standards and Technology study of more than 100 facial recognition algorithms found that they mistakenly identified African American and Asian faces 10 to 100 times more often than Caucasian faces.
Some police departments, government agencies and facial recognition providers are now warning that facial recognition matches should only be used as investigative tools and not as evidence.
What are other examples of techno-racism?
- Unemployment fraud schemes
Some states use facial recognition to reduce fraud when processing unemployment benefits. Applicants are asked to upload verification documents, including a photo, and their images are compared to a database to verify their identity.
“It sounds great, but the commercial facial recognition technologies used by Amazon, IBM and Microsoft have been shown to be 40% inaccurate when identifying black people,” Nkonde said.
“So this will lead to black people being more likely to be mistakenly identified as attempting to commit fraud, potentially criminalizing them.”
- Mortgage approval algorithms
Another example is the mortgage algorithms used by online lenders to determine rates for loan applicants.
These algorithms still use erroneous historical data from a time when black people couldn’t own property, Nkonde said.
In 2019, a study by researchers at UC Berkeley found that mortgage algorithms show the same bias against black and Latino borrowers as human loan officers. It found that this bias costs people of color up to half a billion dollars more in interest each year than their white counterparts pay.
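A toy sketch of the mechanism Nkonde and the Berkeley researchers describe: a pricing rule that is simply fit to historical loan data inherits whatever disparities that history contains, even though race never appears as an input. All the numbers and the "redlined" label below are invented for illustration and are not taken from the study.

```python
# Invented historical records: (neighborhood, rate charged by past loan officers)
history = [
    ("redlined", 6.5), ("redlined", 6.8), ("redlined", 6.6),
    ("other", 5.0), ("other", 5.1), ("other", 4.9),
]

def fit_rates(records):
    """'Train' a pricing rule by averaging past rates per neighborhood."""
    by_area = {}
    for area, rate in records:
        by_area.setdefault(area, []).append(rate)
    return {area: sum(rates) / len(rates) for area, rates in by_area.items()}

model = fit_rates(history)
# The rule never sees race, yet it faithfully reproduces the old disparity:
print(model["redlined"] - model["other"])  # positive gap inherited from history
```

This is why "the algorithm doesn't use race" is not a defense: geography, credit history, and similar proxies carry the bias forward on their own.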
The passage of the Federal Fair Housing Act in 1968, which outlawed discrimination based on such things as race and national origin, did not eradicate racism in this sector, Nkonde said. The Department of Housing and Urban Development sued Facebook in 2019, accusing it of targeting real estate ads on the platform to select audiences based on race, gender and politics.
Finance professor Adair Morse, co-author of the UC Berkeley study, said discrimination in lending has shifted from a human bias to an algorithmic bias.
“Even though the people who write the algorithms intend to create a fair system, their programming has a disparate impact on minority borrowers – in other words, discriminatory under the law,” she said.
Are tech companies doing something about it?
Last year, Amazon announced that it would temporarily stop providing its facial recognition technology to law enforcement as part of a pledge to tackle systemic racism. So did Microsoft.
IBM also canceled its facial recognition programs and called for an urgent debate on whether the technology should be used in law enforcement.
Nonprofits such as AI For People work with black communities to educate them on how technology is used in modern life. The group produced a film with Amnesty International as part of the human rights group’s Ban the Scan campaign.
How else can we fight techno-racism?
When technology reflects prejudices in the real world, it leads to discrimination and unequal treatment in all walks of life. This includes employment, homeownership, and criminal justice, among others.
One way to combat this is to train and hire more black professionals in the U.S. tech industry, Nkonde said.
She also said voters must demand that elected officials pass laws regulating the use of algorithmic technologies.
In 2019, federal lawmakers introduced the Algorithmic Accountability Act, which would require companies to review and correct computer algorithms that lead to inaccurate, unfair, or discriminatory decisions.
“Computers are increasingly involved in the most important decisions that affect the lives of Americans – whether or not someone can buy a house, find a job or even go to jail,” said Senator Ron Wyden, one of the sponsors of the bill. “But instead of eliminating bias, these algorithms too often rely on assumptions or biased data that can actually reinforce discrimination against women and people of color.”
It’s time to be more skeptical of Silicon Valley and the supposed benefits of technology, said Christiaan van Veen, director of the Digital Welfare State and Human Rights Project, which was established at NYU Law School to study the impact of digitization on the human rights of marginalized groups.
“It is good to remember that digital technologies and digital systems are always built with human involvement, and not imposed by a non-human entity,” he said. “As with other expressions of racism, the fight against techno-racism will have to be multidimensional and will probably never end.”