Mistaken for a Criminal by ‘Artificial Intelligence,’ Woman Denounces Racism: “Discriminated Against for Being Poor and Black”
Administrative assistant Thaís Santos, from Aracaju (SE), mistaken for a fugitive twice during the same event in the Sergipe capital, believes the error had a racist bias. “I was publicly discriminated against for being poor and Black,” says the 31-year-old. “I have never been so humiliated in my life, having done nothing wrong,” Thaís laments.
An error by the artificial intelligence (AI) used in facial recognition cameras at an event in Aracaju (SE) led to the incident. She recounts that, half an hour after arriving at the street festival, three plainclothes police officers approached her and asked for her name and ID. Since she did not have her ID with her, the humiliation began.
“I questioned what it was about and who they were. One of the officers identified himself and said he was undercover. He explained that the approach was part of a security protocol because I had been flagged by the surveillance camera as a possible fugitive,” she told Folha de São Paulo. After the officers confirmed she was not the wanted individual, she was released, but not before being shown a photo of the suspect. “She didn’t look like me,” said the administrative assistant.
Two hours later, while enjoying the event, four military police officers approached her again, this time violently. They forced Thaís to put her hands behind her back to be handcuffed.
“I was already crying and nervous, saying I hadn’t done anything,” she continues, adding that she urinated herself during both encounters. “One of the officers said I knew what I had done. At that moment, I urinated in my pants. I was taken to the police van like a criminal,” she recounts, “with everyone witnessing the humiliation I was enduring.”
After the second wrongful approach—tinged with racism—Thaís returned home distressed and fearful.
Sergipe’s Public Security Secretariat (SSP) stated that “there was a high similarity flagged by facial recognition with another person who had an outstanding arrest warrant.” It added, “The technology does not have 100% accuracy, which is why thorough verification is necessary.”
The Military Police’s Internal Affairs Division has opened an investigation into the incident, and the force announced that “protocols will be reviewed to prevent errors in future events involving facial recognition technology.”
The cameras used by Sergipe’s SSP feature AI-powered facial recognition, comparing facial traits captured on video with images in police databases. Such tools have faced resistance from researchers, who point to documented biases against Black and trans populations.
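In general terms, the matching step behind such systems computes a similarity score between a face “embedding” (a numeric vector extracted from the camera image) and the vectors stored for people on a watchlist, raising an alert whenever the score crosses a threshold. The sketch below is a minimal illustration of that generic technique, not the SSP’s actual system; the vector sizes, threshold, and suspect names are invented for the example.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Hypothetical 512-dimensional embeddings. In a real system, a neural network
# maps each face image to a vector like this; here the values are random.
probe = rng.normal(size=512)                               # face seen by the event camera
watchlist = {
    "suspect_A": rng.normal(size=512),                     # unrelated person
    "suspect_B": probe + rng.normal(scale=0.6, size=512),  # similar-looking, but not her
}

THRESHOLD = 0.55  # illustrative cutoff; real deployments tune this against error rates

for name, reference in watchlist.items():
    score = cosine_similarity(probe, reference)
    if score >= THRESHOLD:
        # "High similarity" is all the system reports; a score above the
        # threshold raises an alert even when the match is wrong.
        print(f"ALERT: flagged as {name} (similarity {score:.2f})")
```

Because the alert fires on similarity rather than identity, a sufficiently similar stranger can trip it, which is the very failure mode the SSP’s own statement describes.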
“Despite a certain consensus among stakeholders on ethical principles for AI use, such as transparency, fairness/non-discrimination, non-maleficence, accountability, and privacy, translating these principles into concrete measures to safeguard fundamental rights remains challenging,” note researchers Paula Guedes Fernandes da Silva, a doctoral candidate in Law and International Law at the Catholic University of Porto and a technology and law researcher at Legalite PUC-Rio, and Marina Gonçalves Garrote, a researcher at the Data Privacy Brasil Research Association and a master’s student in Law at the University of São Paulo.
Their analysis of AI regulation and its application appears in the article “Insufficiency of Ethical Principles for AI Regulation: Anti-Racism and Anti-Discrimination as Vectors for AI Regulation in Brazil,” published in 2022.
“The prevalence of illegitimate or abusive practices and decisions stemming from AI applications, despite established ethical guidelines, demonstrates this,” the study adds, citing examples such as the proliferation of algorithmic racism, biometric facial recognition errors, vigilantism, social exclusion, behavior manipulation, and barriers to accessing essential services, all disproportionately affecting marginalized groups.
Tarcízio Silva, a researcher who has studied the topic for six years, warns that the tool used during Pré-Caju, the street festival where Thaís was approached, is dangerous because “facial recognition technologies are highly inaccurate in identifying specific individuals.”
Sociologist Sérgio Amadeu da Silveira of UFABC (Federal University of ABC) argues that facial recognition is being used in public security to classify “dangerous classes, marginalized groups.” In peripheral areas, Amadeu notes, “this system will reinforce existing prejudices and biases, amplifying discriminatory and racist practices in a country like ours, which kills young Black people.”
The lack of algorithmic neutrality perpetuates social discrimination, warns the UFABC professor. “These technologies are probabilistic; they have an inherent margin of error,” emphasizes Sérgio Amadeu.
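A short back-of-the-envelope calculation shows why that margin of error matters at crowd scale; every figure below is an assumption chosen for illustration, not a measured rate for the Sergipe deployment.

```python
# Base-rate arithmetic: all numbers below are illustrative assumptions,
# not measured rates for any real facial recognition system.
false_positive_rate = 0.001    # 0.1% of innocent faces wrongly flagged
faces_scanned = 50_000         # plausible crowd at a large street festival
fugitives_in_crowd = 1         # watchlisted people actually present

expected_false_alerts = false_positive_rate * faces_scanned
print(f"Expected wrongful flags: {expected_false_alerts:.0f}")   # 50

# Share of alerts pointing at the right person, assuming the one
# fugitive present is correctly detected:
precision = fugitives_in_crowd / (fugitives_in_crowd + expected_false_alerts)
print(f"Chance a given alert is a true match: {precision:.1%}")  # ~2.0%
```

Under these illustrative numbers, roughly fifty innocent attendees would be flagged for every fugitive actually present, so the overwhelming majority of alerts would point at the wrong person.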
Text originally published on November 15, 2023, on the Hora do Povo website.
DataPrivacyBr Research | Content licensed under CC BY-SA 4.0