‘Digital Resistance’: Black Coders Battle Biased AI Systems


For African Americans, providing our data to artificial intelligence, or AI, doesn’t end when we search for a protest happening nearby, ask for the best Black-owned businesses, or even ask whether we need a lawyer when speaking with law enforcement.

Technology experts say our personal data and the questions we ask online could inadvertently help build a complex world of surveillance, including facial recognition and predictive policing tools that many say are being used against people. Predictive policing uses data and algorithms that purport to help law enforcement predict future criminal behavior and identify high-risk suspects, but it can also perpetuate the biases baked into the policing data it is trained on.
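The mechanism critics point to is a feedback loop: neighborhoods that are already heavily policed generate more recorded incidents, which an algorithm then reads as more crime. A minimal, hypothetical Python sketch of that loop (the neighborhoods, rates, and patrol counts are invented for illustration, not drawn from any real system):

```python
import random

random.seed(42)

TRUE_CRIME_RATE = 0.05          # identical in both neighborhoods
patrols = {"A": 10, "B": 1}     # historical bias: A starts with 10x the patrols
recorded = {"A": 0, "B": 0}

for day in range(365):
    for hood, n_patrols in patrols.items():
        # A patrol can only record offenses it is present to observe.
        recorded[hood] += sum(
            random.random() < TRUE_CRIME_RATE for _ in range(n_patrols)
        )
    # "Predictive" step: allocate tomorrow's 11 patrols where the data
    # says crime is, i.e., proportionally to recorded counts so far.
    total = recorded["A"] + recorded["B"]
    if total:
        patrols["A"] = max(1, round(11 * recorded["A"] / total))
        patrols["B"] = max(1, 11 - patrols["A"])

print(recorded)  # A's recorded count dwarfs B's despite identical true rates
```

In this toy run, both neighborhoods break the law at exactly the same rate, yet the recorded data, and therefore the patrol allocation the model produces, ends up badly lopsided.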

A small but growing community of Black coders and tech leaders is working to create programs that protect marginalized groups and inform them about how their shared data is used: to feed algorithmic bias, spread misinformation, and predict people’s locations, information that could be passed to law enforcement.

These tech-savvy online activists call themselves ethical hackers, or hacktivists, meaning they use their skills for good — a form of what they call “digital resistance.”

During the first 100 days of President Donald Trump’s second term, mass surveillance increased. In May, The New York Times reported that the Trump administration had expanded its work with Palantir, a private AI data analysis firm.

Several government employees and Palantir representatives told the Times that the government plans to adopt the company’s Foundry program, an operating system it would use to collect data on Americans and install across several government agencies. That data could then be turned over to local law enforcement or U.S. Immigration and Customs Enforcement.

“There is a fear that this kind of military-grade intelligence could be turned against American citizens who choose to dissent or protest what the government is doing,” said Avriel Epps, a computational social scientist and author of A Kids Book About AI Bias.

Capital B spoke to several online activists and technology experts about the scope of artificial intelligence, its inherent bias, how to protect your data, and why it’s important to educate yourself in order to combat AI’s overreach. 

Who is training AI? 

AI tools and platforms such as ChatGPT and facial recognition systems have disproportionately impacted people of color because people of color are rarely in the room when these programs are developed, Epps said.

“We’re not saying we need this data set to represent us; it’s that the data set that’s training those systems needs to represent us,” said Epps, who is the AI expert-in-residence with Black Girls Code.

Epps, 35, is also the co-founder of AI4Abolition, a nonprofit promoting AI literacy and developing tech tools to guide a new form of restorative justice. Her work teaches young tech enthusiasts and activists about protective tools, how to combat algorithmic overreach, and how their data could be used against them.

She said when it comes to policing, technology can sometimes make inequality and violence against Black communities more efficient. “And so the question about technology is secondary to critiquing the system that the technology is supporting as a whole,” she said.  

She argues that Black people need to have a seat at the table when this technology is being developed, but there’s also room for what she calls “subversive underground spaces,” meaning “groups of builders and dreamers, people imagining different ways” of creating technology.

AI surveillance in the present

Today’s version of surveillance dates back to the campaign against protesters during the Civil Rights Movement. In the 1960s, the FBI extensively surveilled Martin Luther King Jr. and the leaders of Black liberation groups such as the Black Panthers.

Meka Egwuekwe is the founder of CodeCrew. (Courtesy of Meka Egwuekwe)

Fast-forward to today, and most U.S. cities have around 11 surveillance cameras per 1,000 people, a report from Comparitech found. In Atlanta, however, a city with a 46.9% Black population, there are 124 cameras per 1,000 people, making it the most surveilled city in the U.S. Washington, D.C., came in second, and Philadelphia third.

Meka Egwuekwe, the founder of CodeCrew, in Memphis, Tennessee, told Capital B that his nonprofit primarily teaches AI and computer science to kids and adults. He said when it comes to closed-circuit television cameras, there are reasonable applications, such as a camera at an ATM. 

Egwuekwe said it’s a balance: Sometimes we rely too heavily on technology, and we go overboard.

“The concern is compounded by coupling that with facial recognition and artificial intelligence in ways that can be harmful to people,” he said. “Especially because we know those technologies have a mixed record when it comes to accuracy with respect to gender and race.”  

How to combat AI overreach

Camille Stewart Gloster is an attorney specializing in the intersection of technology, cybersecurity, national security, and foreign policy. She explained that people have to be thoughtful about where they engage online and how much information they give. Stewart Gloster said the onus has to be on the user to protect themselves.

Attorney Camille Stewart Gloster said that technology users must be proactive in protecting themselves. (Courtesy of Camille Stewart Gloster)

“Things like privacy mode or secrecy mode, depending on the platform, turning on all of the two-factor authentication and the do not track for ads, saying no to cookies — all of those things help provide a layer of abstraction and friction for you that can provide some distance and give you a little bit more privacy,” she said. 

According to the 2019 Stanford University AI Index Diversity Report, 45% of new AI doctoral graduates were white, while just 2.4% were Black. 

Through her organization AI4Abolition, Epps partners with other restorative justice efforts, such as Justice AI GPT, a platform that helps organizations, individuals, and governments build safer and more transparent AI tools. The tools are created without being trained on massive data collection and without “extracting or replicating colonial harm,” according to its website.

“We’re trying a really different process of building technology that puts these communities that are often at the margins in the center,” Epps said. “We’re documenting that process of developing the technology, so that other folks can replicate the process and show that there’s a different way to think about building these kinds of technologies.”

In 2023, the University of Southern California began using artificial intelligence to study the Los Angeles Police Department’s interactions with drivers during traffic stops. The three-year program aims to “promote accountability,” and the findings will be used for training purposes.

Egwuekwe tells his students that it’s vital to understand the harms that tech can create when it’s inaccurate, like in the case of facial recognition. 

“There’s a responsibility to offset or eliminate the bias, privacy, and security concerns that we’re seeing too many examples of in the AI space,” he said. 

Egwuekwe said he wants young people to see themselves as creators of AI.

“They have a role to play in intersecting technology, especially AI, with addressing world-class problems like health,” he said. “Not that technology alone will solve these problems because many of these problems are steeped in the lack of political will to address them, but tech can help.”

Tennisha Martin, founder of Black Girls Hack, in Fredericksburg, Virginia, said one AI program that has been trained by Black developers is ChatBlackGPT, which was founded by Erin Reddick in 2024.

In an interview with Mashable, Reddick described her AI as “culturally informed” and “rooted in the acknowledgment of social, economic, and systemic racism — and the diaspora of Black and brown people in America.”

Martin said a more inclusive technological world will only happen when Black students begin to research these issues. 

“Then we can have people who are basically improving the algorithms, improving the machine learning languages, improving the data sets that they’re being trained on because it’s garbage in, garbage out,” she said. 
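“Garbage in, garbage out” has a concrete meaning here: a model can only be as fair as the examples it learns from, so auditing a training set’s representation before training is one of the simplest checks a developer can run. A small hypothetical Python sketch (the field names and groups are invented for illustration):

```python
from collections import Counter

# Hypothetical toy training set; the labels and groups are made up.
training_examples = [
    {"label": "approve", "group": "white"},
    {"label": "approve", "group": "white"},
    {"label": "approve", "group": "white"},
    {"label": "deny",    "group": "black"},
]

def audit_representation(examples, key="group"):
    """Report each group's share of the training data before any model sees it."""
    counts = Counter(ex[key] for ex in examples)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

print(audit_representation(training_examples))
# {'white': 0.75, 'black': 0.25} -- a skew any model trained on it will inherit
```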

Why AI regulation matters

In May, U.S. House Republicans passed Trump’s One Big Beautiful Bill Act. The 1,000-page legislation includes a provision for a 10-year moratorium on state regulation of AI and automation technology.

Mari Galloway is the CEO of Women’s Society of Cyberjutsu. (Courtesy of Mari Galloway)

“I hope that more people start to see this and say, ‘Wait a minute, this is no longer just impacting this one group, this is impacting all of us,’” said Mari Galloway, CEO of Women’s Society of Cyberjutsu, a nonprofit focused on providing career training for women going into cybersecurity. “These things have long-lasting impacts.” 

“We have to still step up to the plate and demand that change happens,” Galloway added.  

Epps said that without regulations on AI, she envisions a bleak future. She said AI models could be deployed in the world with total disregard for marginalized communities.

“And no one’s even really trying to do the right thing,” she added. 

“If you’re hiding behind the myth of computational objectivity and saying, ‘Well, the computer said we should do it, and the computer is always right,’ and then you intentionally made the computer more racist, that’s a problem.”
