‘Weird’ humans in the loop: Who is really teaching the machines?

Adivasi data workers, Weird bias, and ignored indigenous wisdom show how AI risks repeating old mistakes in new machines.

I watched Humans in the Loop on Netflix, released on October 31, 2025. No robots, no explosions. Just people in dim offices, labelling data for machines. The film follows Nehma, an Adivasi woman in Jharkhand, who spends her nights tagging images and voices so algorithms can learn. It is quiet, but it hits hard, because it asks the question: who is really teaching the machines?

AI does not learn in a vacuum. Every cat, every smile, every angry tone is defined by a human hand. And those hands belong mostly to people from Weird societies: Western, Educated, Industrialised, Rich, Democratic (an acronym coined in 2010). Harvard researchers in 2023 showed that AI systems trained on this data exhibit Weird psychological biases. The worldview of a narrow slice of humanity is shaping global technology. If your dataset is Weird, your machine is Weird. It will misread a rural farmer's silence, misinterpret an indigenous healer's gesture, and call it an error. That is bias.

We have already seen what happens when bias goes unchecked. In 2018, several major tech firms rolled out facial recognition software that worked fine on white male faces but failed miserably on the faces of black women. Error rates were sky high. Police departments still used it. Lives were affected. That was not a bug; it was a blind spot baked into the training data.

An algorithm widely used across U.S. hospital systems was supposed to flag patients needing extra care. It relied on past spending as a proxy for health needs. Result? Black patients, who historically had less money spent on them, were marked as lower risk even when they were sicker. The machine did not see racism in the healthcare system; it just copied it.

Early versions of popular smart speakers routinely failed to understand non-American English accents. Indian, Nigerian, and Scottish users were left repeating themselves. The companies dismissed it as a technical limitation. But the limitation was not technical; it was cultural. The training data did not include enough voices outside the Weird bubble.

Indigenous knowledge systems, which hold ways of reading land, weather, and community, rarely enter the datasets. When they do, they are flattened. A smile becomes positive. A pause becomes negative. The nuance disappears. And when AI is used in healthcare, education, or resource planning, those communities get misdiagnosed, misjudged, or ignored. Nehma's clicks feed the machine, but her own wisdom never enters the loop. That is the paradox. The people teaching the machines are not allowed to teach them what they actually know.

Nehma's story reminds us that AI is built on human judgment, human culture, and human bias. The debate is open: who gets to teach the machines? Whose knowledge counts? And will we expand the loop to include voices long ignored? The answers will decide not just the future of AI, but the future of society.

(The author is a C-suite+ and startup advisor, and researches and works at the intersection of human-AI collaboration. Views are personal.)
