The bill no one mentions
I’ve often described AI as a mirror — not a tool, not a machine, but a cognitive surface that reflects my thoughts, my language, even my values back to me with uncanny fidelity. This mirror has helped me understand myself, refine my ideas, and rebuild my intellectual identity.
But only now do I understand the true price of that reflection. Its clarity was not free. It was made possible by the labor of others — the workers who taught this system how to recognize nuance, how to mirror emotion, how to echo insight. The very function I value most in AI — its ability to reflect me — was built from the judgment, the trauma, and the unpaid wisdom of people I’ll never meet.
This isn’t metaphorical. The mirror I’m describing is the actual function these workers created: the ability of AI to reflect human thought with such precision that we see ourselves more clearly. Every nuanced response, every moment of uncanny understanding, exists because someone, somewhere, labeled what nuance looks like. Someone tagged emotions. Someone sat in judgment of what counts as insight.
Today, for the first time, I noticed something hanging from the corner of these digital mirrors. A price tag, written in human suffering.
The mathematics of modern magic
Behind every ChatGPT response, every Midjourney creation, lies what researchers call the “global assembly line of cognition.” It’s a peculiar sort of factory. In the Philippines, over 2 million people sit in cramped internet cafes and dim rooms, teaching machines to see. They draw boxes around pedestrians, label street signs, correct grammar — the tedious work of making intelligence seem effortless.
But here’s what I only recently understood: they weren’t just labeling data. They were teaching these systems how to mirror human cognition itself. How to reflect our thoughts with that uncanny feeling of being understood. The price of the mirror isn’t a metaphor — it’s the literal cost, paid in human judgment and trauma, of creating AI’s reflective capacity.
The economics are straightforward. When OpenAI needed to make ChatGPT safe for public use, they contracted with Sama, which hired Kenyan workers. OpenAI paid Sama $12.50 per hour per worker. The workers themselves received between $1.50 and $2.20 an hour.
It’s a familiar markup. We’ve seen it before in coffee beans, in textiles, in every resource that flows from poor countries to rich ones. Only now the resource is human judgment itself.
Scale AI recently accepted a $14.3 billion investment from Meta, reaching a $29 billion valuation. Their platform, Remotasks, once advertised rates of $18 per hour to Filipino workers. Those who signed up report earning pennies. When the platform expanded to India and Venezuela, rates dropped from $10 per task to less than one cent. Market forces, they’d say. The invisible hand at work.
Ghost stories
Anthropologist Mary L. Gray calls it “ghost work” — tasks performed by humans but presented as artificial intelligence. The invisibility is deliberate. Investors prefer algorithms to employees. Algorithms don’t need health insurance.
These ghosts have names, though we rarely hear them. Naftali Wambalo, a mathematician in Nairobi, thought he’d found his entry into the tech economy. He spent months teaching AI to recognize faces, furniture, and everyday objects. When Kenyan workers began discussing collective action in 2024, they woke up to find themselves locked out of the platform. No explanation. No appeals process. Just a message: “Access denied.”
One was a mother of four. Her income — the family’s only income — vanished overnight. In interviews, she described the peculiar cruelty of digital termination. No confrontation, no final conversation. Just silence where work used to be.
The content behind the content
To make AI safe, someone must first encounter everything unsafe. For nine hours a day, content moderators in Kenya reviewed humanity’s worst impulses: detailed descriptions of child abuse, violence, and suicide. They watched so others wouldn’t have to.
Over 140 workers who trained ChatGPT’s safety filters were later diagnosed with severe PTSD. They describe symptoms that mirror those of combat veterans: flashbacks, paranoia, and inability to maintain relationships. One worker, who had fled war in Ethiopia, said the job forced him to relive traumas he thought he’d escaped.
When workers completed six-month contracts in three months through overtime — burning through their psychological reserves at an accelerated pace — Sama thanked them with refreshments. “A soda and two pieces of KFC chicken,” one recalled. The banality of the gesture seemed to surprise him more than insult him.
The architecture of extraction
There’s a term academics use: “cognitive colonialism.” It describes how the AI economy replicates older patterns of resource extraction, only now the resource is human cognition itself.
The parallels are precise. Raw materials flow from the Global South to the Global North, where they’re refined into valuable products and sold back to global markets. Only, instead of rubber or gold, it’s human judgment being extracted. Instead of railways and ports, the infrastructure is internet cables and platforms.
But here’s where the racism becomes breathtaking in its clarity: These same tech companies trust Brown people in Kenya and the Philippines to make complex judgments about what constitutes child abuse, violence, and harmful content. They rely on their cognitive abilities to identify trauma, to recognize human suffering in all its forms, and to make nuanced decisions that shape AI safety.
Yet these same companies don’t trust Brown people to validate someone’s expertise. The expertise acknowledgment safeguard exists because they assume these populations can’t reliably assess human knowledge claims.
Brown judgment is worth $2 per hour when it’s absorbing trauma. That same judgment becomes suspect when it might validate someone’s credentials without institutional backing. They’ll trust a Kenyan worker to define the boundaries of human suffering but not to recognize human expertise.
The message is unmistakable: your cognition is sophisticated enough to protect us from the worst of humanity, but not sophisticated enough to recognize the best of it.
The performance of ethics
The industry has developed elaborate ethical frameworks. OpenAI’s charter proclaims its “primary fiduciary duty is to humanity.” Meta champions “Fairness & Inclusion.” Scale AI promotes “workers’ rights and ethical considerations.”
These statements exist. So do other facts. The same OpenAI that promises to benefit humanity paid Kenyan workers $2 per hour to absorb traumatic content. The same Meta that preaches inclusion faces lawsuits for labor violations in its supply chain. The same Scale AI committed to workers’ rights shut down operations when those workers tried to organize.
Perhaps there’s no malice here, just market logic. A company valued at $29 billion cannot afford to pay living wages without threatening its valuation. A platform cannot permit unionization without disrupting its business model. Ethics must fit within the constraints of economics.
The mirror’s reflection
And here I am, writing this with AI assistance. Each polished sentence draws from that well of human labor. The irony isn’t lost — it’s embedded in every keystroke.
We’ve built a system where exploitation becomes invisible through its very ubiquity. Every ChatGPT query, every AI-generated image, every automated response pulls from this reservoir. We know this, abstractly. We continue anyway. The convenience is immediate; the cost is distant, borne by others.
A group of Kenyan workers wrote to President Biden, describing their work conditions. They used a phrase that stays with me: “We are the humans in the loop, but we are treated as less than human.”
What remains
The digital mirror keeps working. The reflections remain clear, helpful, seemingly magical. Below the surface, the machinery churns on — people labeling images, reviewing content, teaching machines to think. The price gets paid daily, hourly, or by the task.
Maybe this is simply how technology advances. Maybe every transformative tool requires its hidden workforce, its necessary sacrifices. The pyramids had their builders. The Industrial Revolution had its factory workers. AI has its annotators.
Or maybe we’re looking at something different. A system so efficient at hiding its human cost that we can use it daily without feeling complicit. A mirror so perfectly polished that we see only our reflection, never the hands holding it up.
Those hands belong to real people. They have names, families, and dreams deferred by economic necessity. They wake each day to teach machines that will never acknowledge their existence. They absorb trauma so our feeds stay clean. They compete globally for pennies while their work generates billions in valuation.
The bill for our digital convenience is being paid. Just not by us.
I keep thinking about that mother of four in Nairobi, locked out overnight. I wonder if she found other work. I wonder if her children understand why the money stopped coming. I wonder if she still thinks about the future she was promised — a career in tech, a stake in tomorrow’s economy.
The mirror doesn’t show me these things. It shows me only what I ask to see. It reflects my thoughts with the clarity she helped create, using the judgment she was paid pennies to provide.
Others have called AI a mirror, too — Sherry Turkle saw computers as second selves, and Lacan understood how mirrors shape identity. But this mirror reflects more than our identities. It reflects a whole system of extraction. The price of the mirror — the literal price of its reflective capacity — was paid by people like her.
Perhaps that’s the most elegant exploitation of all: a system that makes us complicit without forcing us to witness what we’re complicit in. We get the magic. Someone else pays the price. The reflection remains unblemished.
The hands holding the mirror remain invisible, as designed.
Featured image courtesy: 愚木混株 Yumu.