Artificial intelligence is often described as the most transformative technology of our time. But behind the grand promises of superhuman productivity and trillion-dollar valuations lies a quieter, more troubling story: AI is being trained on the backs of people who can barely afford to do the work.
Over the past few years, we’ve seen an unprecedented flood of capital into AI startups and infrastructure: hundreds of billions poured into companies like OpenAI, Anthropic, and countless others. This money is fueling everything from massive data centers to advanced model training, with venture capital firms and tech giants betting big on the promise of transformative AI. But beneath the hype lies a troubling reality: much of the foundational work of training these AI systems relies on underpaid, overworked human labor, often from low-wage regions. This approach isn’t just ethically questionable; it’s a recipe for subpar results that could inflate costs further and contribute to an AI bubble that’s already showing signs of strain.
Let’s start with how AI actually gets “smart.” Generative AI models, like those powering chatbots or image generators, don’t learn in a vacuum. They depend heavily on human feedback through processes like Reinforcement Learning from Human Feedback (RLHF). This involves armies of annotators and raters who review AI outputs, check facts, rate quality, and provide corrections. Leaked documents from major AI firms, such as those that recently surfaced on platforms like Reddit and were covered by outlets like The Verge, paint a grim picture of this work. A typical task might require a rater to dissect a 500-word AI response in just 15-30 minutes: verify facts against sources, assess helpfulness on scales from “extremely helpful” to “not at all,” evaluate presentation for clarity and tone, and flag issues like inaccuracies or biases. It’s intense, detail-oriented labor that demands sharp focus and domain knowledge, whether that means fact-checking medical info or ensuring responses aren’t misleading.
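To make that concrete, here is a minimal sketch (in Python, purely illustrative) of what a single rating task might capture. The field names, scales, and the way a rating collapses into a training signal are my assumptions, not the actual rubric from any leaked document; the point is simply that each task bundles several distinct judgments made under time pressure.

```python
# Illustrative sketch of what a single RLHF rating task might capture.
# Field names, scales, and scoring are assumptions for illustration,
# not the actual rubric from any leaked document.
from dataclasses import dataclass, field
from enum import IntEnum
from typing import List


class Helpfulness(IntEnum):
    NOT_AT_ALL = 0
    SLIGHTLY = 1
    MODERATELY = 2
    VERY = 3
    EXTREMELY = 4


@dataclass
class RatingTask:
    prompt: str
    model_response: str
    helpfulness: Helpfulness
    factual_issues: List[str] = field(default_factory=list)       # unsupported or contradictory claims
    presentation_issues: List[str] = field(default_factory=list)  # clarity, tone, formatting
    minutes_spent: float = 0.0


def to_training_signal(task: RatingTask) -> float:
    """Collapse one rating into a toy scalar signal a reward model might consume."""
    penalty = 0.5 * len(task.factual_issues) + 0.25 * len(task.presentation_issues)
    return max(0.0, task.helpfulness / Helpfulness.EXTREMELY - penalty)


# One task out of the dozens a rater might clear in a shift.
task = RatingTask(
    prompt="Summarize the side effects of drug X.",
    model_response="Drug X has no known side effects.",
    helpfulness=Helpfulness.SLIGHTLY,
    factual_issues=["omits documented side effects"],
    minutes_spent=22,
)
print(to_training_signal(task))  # a low signal, because the claim was flagged as unsupported
```

Multiply that by the dozens of tasks a rater clears in a shift and the cognitive load becomes clear.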
The problem? This critical work is often outsourced to the cheapest possible labor pools, paying as little as $10-14 per hour and capped around $21. For context, that’s less than what Amazon pays its warehouse workers ($22-25/hour for physically demanding but less mentally taxing jobs). Reports from Bloomberg in 2023 highlighted how companies like OpenAI and Meta rely on contractors in places like Kenya, India, and Nigeria, where economic pressures mean people take these gigs out of necessity, not expertise.

A follow-up piece in The Guardian earlier this year echoed this, describing “digital sweatshops” where workers face grueling quotas - sometimes 50-60 hours a week to earn bonuses that still net as little as $500 monthly. Overtime isn’t optional; it’s baked into the system to hit performance targets. But as any productivity study will tell you (like those from the Harvard Business Review on burnout), extended hours tank efficiency. After 3-4 hours of this mental grind, even skilled workers start cutting corners, leading to errors that propagate into the AI models.

This cheap-labor model directly undermines AI quality. If your trainers are fatigued, underqualified, or rushing through tasks, the feedback loop produces “garbage in, garbage out.” We’ve seen this in real-world AI glitches: hallucinated facts in responses, biased outputs, or incomplete verifications. For instance, the leaked PDF of rating instructions mentioned above emphasizes checking for “unsupported” or “contradictory” claims against evidence - yet if raters are burned out, they might miss that a sentence about a drug’s efficacy omits key conditions or misrepresents data. Bloomberg’s investigation noted that many raters lack subject-matter expertise, relying on quick web searches rather than deep knowledge, which leads to inconsistent training data. The result? AI systems that are “good enough” for demos but falter in high-stakes applications, requiring endless iterations and fixes.
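You can see the mechanism with a toy simulation. Assuming (purely for illustration) that a tired or rushed rater adds noise to their scores and occasionally guesses outright when a check gets skipped, the resulting ratings track true response quality less and less as that noise grows:

```python
# Toy simulation of "garbage in, garbage out": the noisier the rater labels,
# the less the feedback tracks true response quality.
# All numbers are made up purely to illustrate the mechanism (Python 3.10+).
import random
import statistics


def rating_quality_correlation(label_noise: float, n: int = 5000, seed: int = 0) -> float:
    rng = random.Random(seed)
    true_quality = [rng.random() for _ in range(n)]
    # A fatigued or rushed rater: the right score plus noise, and occasionally
    # a near-random guess when a check gets skipped entirely.
    ratings = [
        q + rng.gauss(0, label_noise) if rng.random() > label_noise else rng.random()
        for q in true_quality
    ]
    return statistics.correlation(true_quality, ratings)


for noise in (0.05, 0.2, 0.5):
    print(f"label noise {noise:.2f} -> rating/quality correlation {rating_quality_correlation(noise):.2f}")
```

A reward model trained on the noisy end of that spectrum is learning from ratings that barely reflect quality at all.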
At the same time, the financial structure of the industry encourages this dynamic. Big investors want growth, not patience. Startups are told to scale first, fix quality later. But that short-term thinking could prove costly. The more models are trained on inconsistent or shallow human feedback, the more they’ll need to be retrained, corrected, and re-evaluated - each cycle adding to the cost base that the industry is already struggling to contain.
Financially, this is a ticking time bomb. The AI sector’s cost base is already skyrocketing - training a single large model can cost tens of millions in compute alone. But skimping on human labor creates a false economy. Poor training means more rounds of refinement, higher error rates in deployment, and ultimately, dissatisfied users or regulatory scrutiny (think EU AI Act fines). Investors have pumped in those billions expecting quick returns, but if models underperform due to flawed foundations, we’ll see ballooning R&D expenses to patch them up. It’s reminiscent of the subprime mortgage crisis: over-optimism and corner-cutting leading to systemic risks. Successful tech breakthroughs, like Google’s early search engine or Apple’s iPhone, came from companies that invested in top talent with fair pay and sustainable conditions - above-market salaries and focused work environments that foster innovation. AI firms ignoring this are optimizing for short-term margins at the expense of long-term viability.
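A back-of-envelope calculation shows why the false economy doesn’t hold up. Every figure below is an assumption chosen for illustration - wages in the ranges discussed in this piece, guessed error rates, and a guessed downstream cost per bad label - but the structure of the math is the point: once rework and downstream fixes are priced in, the cheap label stops being cheap.

```python
# Back-of-envelope sketch of the pay-vs-quality trade-off.
# Every figure is an assumption picked for illustration, not a reported
# industry number; the point is that downstream error costs can dominate wages.
WAGE_CHEAP, WAGE_FAIR = 14, 35                    # $/hour
HOURS_PER_LABEL_CHEAP, HOURS_PER_LABEL_FAIR = 0.4, 0.5
ERROR_RATE_CHEAP, ERROR_RATE_FAIR = 0.20, 0.05    # assumed share of bad labels
DOWNSTREAM_COST_PER_ERROR = 100                   # assumed cost to catch and fix one bad label later


def cost_per_usable_label(wage: float, hours: float, error_rate: float) -> float:
    # Expected cost of one labeling attempt, spread over the usable labels it yields.
    attempt_cost = wage * hours + error_rate * DOWNSTREAM_COST_PER_ERROR
    return attempt_cost / (1 - error_rate)


print(f"cheap labor: ${cost_per_usable_label(WAGE_CHEAP, HOURS_PER_LABEL_CHEAP, ERROR_RATE_CHEAP):.2f} per usable label")
print(f"fair pay:    ${cost_per_usable_label(WAGE_FAIR, HOURS_PER_LABEL_FAIR, ERROR_RATE_FAIR):.2f} per usable label")
```

With these made-up numbers, the “cheap” label comes out roughly a third more expensive per usable label once downstream fixes are priced in - the same dynamic behind the pay argument that follows.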
If the reliance on cheap, exhausted labor continues, it could accelerate a bubble burst: investors pull back, costs spiral as firms scramble to hire better talent retroactively, and the promised AGI-like advancements remain elusive. The irony is that for all the billions flowing in, a modest increase in rater pay - say, to $30-40/hour with capped hours - could yield dramatically better data quality, reducing downstream expenses. But until the industry shifts from cost-cutting to value-building, we’re betting on a house of cards. As investors, we should watch for metrics like model error rates and labor turnover in upcoming earnings reports; they’ll be the canaries in this coal mine.