The Bridge Nobody Is Building

Two multi-billion dollar industries are unknowingly solving the same problem. One trains humans. The other extracts data from humans. The future belongs to whoever unites them.
Two Parallel Worlds
In Seattle, a startup called Yoodli just raised $40 million. Their product: AI simulations where salespeople practice difficult conversations before having them in real life. Users speak with an avatar that responds like a skeptical customer, a demanding boss, or an impatient investor. After each session, they receive instant feedback. The company reports 900% growth in revenue. Gartner projects that by 2026, 60% of large enterprises will use similar tools.
In San Francisco, a startup called Scale AI just received a $14 billion investment from Meta. Their product: armies of human workers who label data to train artificial intelligence models. Every time a human indicates "this answer is better than that one," they are teaching the machine something the machine cannot learn on its own. The AI training data market is worth almost $3 billion today. By 2032, it will be $17 billion.
Two industries. Two astronomical valuations. The same fundamental insight: humans have something machines desperately need.
But there's a problem. Neither of them is building the bridge.
The Invisible Gap
Yoodli and its competitors (Second Nature, Mindtickle, Exec) train humans using AI. Their users practice sales, negotiations, and difficult conversations. They develop judgment, empathy, and the ability to read context: skills that, as we explored in "The Art of Asking," are precisely what machines cannot replicate.
Scale AI and its competitors (Surge, Appen, Invisible) extract data from humans to train AI. Their workers spend hours evaluating responses, labeling images, and indicating preferences. It's repetitive, unstimulating, and fundamentally extractive work. The human gives; the machine receives. There is no development, no growth, no mutual benefit.
Do you see the gap?
Simulation platforms generate exactly the type of authentic human behavior that AI labs need (decisions under ambiguity, empathetic responses, contextual judgments), but they don't capture that value. Data labeling platforms capture value but don't generate human development. One creates; the other extracts. Neither does both.
It's as if there were gyms where people exercise, and factories where energy is produced, but nobody had invented the stationary bike that generates electricity while you pedal.
Mercor and the Accidental Validation
In October 2025, Mercor, a startup founded by 22-year-olds, reached a valuation of $10 billion. Eight months earlier, it was worth $2 billion. What happened?
Mercor started as an AI recruiting platform that connected companies with talent. But it quickly pivoted to something more lucrative: providing domain experts (scientists, lawyers, doctors) to train artificial intelligence models. It wasn't selling candidates for traditional jobs. It was selling specialized human judgment to teach machines what they cannot learn on their own.
The timing was perfect. When Meta invested $14 billion in Scale AI and Scale's CEO resigned to join Meta, the major AI labs (OpenAI, Google DeepMind, Anthropic) looked for alternatives. Mercor was there, with a network of over 30,000 experts who collectively receive more than $1.5 million per day.
But Mercor has a limitation. Its experts provide knowledge, but Mercor does not develop them. It's a matching model, not an educational one. Humans give; Mercor intermediates; labs receive. The flow is unidirectional.
The Snake Needs Food
In "The Snake That Eats Itself," we explored the problem of model collapse: the degradation that occurs when AI models are trained on content generated by other AI models. By April 2025, 74% of new web pages contained text generated by artificial intelligence. The poison is already in the water.
AI labs face a supply crisis. They need verified, structured, high-quality human data. But the traditional methods of obtaining it (internet scraping, mass crowdsourcing, Mechanical Turk) no longer work. Content is contaminated. Workers are demotivated. Quality degrades.
The extractive model has reached its limit. The snake needs food that isn't itself.
The Gym That Generates Electricity
Imagine a different model: a platform where humans practice valuable skills (sales, negotiation, empathetic communication, decision-making under pressure), and each practice session generates authentic behavioral data.
The user improves, develops judgment, and becomes more employable. Simultaneously, they produce exactly the type of data that AI labs desperately need: real human decisions, genuine empathetic responses, authentic contextual judgments.
It's not extraction. It's symbiosis. The gym generates electricity while people work out.
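To make that symbiosis concrete, here is a minimal sketch in Python. The session-log format, field names, and 0-to-1 scoring scale are all hypothetical, invented for illustration; the point is only that a practice transcript can be serialized directly into the kind of labeled records AI labs consume:

```python
import json

def session_to_records(turns):
    """Convert a practice-session transcript into labeled training records.

    `turns` is a hypothetical log format: a list of
    (avatar_prompt, user_reply, feedback_score) tuples, where the score
    comes from the session's instant feedback (0.0 to 1.0).
    """
    return [
        {
            "prompt": prompt,
            "response": reply,
            "quality": score,  # the learner's performance doubles as a label
            "source": "practice-session",
        }
        for prompt, reply, score in turns
    ]

# One exchange from a simulated sales conversation.
session = [
    ("I'm not convinced your product is worth the price.",
     "Fair concern. What outcome would make it clearly worth it for you?",
     0.9),
]
print(json.dumps(session_to_records(session), indent=2))
```

The same session that sharpens the learner's skills emits a record the lab can buy, with no extra work from anyone.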
And there's an extra ingredient that makes all this possible: proof of personhood. Using technology like World ID, each participant can prove they are a real, unique human without revealing their identity. This protects AI labs from model collapse: they know that the data they receive is genuinely human, not bot-generated.
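As an illustrative sketch, here is how an ingestion pipeline could gate every sample on such a proof. The `verify_proof` callback is hypothetical, standing in for whatever a real proof-of-personhood backend (World ID or otherwise) exposes:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class BehavioralSample:
    """One contributed datapoint plus the proof that a unique human made it."""
    prompt: str
    response: str
    personhood_proof: str  # opaque token from a verifier such as World ID

def accept_sample(sample: BehavioralSample, verify_proof) -> str:
    """Gate ingestion on proof of personhood; return a content fingerprint.

    `verify_proof` is an injected callback (hypothetical here) so any
    proof-of-personhood backend can plug in without changing the pipeline.
    """
    if not verify_proof(sample.personhood_proof):
        raise ValueError("rejected: contributor is not a verified unique human")
    # A content hash lets labs deduplicate samples without ever
    # learning the contributor's identity.
    return hashlib.sha256(sample.response.encode()).hexdigest()
```

The design choice worth noting: the proof travels with the data, so verification happens at ingestion, not after the dataset is already contaminated.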
The Agent Economy
There's an additional element that makes this moment unique: AI agents.
In 2025, Salesforce launched the world's first agent marketplace. AWS, Oracle, and Microsoft followed. The market for agentic AI (AI systems that act autonomously) is valued at $7.5 billion today and is projected to reach $200 billion by 2034.
A revealing fact: 80% of new databases in Neon (a database platform) are automatically created by AI agents, not by humans. Agents are already acquiring resources programmatically.
What does this mean for the data market? That the future is not humans buying data to train AI. It's AI buying data to train itself.
A data marketplace prepared for this reality would not have an interface designed for humans browsing catalogs. It would have APIs and protocols for agents to autonomously discover, negotiate, and acquire training data. The human would provide the data. The agent would buy it. The infrastructure would facilitate the transaction.
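A minimal sketch of what such an agent-facing interface might look like, with invented names (`DataMarketplace`, `discover`, `negotiate`, `acquire`) and a deliberately naive accept-at-ask negotiation rule:

```python
from dataclasses import dataclass

@dataclass
class Listing:
    listing_id: str
    domain: str            # e.g. "sales-negotiation"
    samples: int           # number of verified human records offered
    ask_per_sample: float  # seller's asking price per record

class DataMarketplace:
    """Agent-facing marketplace: discover, negotiate, acquire. No catalog UI."""

    def __init__(self):
        self._listings = {}

    def publish(self, listing: Listing) -> None:
        self._listings[listing.listing_id] = listing

    def discover(self, domain: str) -> list:
        """Agents query machine-readable listings instead of browsing pages."""
        return [l for l in self._listings.values() if l.domain == domain]

    def negotiate(self, listing_id: str, bid_per_sample: float) -> bool:
        """Naive rule: accept any bid at or above the ask.
        A real protocol would support counter-offers."""
        return bid_per_sample >= self._listings[listing_id].ask_per_sample

    def acquire(self, listing_id: str, bid_per_sample: float) -> float:
        """Settle the trade; return the total amount owed to the seller."""
        if not self.negotiate(listing_id, bid_per_sample):
            raise ValueError("bid below ask")
        listing = self._listings.pop(listing_id)
        return listing.samples * bid_per_sample
```

A real protocol would add authentication, counter-offers, and settlement, but the shape stays the same: machine-readable discovery and programmatic purchase, with no human catalog in the loop.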
Whoever builds this infrastructure first will have a structural advantage that is difficult to replicate.
The New Social Contract
Inevitably, we return to the same old point.
In "The Art of Asking," we argued that human judgment (the ability to navigate ambiguity, read context, and decide without a manual) is the skill that machines cannot replicate. In "The Snake That Eats Itself," we showed that this skill is not only valuable in itself but is the essential ingredient for machines to continue improving.
Now the question is: who captures that value?
The current model is extractive. Data labeling platforms pay by the hour, not by value created. Humans are interchangeable commodities. Their data is sold without their knowledge, consent, or participation in the profits. It's the attention economy applied to cognitive labor: your judgment has value, but someone else keeps that value.
But it doesn't have to be this way.
Imagine a model where the humans who contribute the data that trains AI also share in the value it creates.
It's a new social contract. Humans train AI. AI helps train humans. Learners, teachers, and AI labs share the pie.
The bridge is not just between two industries. It's between two visions of the future: one where humans are disposable inputs, and another where they are indispensable partners.
The Scarce Resource
Not long ago, the scarce resource was information. Then it was attention. Now it's human verifiability.
Verified human-generated content is becoming the new low-background steel: steel produced before 1945, before nuclear testing contaminated the atmosphere. That steel is essential for manufacturing high-precision sensors, and it trades at premium prices because no more can be made.
Similarly, high-quality, verified, structured human data is becoming the most valuable resource in the artificial intelligence economy. And unlike low-background steel, we can produce more of it, if we build the right infrastructure.
The bridge is waiting. The question is who crosses it first.
Join the Conversation
We're just getting started on this journey. If you're interested in the intersection of human quality data and AI, we'd love to hear from you.