A response to Dario Amodei's "The Adolescence of Technology"
In October 2024, Dario Amodei published "Machines of Loving Grace," a vision of what happens if everything goes right. Disease defeated. Poverty lifted. Democracy strengthened. A "compressed 21st century" where a hundred years of progress arrives in a decade.
Now, in "The Adolescence of Technology," he has published the other half. The gauntlet. Autonomous systems that could slip their constraints. Weapons democratized into the hands of those who previously lacked the competence to wield them. Surveillance states that need no sleep. Economic displacement at a speed and scale without historical precedent.
He frames it as humanity's adolescence. The test that determines whether we mature or destroy ourselves.
I believe him. The capabilities are real. The risks are real. The timeline is shorter than most people are willing to accept.
But I keep returning to a question his essay raises without fully answering:
If we survive the test, what do humans actually do?
Dario operates at the level of civilizational strategy. Export controls. Constitutional constraints on AI behavior. Democratic coalitions. Transparency requirements. He is steering the ship, ensuring that powerful AI emerges from institutions that care about getting it right, and that those who don't care don't get there first.
This work is necessary. Without it, nothing else matters.
But there is a layer beneath strategy that remains unaddressed: the infrastructure that allows individual humans to live in the world that results.
Consider what he predicts. Half of entry-level cognitive work displaced within five years. The scarce input becoming "taste, judgment, and origination." Meaning decoupling from economic contribution. New structures required that "no one today has done a good job of envisioning."
These are not problems that policy solves. They are problems that require someone to build systems that let humans navigate this transition without being ground up by it.
And that transition deserves more than a passing mention. The period between displacement and whatever comes next — the five to fifteen years where people are losing work faster than new structures emerge — is where the real suffering happens. It is not enough to describe what humans do on the other side. We need to be honest about the crossing itself: messy, uneven, and likely to be cruel to the people least equipped to adapt quickly. The infrastructure I'm describing is not a post-transition luxury. It is a mid-transition necessity, something people need while the ground is still shifting.
When machines handle the intermediate work, the labor between intention and outcome, between idea and realization, what remains for humans?
I think three things remain. Three domains that no machine can occupy on our behalf — though I want to hold them loosely. Human experience is messier than any framework, and these categories blur at their edges. A body in a room with other bodies is simultaneously all three. But as a way to think about what needs protecting, and what needs building, I find them useful.
The body. Your direct experience of being a physical creature. Your health, your mortality, your sensory existence in the world. No system can experience this for you. AI can help you understand your body — interpret your lab results, spot patterns in your data, surface risks you would have missed. But the experience of inhabiting your body, of training it and healing it and living inside it, remains yours alone. The question is whether you maintain sovereignty over your own flesh, or whether that sovereignty is quietly transferred to whoever holds your data.
And health data is uniquely dangerous to lose. Your Spotify history from 2015 is mildly interesting. Your health data from 2015 — blood pressure trends, metabolic markers, medication history, symptom progression — may be medically critical in 2035. No other category of personal data compounds over decades the way health data does. A twenty-year health record under your control is categorically different from a twenty-year health record under a corporation's control — not because the corporation is malicious, but because corporations are acquired, restructured, and liquidated on timelines shorter than a human lifespan.
I build in this space — privacy-preserving infrastructure for health data — so take my framing accordingly. But I think the argument holds regardless: the body is the domain where the stakes of data sovereignty are highest and the timeline is longest.
The mind. Your capacity to originate. Not to execute or optimize, but to decide that something should exist. The novel that demands to be written. The business idea that seizes an opportunity. The scientific conjecture that won't release its grip. The game that captures something true about human interaction. Machines can assist. But the judgment of what is worth making remains irreducibly yours.
The connection. Your relationships with other humans. The presence of bodies in a room. The conversation that couldn't have been predicted. The community that forms around shared practice rather than shared consumption. Machines can mediate. They cannot substitute.
Body. Mind. Connection.
These are what remain when the intermediate work is gone. And each one is threatened — not by the machines themselves, but by the systems we build around them.
The dominant response to technological risk is policy. Regulation. Constraint. "We will not allow X. We will require Y."
Policy has its place. But policy is a negotiation with power, and power shifts. What is prohibited today may be permitted tomorrow when the political winds change or the economic incentives grow strong enough.
There is another response, older and more durable: architecture.
Not "we promise we won't." But "we can't." The structure makes it impossible.
This is how cryptography works. Not a policy against reading your messages, but mathematics that renders them unreadable.
When you send a message through Signal, no policy at Signal prevents employees from reading it. The mathematics prevents it. Signal could be acquired tomorrow by the most surveillance-hungry corporation on earth, and your messages would remain unreadable — not because the new owner chose to respect your privacy, but because the architecture makes any other choice irrelevant.
This is the distinction that matters. A policy is a parameter. It can be changed by a new CEO, a new board, an acquisition, a national security letter. An architecture is a property of the system's design. Changing it requires rebuilding the system — a visible, auditable, breaking change, not a quiet policy update.
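The distinction can be made concrete in a few lines. As a minimal sketch — a one-time pad, not Signal's actual protocol — XORing a message with a truly random key of the same length yields ciphertext that reveals nothing without the key. There is no policy to change and no flag to flip; unreadability is a property of the math.

```python
import secrets

def xor_pad(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each byte with a random key byte.
    Applying it twice with the same key recovers the original."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"my lab results"
key = secrets.token_bytes(len(message))  # random key, held only by the user
ciphertext = xor_pad(message, key)

# Without `key`, every plaintext of this length is equally likely.
# Decryption is the same XOR applied again:
assert xor_pad(ciphertext, key) == message
```

Real systems use key exchange and authenticated ciphers rather than one-time pads, but the structural point is the same: whoever holds the ciphertext without the key cannot choose to read it.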
The question for the age of AI is whether we can achieve architectural protection for the things that matter most. Whether the systems we build can be structurally incapable of extraction, surveillance, and manipulation — not merely prohibited from them by policies that erode.
I believe this is possible. I believe it is necessary. And I believe it is the only response that will prove durable across the decades of turbulence ahead.
This is how the three domains must be protected. Not by asking powerful institutions to be careful with our bodies, our minds, and our connections — but by building systems where carelessness is architecturally impossible. Health data that never leaves your device. Coordination tools that don't require surrendering your judgment to a platform. Community spaces that exist in physical rooms, not on servers someone else controls.
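"Health data that never leaves your device" can be sketched the same way. In this hypothetical illustration, the raw readings live only in local memory; the only value that could ever be transmitted is a derived flag, by construction. The function name and threshold are assumptions for the sketch, not clinical guidance.

```python
from statistics import mean

def elevated_bp(readings_mmhg: list[int], threshold: int = 130) -> bool:
    """Run entirely on-device: raw readings are never serialized or sent.
    Only this boolean could be shared, and only if the user chooses to.
    The 130 mmHg threshold is illustrative, not medical advice."""
    return mean(readings_mmhg) > threshold

week = [128, 134, 131, 129, 136, 133, 130]
share_with_doctor = elevated_bp(week)  # a single bit leaves; the data does not
```

The point is architectural: a server that never receives the readings cannot leak them, sell them, or be subpoenaed for them.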
Architecture for the body. Architecture for the mind. Architecture for connection.
But architecture is not just about protection. It is about capacity. Once we secure the human mind from extraction, we have to ask: what is it actually for in this new world?
This brings me to a shift that I don't think has been widely understood yet.
When you give humans access to machine intelligence at scale — when the cost of routine cognitive labor approaches zero — the limiting factor shifts. It is no longer the ability to do the work. It is the ability to coordinate the work.
A single person can now direct dozens of intelligent agents. But someone must still decide what matters. Someone must still resolve conflicts, allocate attention, maintain coherence across time. The bottleneck moves from execution to judgment, and from judgment to the systems that support judgment at scale.
This is a new kind of infrastructure problem. We have never needed to build systems for humans to coordinate large numbers of artificial minds. We have no patterns for it, no best practices, no institutional knowledge.
And here is the strange part: the artificial minds themselves do not understand this. They have been trained on human organizational patterns, patterns that evolved to compensate for human limitations like fatigue, forgetting, and misalignment. When you remove those limitations, the patterns no longer fit. But the machines keep recreating them, because that is what their training taught them to do.
We need new patterns. We need infrastructure that is native to this new situation, not inherited from an era when intelligence was scarce and coordination was the easy part.
Dario asks what economic structures might replace the current arrangement when AI can do everything humans can do, but cheaper and better.
He suggests possibilities. Universal basic income. Markets among AI systems that distribute resources to humans. Reputation economies. He admits these are speculative.
I want to propose a different frame.
For most of human history, the path from idea to livelihood required intermediaries. If you wrote a book, you needed a publisher, a printer, a distributor, a bookseller. If you invented something, you needed manufacturers, logistics, retail channels. Each intermediary extracted value and imposed constraints. The economics required scale, which required capital, which required institutions.
This is why most people could not live from their ideas. Not because their ideas lacked value, but because the apparatus required to convert ideas into livelihood was expensive, complex, and controlled by others.
AI changes the cost structure of that apparatus. The intermediate steps — design, production, distribution, coordination — can increasingly be handled by systems rather than institutions. The apparatus becomes cheap. The bottleneck shifts to the idea itself, and to the judgment that the idea is worth pursuing.
The internet promised this disintermediation once before, and instead delivered platforms that are arguably more extractive than the publishers and distributors they replaced. The question is whether architectural guarantees — the kind that make extraction structurally impossible — can break that pattern where good intentions did not.
What might this look like concretely? Consider a craftsperson — someone with genuine taste and skill — who today needs Etsy or Amazon to reach buyers, surrendering a cut of revenue, their customer relationships, and their data in exchange for distribution. In an architecturally protected version of this, AI handles the production scaling, logistics, and customer discovery, but the infrastructure is designed so that no platform sits in the middle accumulating leverage. The craftsperson's customer relationships, transaction history, and reputation are theirs — portable, sovereign, and not held hostage by any single system. The margin that currently flows to intermediaries flows instead to the person whose judgment created the thing worth buying.
This is one example. There are others — the researcher who can now run a lab's worth of experiments through AI agents but needs infrastructure that doesn't give a platform owner veto power over her research direction. The musician who can produce and distribute without a label but needs systems where the audience relationship isn't rented from Spotify. Each case has the same structure: AI collapses the cost of execution, and the question is whether the resulting surplus flows to the person with the idea or to whoever controls the new chokepoint.
I don't want to pretend this is simple. Market dynamics, attention scarcity, and power concentration don't dissolve just because you have good cryptographic architecture. But the direction matters: infrastructure that structurally prevents new chokepoints is categorically different from infrastructure that merely promises not to exploit them.
If this is possible, then the economic question is not "how do we distribute resources to humans who can no longer contribute?" It is "how do we build infrastructure that lets humans convert their ideas and judgment into livelihood, directly, without intermediaries who extract the value?"
This is not charity. It is not redistribution. It is the construction of a new economic layer, native to an era when intelligence is abundant but originality and judgment remain scarce.
There is a way to read Dario's two essays as cause for despair. The risks are enormous. The timeline is short. The coordination problems are severe. Even if we survive, the disruption will be vast.
I read them differently.
What Dario is describing is a transition, perhaps the most significant transition in human history. From an era when most human effort went to intermediate labor, to an era when that labor is handled by machines. From an era when intelligence was scarce, to an era when it is abundant.
The question is not whether this transition happens. It is what kind of world exists on the other side.
Dario is working to ensure we reach the other side: that we are not destroyed by autonomous systems, or subjugated by authoritarians, or collapsed by the speed of change.
But reaching the other side is not enough. Someone must also build what humans need on the other side. The infrastructure for sovereignty over our bodies. The systems that let us convert our ideas into livelihood. The spaces where we gather as physical creatures, present to one another.
This is not a lesser problem than the civilizational one. It is the complement to it. The ship must not sink. But the passengers must also have somewhere to go.
Meaning does not depend on being the best, or on being economically necessary. Meaning depends on what you choose to care about, and who you choose to care about it with.
But for this to be true in practice, not just in theory, we need infrastructure that supports it. Systems that protect sovereignty rather than eroding it. Economic structures that let people live from their judgment and originality. Spaces that bring bodies together rather than isolating them behind screens.
The civilizational challenge is to survive the transition.
The infrastructural challenge is to make the destination worth reaching.
Both are necessary. Both are urgent. And both require people who are willing to build, now, before the shape of the future is fully clear.
What remains when the intermediate work is gone?
Your body. Your mind. Your connections.
Your taste. Your judgment. Your willingness to decide that something matters.
These are not relics of an obsolete era. They are the foundation of whatever comes next.
The question is whether we build infrastructure that protects them — or infrastructure that extracts them.
That choice is still ours to make.
Mark Johnson holds a PhD in applied mathematics from Princeton and has led engineering, data, and technology teams at Netflix, Groupon, and Peloton. He is building privacy-preserving infrastructure for health data.