
The Trades We Made

How we traded understanding for convenience

Mark Johnson · March 2026

“The Infrastructure We Need” described what we need to build. This one asks how we arrived at needing it.

In 2024, Microsoft signed a twenty-year contract to restart the nuclear reactor at Three Mile Island. The intention was not to heat homes or power hospitals, but to feed its AI data centers. Shortly after, Amazon tethered a new data center campus directly to another nuclear facility and Google signed its own nuclear deal. The tech industry's appetite for power had outgrown the grid.

The communities living near these facilities noticed. Across the country, towns are fighting back — not against technology, but against the physical reality of what “the cloud” actually requires. More than sixty billion dollars in data center projects have been blocked or delayed. A movement that barely existed two years ago now spans more than twenty states.

We have abstracted computing so far from the metal that we forgot it requires actual fire and water.

That forgetting didn’t happen overnight. It happened the same way most consequential things happen: one reasonable trade at a time.

My stepfather could rebuild a carburetor on a Saturday afternoon. He’d lay out the parts on a shop rag, clean each one, understand how they fit together, and put the engine back in order. He understood his car because it was built from first principles: mechanical parts with physical relationships, cause and effect you could see and touch and get grease on your hands from.

I cannot do anything meaningful under the hood of my car. The machine itself has fundamentally changed — sealed modules, proprietary sensors, software layers that require dealer diagnostics. The modern car is better in almost every measurable way — safer, more efficient, more reliable — but the relationship between the owner and the machine has been entirely severed.

I’ve been thinking about that trade more broadly, not just in automobiles but in the arc of how we built the digital world. Like many of you, I watched this same pattern play out from the inside over the last few decades of the internet, and we are only now beginning to calculate the true cost.


The Open Internet

In the beginning, software came in a shrink-wrapped box. You bought it, installed it, owned it. You clicked a button, and you knew a file was being updated on your hard drive. A game ran by reading a script from a physical disk. If it broke, you could — in principle, and often in practice — understand why.

This physical limitation enforced an engineering discipline that we’ve since abandoned. Companies had to invest heavily in testing. If you shipped a catastrophic bug on a million CD-ROMs, you couldn’t just silently patch it over the weekend. The product had to be finished. The quality had to be high because the medium demanded it.

The early internet changed that physics. The ability to update software over the wire was a miracle of distribution, but it subtly altered our incentive structure. “Ship it now, patch it later” became the industry standard. It allowed us to move incredibly fast, but it also allowed slop to go live. The friction of the physical box was gone, and with it went the strict ceiling on acceptable waste.

The early web shared this new ethos. Open, decentralized, nobody in charge. Usenet groups, bulletin boards, hand-coded web pages. The population was small and mostly well-intentioned. People built protocols and shared knowledge because they believed in the idea that information should be free.

This was beautiful. It was also, in ways we could not yet see, the foundation of everything that went wrong.

The openness that made the early internet generous also made it vulnerable. There was no identity layer. No trust infrastructure. No mechanism for proving who you were, verifying where information came from, or establishing that the person on the other end was real. None of this mattered when the community was small. It would matter enormously when the community became the entire world.


The Trades We Made

The first trade was almost imperceptible. Cookies. Accounts. “Remember me.” Websites began storing your preferences — what you liked, what you bought, what you read — on their servers instead of yours. Your configuration moved off your machine and onto theirs.

We all clicked “I agree” without reading the terms.

It seemed like nothing. It was, in retrospect, the template for everything that followed: surrender something invisible in exchange for something convenient. Each individual trade was locally rational — if you were among the people who got to make it. For those who found themselves inside systems they didn’t design and couldn’t opt out of, the accumulation wasn’t a choice. It was a condition. And in aggregate, it was devastating.

The second trade was industrial. Google offered infinite email storage, for free. Facebook offered connection with everyone you’d ever known, for free. The services were extraordinary — genuinely useful, beautifully designed, and available to anyone with a browser. The only cost was your attention, your preferences, your social graph, and your behavioral patterns, packaged and sold to advertisers.

We weren’t customers anymore. The advertisers were the customers. We were inventory.

And we optimized that inventory with ruthless precision. The tool was simple: the A/B test. Tweak an algorithm, run the numbers, ship what holds attention longer. Multiply that loop by billions of interactions, running thousands of experiments simultaneously, and the math has only one destination. It optimizes for the lowest common denominator of human psychology — not because anyone chose that, but because that’s where the metrics pointed. We automated the extraction of our own autonomy, one statistically significant improvement at a time.
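The mechanics of that loop are worth seeing concretely. Here is a minimal sketch of the decision at the heart of an A/B test — a standard two-proportion z-test on engagement. The function name, the numbers, and the metric are illustrative, not any platform’s actual code:

```python
import math

def ab_test(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test (illustrative): ship variant B only if its
    engagement rate beats variant A with statistical significance."""
    p_a = clicks_a / n_a
    p_b = clicks_b / n_b
    # Pooled rate under the null hypothesis that A and B are identical
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided test at alpha = 0.05: critical value ~ 1.645
    return z > 1.6449

# At a million impressions per arm, a lift no single user would ever
# notice clears the significance bar decisively.
ship_b = ab_test(clicks_a=50_000, n_a=1_000_000,
                 clicks_b=55_000, n_b=1_000_000)
```

The point is the scale, not the statistics. At this volume of traffic, almost any variant that holds attention even slightly longer registers as a significant win, so the loop never lacks a next step to take.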

The third trade was invisible. While we were debating whether Facebook knew too much about us, a shadow economy emerged underneath. Data brokers — Acxiom (now LiveRamp), Oracle Data Cloud, dozens of others — bought, combined, and resold our information in markets we never saw. Companies we’d never heard of assembled profiles more detailed than anything we’d shared with the platforms directly. Your zip code, your birthdate, your purchase history, your browsing patterns — combined across dozens of sources into a portrait that could anticipate your purchasing decisions before you made them.

We’d been clicking “I agree” for so long it had become muscle memory.

And then: the cloud. Everything moved to someone else’s computer — not just our data, but our software, our computing, our business logic. The convenience was real. Access from anywhere. Automatic backups. Infinite scale. Startups could launch without buying a server. The pitch was efficiency: pay for exactly what you use.

It was efficient. It was also the completion of a transfer that had begun with a cookie. Our data, our software, our computing — all now running on machines we didn’t own, under terms we didn’t read, controlled by companies whose incentives were not ours. We had moved from owning the box to renting the abstraction. And we had done it willingly, one click at a time, because each step was genuinely better than the last.

And underneath that transfer was another one — quieter, more technical, but just as consequential.


The Distance from the Machine

There is a parallel history here that most people outside of software never think about, but that shapes the world they live in.

When programmers wrote in C, they still felt the metal underneath. You managed your own memory. You knew where your data lived. The distance between your intention and the machine’s execution was small.

Then the abstractions grew. Each new layer — managed memory, web frameworks, cloud platforms — lowered the barrier to entry and widened the distance from the machine. Millions of new developers. Unprecedented productivity. Each layer was celebrated as progress, and each layer was progress.

But underneath the productivity, something accumulated. Code bloat. Dependency trees hundreds of packages deep. Applications that needed orders of magnitude more hardware to do what a C program did in kilobytes. Frameworks that hid the machine so well that most developers never met it — not because they were lazy or unskilled, but because the abstractions were doing their job, which was to make the machine disappear.

Cloud computing completed the disappearance. “The cloud” is someone else’s computer, but it’s marketed as no computer at all — pure capability, pure scale, pure convenience. And it delivered.

It also made waste invisible.

This is the automobile pattern applied to software. My stepfather’s carburetor was inefficient by modern standards. But he could see the inefficiency, understand it, fix it with his own hands on a Saturday. A modern engine is far more efficient — but when it fails, you can’t diagnose it without proprietary tools. The owner went from mechanic to passenger. The developer went from understanding the machine to trusting the abstraction.

There is a psychological cost to this distance, too. When my stepfather rebuilt his carburetor, his understanding of the machine created a sense of stewardship. You don’t casually discard something you intimately understand and maintain. But as the machine became a sealed black box, it stopped being a relationship and became a pure utility. The tool became a consumable. This is how waste begins: when we lose the ability to understand a system, we lose the desire to steward it. We stop owning and start leasing — whether it’s our cars, our code, or our data. The connection between maker and machine — the intimacy of understanding — is not a luxury the modern world outgrew. It is something essentially human that we traded away.

Nobody designed the waste. It accumulated in the growing distance between the human and the thing being built. Each layer of abstraction made life easier. Each layer also made the consequences harder to see.


The Feast

Running alongside this entire history — from the first BBS to the latest AI model — is a story about generosity and its exploitation.

Wikipedia. Stack Overflow. Reddit. Linux. Python. The open source movement. Usenet archives. Thousands of forums where professionals answered questions for free. The greatest collaborative knowledge projects in human history, built by communities who believed in open access, built for free, with the best of intentions.

A programmer in Kansas answered a Stack Overflow question about database indexing in 2014. She spent twenty minutes explaining it clearly, with examples, because she remembered being confused by it herself and wanted to leave the ladder down for the next person. That answer has been read hundreds of thousands of times. It has taught a generation of developers.

It also became training data for an AI system that now generates database advice faster than she can type.

She wasn’t asked. The terms of service didn’t contemplate machine learning. The Creative Commons licenses weren’t designed for this. She gave her knowledge freely to other humans. It was consumed by machines that now compete with the community that created them.

This is the cruelest irony of the digital age. The people who believed most deeply in open knowledge — who contributed their expertise because they wanted to build a commons — unknowingly prepared the feast for systems that would consume it.

Communiter bona profundere Deum est, wrote Benjamin Franklin when he founded America’s first lending library in 1731 — to pour forth benefits for the common good is divine. Nearly three centuries later, the commons he imagined was harvested by machines that had no concept of the good, the common, or the divine.

Open source developers trained the AI that generates code. Wikipedia editors trained the AI that generates “knowledge.” Reddit posters trained the AI that generates conversation. The community contribution was given freely. It was taken industrially.

And at every stage of this history, the extraction attracted a specific kind of person — one who understood the mechanics of the new thing without understanding its substance. They didn’t build the library. They found faster ways to empty it. The dot-com era had its hucksters. Social media had its growth hackers. Crypto had its scammers and rug-pullers. AI has its wrapper grifters and prompt-engineering influencers promising ten thousand dollars a day. The noise doesn’t just distract. It inoculates people against the real thing — “Blockchain? That’s a scam.” “AI? That’s hype.” — and in doing so, poisons the well for everyone who’s actually building. The nightclub consumes the library, and the library’s own generosity is what made the feast possible.


The Bots

AI systems trained on all of the above — our data, our preferences, our communities’ knowledge — operate at scale in every domain. Bots write articles. Bots answer questions. Bots apply for jobs. Bots generate code. Bots create content that other bots consume.

The waste is now self-generating. This is the “ship it now, patch it later” ethos taken to its absolute, algorithmic extreme. AI agents generate vast quantities of code, documents, and configurations — much of it good, much of it immediately discarded slop. We removed the testing department decades ago, and now we’ve automated the engineers. The models themselves are often built larger than necessary, running on hardware that strains the electrical grid. Communities across the country are rallying to block data center construction in their towns. The physical world is pushing back against the waste that forty years of abstraction made invisible.

But there is something worse than the waste.

Every previous era of the internet had a human somewhere in the chain. Imperfect, often venal, but findable. A company that mishandled your data could be sued. An employee who committed fraud could be fired. A platform that betrayed its users could be boycotted, regulated, shamed.

With AI, the chain doesn’t just fray. It dissolves.

A bot gives your elderly mother wrong medical advice. Who is responsible? The company that trained the model? The one that fine-tuned it? The platform that deployed it? The developers whose Stack Overflow answers became its training data? The question isn’t rhetorical — it’s structural. The accountability chain that made trust possible, even imperfectly, has been replaced by a series of handoffs that ends at no one in particular.

This is new. And it is the thing that makes everything else harder to fix.

And now we discover what we agreed to.


What We Spent

Here is what I think happened, stated plainly.

The open internet started with a reservoir of trust. Not institutional trust — something more organic. The trust of a community small enough to operate on good faith. The trust of people who built things for each other because they believed in the commons. The trust inherent in a system where most participants were human, most contributions were genuine, and most interactions were between people who, at some level, meant what they said.

Each era spent some of that trust. We trusted companies with our data, and they sold it. We trusted platforms with our identities, and they monetized them. We trusted communities to remain human, and they were consumed by machines. We trusted that free services meant generosity, and they meant extraction.

The well isn’t poisoned from the outside. We drained it ourselves, one convenience at a time.

And here is the uncomfortable symmetry: we also spent the proximity that once connected us to our tools. We traded understanding for convenience in software just as surely as we traded privacy for convenience in our data. My stepfather understood his car. I understood my early code. I built some of this. I followed the data too. Our children understand neither their devices nor the systems that run on them, because the systems were designed to make understanding unnecessary.

This is not a story about villains. Most of the people who built each layer — the web developers, the cloud architects, the framework designers, the AI researchers, myself and my colleagues included — were solving real problems and making life genuinely better. That is what makes the arc so difficult to reckon with. Each step was good. The accumulation is the problem.


The Pushback

There is a final chapter beginning, and it is the reason I’m writing this now.

Communities are starting to push back. Not against technology — against the terms.

Reddit is charging for API access, explicitly to protect community-generated content from AI training pipelines. Stack Overflow has restricted AI training use of its content. Artists and writers are asserting copyright claims over training data. Open source projects are reconsidering their licenses, asking for the first time whether “free to use” should mean “free to train a machine that replaces the people who wrote it.” Towns are blocking data center construction, insisting that the waste has physical consequences that their community should not bear.

Individual humans are making smaller, quieter choices. Opting out. Going private. Choosing friction over convenience — reading the terms, withholding the data, using tools that are harder to use but harder to exploit.

This is the community recognizing, slowly and painfully, that openness without boundaries is exploitation by another name. That generosity without structure gets consumed by anyone with the machinery to consume it at scale.

It is also a realization that we have to become the experts again. If we are going to survive an era of automated generation at exponential scale, we have to rebuild the discipline we abandoned when we left the shrink-wrapped box behind. Not by going backward — but by putting ourselves back in the loop. Verifying the provenance. Reading the terms. Understanding, at least in principle, what our tools actually do. The institutions let the testing department go. Now we are the testing department.

The question is not whether to set boundaries. That is already happening. The question is what kind — and it has weight, because the crossing is already underway. The mid-career radiologist whose diagnostic work is being automated faster than her retirement timeline. The tenured journalist whose beat was eliminated the month her last kid started college. They are not on the other side of this transition. They are in it, right now, with no infrastructure built for them yet. The infrastructure this series describes is not a post-transition luxury. It is a mid-transition necessity.

This essay does not end with solutions — deliberately. The diagnosis has to land first, and it has to land hard, before the building makes sense. But if you’ve read the first essay in this series, you already know the direction. The question isn’t whether to build. It’s whether we build in time.

The communities living near Three Mile Island didn’t click “I agree.” They showed up to zoning meetings. They blocked permits. They made the cost visible in a way that forty years of invisible accumulation never could.

The choice of what to build next — whether to keep trading or to start building boundaries — is still ours. The machines are patient, and they never stop clicking “I agree.”


This is the second in a series. The first, “The Infrastructure We Need,” is at maejohns.com. The next essay asks: what was the internet built without — and what happens when we finally build it?

Mark Johnson holds a PhD in applied mathematics from Princeton and has led engineering, data, and technology teams at Netflix, Groupon, and Peloton. He is building privacy-preserving infrastructure for health data.