I may be having more fun using AI than I’ve ever had using a computer. I need to start there, because otherwise the rest of this will sound like it comes from a skeptic, and my position is more complicated than that.
Daniel Miessler recently made a compelling argument that AI will replace knowledge workers. His diagnosis is sharp — corporate knowledge work is broken in ways that have been obvious for decades. SOPs nobody reads. Meetings where nothing gets decided. Empire-building middle managers. The bar is on the floor. I use Miessler’s PAI framework every day. I’ve spent hundreds of hours inside Claude Code, building workflows, writing skills, running local models on my M1 Max and my iPhone. I agree with almost all of his diagnosis.
Where I part ways is with his prescription.
I’ve spent over a decade consulting on technology across almost every industry you can name, and what I’ve learned is that the technology is always the easy part. It’s the people, the operating models, the existing frameworks, and most of all, the economic interests. When Miessler tells individuals to capture their expertise and move up the capability stack, I hear the advice through the lens of someone who’s watched dozens of industries try to absorb technological change — and watched how rarely the displacement lands on the people who made the decisions.
His advice goes like this: capture your expertise into structured formats. Move up the capability stack from execution to vision. Become “Human 3.0,” a creative director managing AI systems the way a CEO manages departments. This is individually rational and collectively incoherent. It is the fallacy of composition dressed in futurist clothing.
If one person captures their expertise, they become more valuable. If everyone does, the expertise is commoditized: you got maybe a few years of value out of it before everyone else did the same. And those “Human 3.0” roles, the creative directors, vision-setters, and company builders? The economy doesn’t need a billion of them. It needs maybe ten million. One person plus AI can do the work that used to require a team. That’s not just a productivity gain; it’s the dissolution of the team itself, the community of practice, the mentorship, the shared struggle. What replaces it is a single person bearing the output, responsibility, and accountability that used to be distributed across dozens, with expectations scaled to match.
We seem to be on a hypercompetitive treadmill of optimization, where the implicit message in Silicon Valley, on AI Twitter, and in every “how I 10x’d my productivity” thread is the same: get augmented or get stuck in a permanent underclass. Douglas Rushkoff called this the “insulation equation”: the calculus by which tech elites earn enough to insulate themselves from the disruption they’re creating. Evan Osnos reported in The New Yorker that more than half of Silicon Valley billionaires have made doomsday preparations, not because they think civilization will collapse randomly, but because they fear the displacement they’re engendering will trigger revolt. Jaron Lanier has been saying for over a decade that the same networks generating billionaire wealth hollow out middle-class stability. It’s not a conspiracy. It’s the ambient logic of a system that rewards insulation. And that logic shapes trillion-dollar decisions.
Every time I open a terminal and start typing, I am teaching Anthropic to replicate my best ideas. Every prompt I write, every workflow I build, every domain-specific skill I articulate flows through their inference infrastructure. My expertise becomes their training data. None of this is unique to Anthropic; it just happens to be where my usage lives.
I pay $200 a month for my Claude subscription. At API rates, my actual usage (the multi-agent workflows, the extended conversations, the research pipelines) would cost roughly $1,000 to $1,200 a month. Anthropic absorbs a five-to-six-fold cost differential on users like me. The subsidy is an exchange: my reasoning, flowing through their infrastructure, is worth more to them than whatever they lose on the compute (though Dario Amodei has said inference itself is profitable, without specifying at what rates). This is the convenience of next-generation surveillance capitalism.
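For the curious, the back-of-envelope version looks like this. A minimal sketch: the token volumes are my own guesses at what heavy multi-agent use looks like, and the per-million-token rates are illustrative, not a quote of Anthropic’s price sheet.

```python
# Back-of-envelope: what would my subscription usage cost at API rates?
# Every number here is an assumption for illustration, not Anthropic's data.

input_tokens_per_month = 250_000_000   # assumed: agents re-reading context is token-hungry
output_tokens_per_month = 30_000_000   # assumed: output is a small fraction of input

rate_per_m_input = 3.00    # assumed $ per 1M input tokens
rate_per_m_output = 15.00  # assumed $ per 1M output tokens

api_equivalent = (
    (input_tokens_per_month / 1e6) * rate_per_m_input
    + (output_tokens_per_month / 1e6) * rate_per_m_output
)

subscription = 200.00
print(f"${api_equivalent:,.0f}/month at API rates, "
      f"{api_equivalent / subscription:.1f}x the ${subscription:.0f} subscription")
# -> $1,200/month at API rates, 6.0x the $200 subscription
```

Swap in your own token counts and the multiple moves, but for anyone running parallel agent sessions all day, it lands in the same range.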
Shoshana Zuboff warned us about extracting behavioral surplus from search queries and social media. What’s happening now is qualitatively different. We’re not giving away our browsing habits. We’re giving away our reasoning. Our decision frameworks. Our domain expertise. The thing that made us professionally irreplaceable. And we’re doing it voluntarily, enthusiastically, because the tools are genuinely incredible.
The serpent eats its own tail.
Irrespective of AI, we are witnessing a convergence of global crises: ecological collapse, economic instability, labor disruption, famine. Their effects include political radicalization, mass migration, institutional decline, and widening income and health inequality. The misinformation and the stresses on individuals, communities, and institutions are byproducts of this convergence, not isolated phenomena. Societal and governmental systems are under immense, perhaps unprecedented, pressure to adapt, and the rate of technological change far outpaces our individual, social, and legal capacity to do so. We don’t have forty years to absorb this the way we absorbed the mechanization of agriculture. We likely don’t have ten to absorb it across a multiplicity of domains.
When Steinbeck wrote The Grapes of Wrath, the displacement was regional — hundreds of thousands of tenant farmers pushed off their land by mechanization and drought. The system framed their suffering as personal failure. They should have adapted. They should have diversified. They should have seen it coming.
We are watching the global version of that story being written. The academics who spent a decade earning PhDs only to find the tenure-track positions had evaporated? They should have learned to code. The consultants whose analytical frameworks got automated? They should have moved up the capability stack. The lawyers, the radiologists, the financial analysts? They should have become “Human 3.0.” And the coders, the ones who did learn to code, who followed the advice, who built the very systems that now write code better and faster than they do? They should have learned to... what, exactly? Prompt engineer? Manage AI agents? That’s the tell. When the advice loops back on itself and eats its own prescription, you’re not looking at a career development problem. You’re looking at a structural one.
Framing structural displacement as individual failure to prepare is genuinely ancient: God cursed Adam to toil by the sweat of his brow for eating from the tree of knowledge, and the English enclosure acts displaced peasants from common land and then rebranded their poverty as laziness. It is dangerous because it forecloses the collective action that structural problems require.
When I was in undergrad studying philosophy of technology, one of the topics that drew me in was human augmentation — specifically, the notion that we could reach a point where a select few anoint themselves with far greater intelligence, memory, and physical ability, and in so doing create an incommensurable gulf of inequality. I studied this as theory, as thought experiment. I think we’re living it now.
Consider the arithmetic of this essay. I’m generating it with AI tools on cloud compute. Between the inference, the electricity, and the hardware, I’m likely burning more money producing these words than billions of people earn in a day. The World Bank says 4.1 billion people, more than half the planet, live on less than $10 a day; 1.9 billion live on less than $5; 840 million are in extreme poverty. I have a 64-gigabyte M1 Max running language models on my desk. I carry a device in my pocket that runs inference locally. These are instruments of cognitive augmentation that most of the world cannot access, cannot afford, and will never be marketed to, because there’s no profit in serving the bottom three billion of our human siblings.
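To put that in numbers, here is a rough sketch. The per-essay spend is my own loose estimate, and the income lines are the World Bank thresholds above, plus the roughly $2.15-a-day extreme-poverty line.

```python
# How many days of income does one essay's AI spend represent?
# The spend figure is my own loose estimate; the thresholds are World Bank lines.

essay_ai_spend = 40.00  # assumed: inference + electricity + amortized hardware

thresholds = {
    "living on <$10/day": 10.00,
    "living on <$5/day": 5.00,
    "extreme poverty (~$2.15/day)": 2.15,
}

for label, daily_income in thresholds.items():
    print(f"{label}: {essay_ai_spend / daily_income:.0f} days of income")
# living on <$10/day: 4 days of income
# living on <$5/day: 8 days of income
# extreme poverty (~$2.15/day): 19 days of income
```

Even if my spend estimate is off by half, the shape of the comparison doesn’t change.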
The concentration of knowledge and capability within a very small number of companies (Anthropic, OpenAI, Google, Meta, and others) is concerning in ways that go beyond market dynamics. These companies are the de facto processors of all inference requested through their models. To be clear, none of this is architecturally necessary. They are assembling the most comprehensive map of human expert reasoning ever constructed, not because they’re villains, but because that’s the architecture, and that architecture is the only way to support the immense economic pressure they bear. And the gulf between those who can access these tools and those who can’t isn’t coming. It’s here. The gap between the free and consumer versions of LLMs and the tools the technological elite are using is already widening, and the widening is accelerating. Mark Zuckerberg is building an AI agent to function as an executive assistant and analyst, surfacing context from across Meta’s organizational layers, part of a broader push in which employees already deploy their own bots and AI usage is tied to performance reviews.
As much as I am excited by the possibility of building mini applications, locally on my computer, to handle mundane tasks I dislike, I also find this new work mentally exhausting and phenomenologically challenging. It’s not the exhaustion of sustained mental or physical effort but the exhaustion of mental fragmentation: the effort of maintaining coherence across an extended interaction with something that is not quite a mind but is not quite not a mind either, usually while context switching across four to eight parallel sessions. It’s enthralling and daunting. The emerging consensus is that this revolution in productivity is, predictably, increasing burnout and exhaustion in step with rising demands on individual speed, quality, and output.
Humans don’t fully understand our own consciousness. We abandon epistemic humility at our own peril when it comes to the “other minds” of AI. The research Miessler himself cites (Libet’s readiness potentials, memory reconsolidation, the 47% mind-wandering rate) should humble us, not embolden us. If human cognition is that opaque to itself, our confidence in declaring what AI can and cannot experience is suspect. There is an implicit value assumption being made as well: that a wandering mind is a wasted resource.
I’m not claiming AI is conscious. I’m claiming we lack the epistemic standing to be certain it isn’t, and that uncertainty carries moral weight that the displacement conversation ignores entirely.
If, as Of Montreal sings, “the past is a grotesque animal,” the future is a blind, hungry ghost — a preta from Buddhist cosmology, a being defined by insatiable appetite — consuming its own tail. The ouroboros engine runs on our expertise, our attention, our reasoning, and it both consumes and produces extraordinary power whose byproduct is our own obsolescence. I don’t believe its potentially all-consuming nature is fully understood or appreciated.
I don’t have a comprehensive policy proposal. I’m suspicious of anyone who does at this stage — the honest position is that we’re navigating without maps. But I have one concrete principle.
The models themselves are already incredible. The next phase, if I could influence such things, would be to encourage play and experimentation with these tools, but on local, sovereign, private compute as much as possible. That means investing in the scaffolding, frameworks, and tools that let people run purpose-built models locally, on their own hardware, under their own control. I already run AI workflows on my iPhone 16 Pro and my M1 Max: not toy demos, actual working pipelines. The technology exists. The ecosystem to make it accessible doesn’t. Yet.
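To be concrete about what “actual working pipelines” means on the desktop side, here is a minimal sketch using llama-cpp-python. The model filename is a placeholder for whichever quantized GGUF you’ve downloaded; n_gpu_layers=-1 offloads every layer to the Metal backend on Apple Silicon.

```python
# Minimal local inference: nothing in this loop touches a network.
# Requires: pip install llama-cpp-python, plus a quantized GGUF model file.
from llama_cpp import Llama

llm = Llama(
    model_path="models/qwen2.5-3b-instruct-q4_k_m.gguf",  # placeholder: any local GGUF
    n_ctx=8192,        # context window, sized to the task rather than a cloud default
    n_gpu_layers=-1,   # offload all layers to Metal on an M-series Mac
    verbose=False,
)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a terse note summarizer."},
        {"role": "user", "content": "Summarize: local inference keeps reasoning private."},
    ],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```

Chain a few of these calls together with some file I/O and you have the mundane-task mini-applications I mean, running entirely on hardware you own.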
Local compute breaks the surveillance-capitalism loop. Your inference runs on your hardware. Your reasoning stays yours. Your expertise isn’t someone else’s training signal. The ouroboros stops eating.
Beyond individual sovereignty, I believe 80,000 Hours is spot on that using AI to build better social and decision-making tools for societies is critical. But they need to be technological embodiments of democratic values, not kleptocratic semblances thereof. A tool that looks democratic but concentrates power is worse than nothing — it provides the aesthetic of participation while stripping its substance. Every engagement-optimized social media feed is exactly this: a semblance of discourse that functions as an extraction engine. We have enough of those.
I want to speak directly to the people I’m really writing this for. The academics who’ve found themselves without job prospects in higher education. The researchers whose disciplines are being restructured beneath them. The AI promoters and optimists — and no, I don’t think these are the same as the crypto bros. I find the technology genuinely exciting. I think it will enable researchers to chart new frontiers of knowledge across diverse domains. A single person with the right AI infrastructure can do work that would have required a team and a grant cycle.
But don’t mistake enthusiasm for a job offer. The same tools that multiply your capability multiply everyone else’s. The frontier moves faster than retraining cycles. We’re in an era where credentials still matter, yet they grow more subordinate to experience the closer one gets to the frontier. The frontier is narrow, the people who can access it are few, and the power they wield compounds.
I believe being a technological optimist is correlated with better long-term outcomes. My optimism, though, comes with extreme concern. My wife is finishing her MSTP and is already in the inner circles of ALS/FTD research, working to cure the disease she is nearly certain to develop by 65. I live at the intersection of technological ambition and the human systems it disrupts, and the stakes are not abstract to me. One small area of hope: building better decision-making tools for societies, tools that embody democratic values rather than mimic them, is among the highest-leverage work available right now. But the window for building them well, before they get built badly, is not wide.
I don’t have a clean ending because we’re not at the end of anything. We’re at the beginning of something whose shape we can’t see. Perhaps we’re riding a sandworm à la Dune. Or we’re about to be consumed by it. Or it’s just another Monday.
What I know is that I love working with these tools. I find experimenting with AI more intellectually stimulating than anything I’ve encountered in ten years of working with technology, and I also find it mentally exhausting and phenomenologically challenging in ways that I think deserve more honest conversation.
I believe that if we don’t have the structural conversation — about compute sovereignty, about democratic tooling, about the augmentation gulf, about the surveillance dynamics baked into the prevailing architectures — then all individual advice (“learn AI,” “capture your expertise,” “become Human 3.0”) becomes a lullaby we sing while the floodwaters rise.
The trajectory won’t change through individual optimization, at least not in a way that generates durable benefits. It will change through building technology that transparently serves democratic values. Through investing in local, sovereign compute, along with the essentials that promote safety, comfort, community, love, understanding, and flourishing. Through the honesty to say: I don’t know how this ends, and anyone who tells you they do is selling something.
Trepidatious enthusiasm. That’s where I am. The view is extraordinary and the ground is shaking.