Book Analysis

You’re buying into an empire. Know whose.

Karen Hao spent seven years and drew on 260 interviews to map how OpenAI’s mission to benefit humanity became the most effective consolidation playbook in modern tech. Here’s what it means for anyone buying into the AI stack.

$852B
OpenAI valuation, April 2026
$38M
Musk’s original nonprofit donation
~$2/hr
Pay for Kenyan data workers who built ChatGPT’s safety filter
00 — The Thesis

This is not a technology story. It is a resource consolidation story.

Karen Hao’s Empire of AI argues that OpenAI followed a pattern much older than software. The book draws a direct line between the extractive logic of colonial empires and the way frontier AI companies accumulate chips, data, labor, energy, and water. The comparison is not metaphorical. Hao traces how the same countries once stripped of raw materials by European powers are now supplying the AI industry’s inputs under eerily similar terms.

a
The mission started as idealism
OpenAI launched in 2015 as a nonprofit. Musk, Altman, and Brockman wanted a counterweight to Google’s growing dominance in AI. The plan: open-source research, no profit motive, safety as the core commitment.
b
The mission became a moat
Within four years, OpenAI created a capped-profit subsidiary, took billions from Microsoft, and stopped publishing research. Each pivot was justified by the same mission statement. That flexibility is the formula’s most potent ingredient.
c
The mission now justifies an empire
By 2026, OpenAI is valued at $852 billion, is preparing for a trillion-dollar IPO, and projects $280 billion in revenue by 2030. The nonprofit shell still exists. The original safety researchers mostly don’t.
01 — The Resource Stack

What frontier AI actually requires. The bill comes due somewhere.

Hao’s central reporting contribution is showing where the costs land. Not in San Francisco. Not in Redmond. In the Atacama Desert, in Nairobi’s outskirts, in data centers drawing water from drought-stricken regions. The AI stack has physical inputs that most buyers never think about.

Compute
18,000
Nvidia A100 GPUs in OpenAI's 2021 Microsoft supercomputer. A single training run now costs hundreds of millions.
Data
Trillions
of tokens scraped from the open web. Copyright lawsuits are mounting. The shift: stop filtering inputs, control outputs.
Labor
$230K
Total contract value for Sama workers in Kenya who reviewed hundreds of thousands of violent and sexual images to build ChatGPT's safety filter.
Energy
$4B/mo
OpenAI's current operational burn rate. Data centers consume power at industrial scale. Chile's hydroelectric grid is already strained.
Water
Atacama
The driest nonpolar desert on Earth, now host to data center cooling infrastructure. Indigenous communities are watching their water tables drop.
02 — The Labor Chain

Someone has to look at the worst of the internet so the model doesn’t repeat it.

Hao’s reporting on the data labor supply chain is the book’s most visceral material. In 2021, OpenAI contracted with Sama, an outsourcing firm in Kenya, to build the content-moderation filter that makes ChatGPT safe for consumer use. Workers reviewed and categorized hundreds of thousands of examples of sexual abuse, violence, and child exploitation. The total contract value: $230,000.

a
The paradigm shift nobody talks about
Early AI models tried to filter bad data before training. That failed at scale. The new approach: train on everything, then pay humans to clean up what comes out. The shift moved the psychological burden from engineers to some of the lowest-paid workers in the global labor market.
b
Why Kenya
It shares a common profile with other countries where Silicon Valley outsources its dirtiest work: poor, in the Global South, with a government hungry for foreign investment, weak labor protections, and a legacy of colonialism that created exactly the conditions that make exploitation viable. The workers in Utawala and Dagoretti South earn roughly $2 per hour.
c
The abstraction layer
When you hear “RLHF” (reinforcement learning from human feedback) at a conference, this is the human part. The language of machine learning has a way of making the human inputs invisible. That invisibility is not an accident.
03 — The Mission Formula

Three ingredients. One playbook for consolidating power under the banner of public benefit.

In Chapter 18, Hao distills seven years of reporting into a structural argument. OpenAI’s mission is not just marketing. It is a mechanism with three interlocking parts that create a self-reinforcing cycle of accumulation. Altman’s own reading habits offer a tell: his favorite book in 2018 was a collection of Napoleon’s quotes on how to consolidate control using revolutionary slogans.

Ingredient 01
Centralize talent around a grand ambition
The AGI mission recruits true believers. Altman himself wrote that the most successful founders create “something closer to a religion.” The mission functions as a talent magnet that self-selects for people willing to accept unusual tradeoffs in exchange for the feeling of working on the most important problem in history.
Ingredient 02
Centralize capital by invoking necessity and threat
The mission justifies massive resource accumulation because the stakes are framed as existential. Who would refuse to fund something that might cure cancer? Especially when the alternative is an “authoritarian” competitor getting there first. The China framing does heavy lifting here.
Ingredient 03
Keep the mission vague enough to redefine at will
This is the most consequential ingredient. “Benefit all of humanity” can mean open-sourcing research (2015). It can mean walling off the model behind an API (2020). It can mean racing to deploy ChatGPT as fast as possible (2022). It can mean a trillion-dollar IPO (2026). Altman told the New York Times that AGI is “a ridiculous and meaningless term” two days before the board fired him.
04 — The Mission Drift

Watch the words stay the same while everything they mean changes.

2015
Nonprofit, open-source, no financial return
The founding announcement. Musk donates $38 million. The plan is to open-source everything and serve as a counterweight to Google.
Valuation: ~$0 (nonprofit)
2016
Open-source erodes behind closed doors
Sutskever writes to leadership: everyone should benefit from AI, but sharing the science is optional. The public doesn't know yet.
2018
Musk leaves. The for-profit pivot begins.
Plans take shape for a structure that can "marshal substantial resources." The for-profit arm is supposed to stay subordinate to the nonprofit. It won't.
2019
Capped-profit subsidiary created. Microsoft invests $1B. GPT-2 withheld from release.
GPT-2 is called "too dangerous" to publish. The safety argument and the commercial argument begin to overlap. Hao visits OpenAI's offices for the first time.
~$1B Microsoft investment
2020
GPT-3 goes behind an API
Altman calls this "a strategy for openness and benefit sharing." The researchers who will later found Anthropic push back internally on the commercial direction.
2021
Anthropic splits. Kenya data labeling begins.
Internal road map: "build an aligned system vastly more capable than anything before." Sama workers in Nairobi start categorizing violent and sexual content for the safety filter. Contract value: $230,000.
2022
ChatGPT launches. Microsoft invests $10B.
Fastest-growing consumer app in history. Musk texts Altman: "What the hell is going on? This is a bait and switch."
$20B valuation after Microsoft deal
2023
Board fires Altman. He returns in five days.
Safety researchers start leaving. Altman testifies before Congress, recommends a licensing regime that would entrench incumbents. Reconstituted board removes the dissenters.
$90B valuation (pre-firing)
2024
Superalignment dissolves. Musk files suit.
Altman's blog after GPT-4o: "A key part of our mission is to put very capable AI tools in the hands of people for free (or at a great price)." The safety team's co-lead resigns publicly.
$300B valuation
2026
$852B. Trillion-dollar IPO planned. Oakland federal court.
Musk and Altman face each other in Judge Gonzalez Rogers' courtroom. The nonprofit still technically exists. The mission statement has not changed.
$852B valuation (targeting $1T+ IPO)
05 — Chapter 19 (Unwritten)

The two men who started this are now fighting in court over what it became.

Hao’s book ends before the trial. But the lawsuit is the logical conclusion of every tension she documents. Musk v. Altman is not a personality clash. It is the structural contradiction at the center of the AI industry, rendered as litigation.

Musk (Plaintiff)
$134B
Claimed “wrongful gains”
v.
OpenAI (Defendant)
$852B
Current valuation at stake
a
Both sides are telling the truth. Neither is telling the whole truth.
Musk says he donated $38 million to a nonprofit and got an $852 billion for-profit competitor. That’s accurate. OpenAI says Musk wanted to take unilateral control of the company and is now suing because he failed. Their emails support that too. Hao’s reporting provides the connective tissue neither side wants to show the jury.
b
Musk co-founded xAI. The judge noticed.
Judge Gonzalez Rogers instructed parties not to discuss “the future of humanity” during the trial and pointedly noted that Musk is building a competitor in the same space. xAI has merged with SpaceX and is targeting a $1.75 trillion IPO. The man suing over mission drift is running the same playbook with different branding.
c
The real question the trial can't answer
Courts can adjudicate breach of fiduciary duty. They can unwind corporate restructurings. What they can’t do is resolve the underlying problem Hao identified: the AI industry’s structural incentives make this outcome nearly inevitable regardless of who sits in the CEO chair. Swap the names. The dynamic holds.
The Structural Read
Musk admitted on the stand that xAI uses OpenAI’s models to train its own. Both companies are racing toward trillion-dollar valuations. Both invoke safety as a differentiator. The fight in Oakland is not about whether the AI industry should consolidate into a handful of empires. It’s about which empire.
06 — The Structural Question

If you’re buying AI, you’re buying into this supply chain. Know what you’re funding.

Hao’s book is not anti-AI. She profiles a Māori community using AI to revitalize its endangered language, a counterexample of what the technology can do on different terms. The problem is not the technology. The problem is who controls it, what it costs, and who bears those costs. If you’re an executive making AI procurement decisions, three things from this book should change how you evaluate vendors.

a
Ask about the supply chain, not just the API
Where is the training data sourced? What are the labor practices behind RLHF? What’s the environmental footprint of the compute infrastructure? These are not activist questions. They are procurement questions. Your customers will ask them eventually. Better to have answers before they do.
b
Watch the mission statements
Hao shows how OpenAI redefined “benefit all of humanity” roughly every eighteen months for a decade. If your vendor’s stated mission can mean whatever they need it to mean, it means nothing. Look at corporate structure, governance, and incentive alignment instead.
c
Diversify your AI dependencies
The book’s empire thesis has a practical corollary: if one company controls your AI infrastructure, you are a vassal state. Open-source models, multi-vendor strategies, and in-house capabilities are not just technical hedges. They are structural ones. The companies preparing for trillion-dollar IPOs need you locked in. Act accordingly.
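For technical readers, here is a minimal sketch of what that structural hedge can look like in practice. It is not from Hao’s book; the class and vendor names are invented for illustration, and a real deployment would handle authentication, rate limits, and narrower error types.

from abc import ABC, abstractmethod

class TextModel(ABC):
    """Minimal vendor-agnostic interface for text generation."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorAClient(TextModel):
    """Stand-in for a hosted API provider (hypothetical)."""
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] response to: {prompt}"

class LocalOpenWeightsModel(TextModel):
    """Stand-in for an open-weights model served in-house (hypothetical)."""
    def complete(self, prompt: str) -> str:
        return f"[local] response to: {prompt}"

class FailoverRouter(TextModel):
    """Tries providers in order and falls back when one fails."""
    def __init__(self, providers: list[TextModel]):
        self.providers = providers

    def complete(self, prompt: str) -> str:
        last_error = None
        for provider in self.providers:
            try:
                return provider.complete(prompt)
            except Exception as exc:  # production code would catch narrower errors
                last_error = exc
        raise RuntimeError("all providers failed") from last_error

router = FailoverRouter([VendorAClient(), LocalOpenWeightsModel()])
print(router.complete("Summarize our vendor risk policy."))

The point of the pattern is structural: when every call goes through an interface you own, no single vendor’s pricing, terms, or outage can hold your product hostage.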
Source: Karen Hao, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI (Penguin Press, May 2025). Trial reporting via CNN, CNBC, MIT Technology Review.

Want to talk about this?

If something here resonated with a problem you're working on, let's spend 15 minutes on it.

Schedule a Discovery Call