‘Empire of AI’ author on OpenAI’s cult of AGI and why Sam Altman tried to discredit her book
Karen Hao discusses her new book 'Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI' in an interview with Mashable.


When OpenAI unleashed ChatGPT on the world in November 2022, it lit the fuse that ignited the generative AI era.
But Karen Hao, author of the new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, had already been covering OpenAI for years. The book comes out on May 20, and it reveals surprising new details about the company's culture of secrecy and religious devotion to the promise of AGI, or artificial general intelligence.
Hao profiled the company for MIT Technology Review two years before ChatGPT launched, putting it on the map as a world-changing company. Now, she's giving readers an inside look at pivotal moments in the history of artificial intelligence, including the moment when OpenAI's board forced out CEO and cofounder Sam Altman. (He was later reinstated because of employee backlash.)
Empire of AI dispels any doubt that OpenAI’s belief in ushering in AGI to benefit all of humanity had messianic undertones. One of the many stories from Hao’s book involves Ilya Sutskever, cofounder and former chief scientist, burning an effigy on a team retreat. The wooden effigy "represented a good, aligned AGI that OpenAI had built, only to discover it was actually lying and deceitful. OpenAI's duty, he said, was to destroy it." Sutskever would later do this again at another company retreat, Hao wrote.
And in interviews with OpenAI employees about the potential of AGI, Hao details their "wide-eyed wonder" when "talking about how it would bring utopia. Someone said, 'We're going to reach AGI and then, game over, like, the world will be perfect.' And then speaking to other people, when they were telling me that AGI could destroy humanity, their voices were quivering with that fear."
Hao's seven years of covering AI have culminated in Empire of AI, which details OpenAI's rise to dominance, casting it as a modern-day empire. That Hao's book reminded me of The Anarchy, the account of the OG corporate empire, the East India Company, is no coincidence. Hao reread William Dalrymple's book while writing her own "to remind [herself] of the parallels of a company taking over the world."
This is likely not a characterization that OpenAI wants. In fact, Altman went out of his way to discredit Hao's book on X. "There are some books coming out about OpenAI and me. We only participated in two... No book will get everything right, especially when some people are so intent on twisting things, but these two authors are trying to."
The two authors Altman named are Keach Hagey and Ashlee Vance, both of whom have forthcoming books of their own. The unnamed author was Hao, of course. She said OpenAI promised for months to cooperate with her, but never did.
We get into that drama in the interview below, plus OpenAI's religious fervor for AGI, the harms AI has already inflicted on the Global South, and what else Hao would have included if she'd kept writing the book.
Mashable: I was particularly fascinated by this religious belief or faith that AGI could be achieved, but also without being able to define it. You wrote about Ilya [Sutskever] being seen as a kind of prophet and burning an effigy. Twice. I'd love to hear more of your thoughts on that.
Karen Hao: I'm really glad that you used religious belief to describe that, because I don't remember if I explicitly used that word, but I was really trying to convey it through the description. This was honestly the thing that most surprised me while reporting the book. There is so much religious rhetoric around AGI, you know, ‘AI will kill us’ versus ‘AI will bring us to utopia.’ I thought it was just rhetoric.
When I first started reporting the book, the general narrative among more skeptical people was, 'Oh, of course they're going to say that AI can kill people, or AI will bring utopia, because it creates this image of AI being incredibly powerful, and that's going to help them sell more products.'
What I was surprised by was, no, it's not just that. Maybe there are some people who do just say this as rhetoric, but there are also people who genuinely believe these things.
I spoke to people with wide-eyed wonder when they were talking about how it would bring utopia. Someone said, 'We're going to reach AGI and then, game over, like, the world will be perfect.' And then speaking to other people, when they were telling me that AGI could destroy humanity, their voices were quivering with that fear.
I was really shocked by that level of all-consuming belief that a lot of people within this space start to have, and I think part of it is because they're doing something that is kind of historically unprecedented. The amount of power to influence the world is so profound that I think they start to need religion; some kind of belief system or value system to hold on to. Because you feel so inadequate otherwise, having all that responsibility.
Also, the community is so insular. Because I talked with some people over several years, I noticed that the language they use and how they think about what they're doing fundamentally evolves as they get more and more sucked into this world. You start using more and more religious language, and more and more of this perspective really gets to you.
It's like Dune, where [Lady Jessica] purposely constructs a myth around Paul Atreides so that he becomes powerful, and they have this idea that this is the way to control people: to create a religion, you create a mythology around it. Not only do the people who hear it for the first time genuinely believe it, because they don't realize it was a construct, but Paul Atreides himself starts to believe it more and more, and it becomes a self-fulfilling prophecy. Honestly, when I was talking with people for the book, I was like, this is Dune.
Something I've been wondering lately is, what am I not seeing? What are they seeing that is making them believe this so fervently?
I think what’s happening here is twofold. First, we need to remember that when designing these systems, AI companies prioritize their own problems. They do this both implicitly—in the way that Silicon Valley has always done, creating apps for first-world problems like laundry and food delivery, because that’s what they know—and explicitly.
My book talks about how Altman has long pushed OpenAI to focus on AI models that can excel at code generation because he thinks they will ultimately help the company entrench its competitive advantage. As a result, these models are designed to best serve the people who develop them. And the farther away your life is from theirs in Silicon Valley, the more this technology begins to break down for you.
The second thing that’s happening is more meta. Code generation has become the main use case in which AI models are more consistently delivering productivity gains for workers, both for the reasons mentioned above and because code is particularly well suited to the strengths of AI models. Code is computable.
Those of us who don’t code or don’t exist in the Silicon Valley worldview see the leaps in code-generation capabilities as leaps in just one use case. But in the AI world, there is a deeply entrenched worldview that everything about the world is ultimately, with enough data, computable. So, to people who exist in that mind frame, the leaps in code generation represent something far more than just code generation. It’s emblematic of AI one day being able to master everything.
How did your decision to frame OpenAI as a modern-day empire come about?
I originally did not plan to focus the book that much on OpenAI. I actually wanted to focus the book on this idea that the AI industry has become a modern-day empire. And this was based on work that I did at MIT Technology Review in 2020 and 2021 about AI colonialism.
It was exploring this idea, which was starting to crop up a lot in academia and among research circles, that there are lots of different patterns we are starting to see where this pursuit of extremely resource-intensive AI technologies is leading to a consolidation of resources, wealth, power, and knowledge. And in a way, it's no longer sufficient to call them companies.
To really understand the vastness and the scale of what's happening, you really have to start thinking about it more as an empire-like phenomenon. At the time, I did a series of stories looking at communities around the world, especially in the Global South, that were experiencing this AI revolution not as beneficiaries of the technology but as vulnerable populations being exploited by either its creation or its deployment.
And that's when ChatGPT came out… and all of a sudden we were recycling old narratives of 'AI is going to transform everything, and it's amazing for everyone.' So I thought, now is the time to reintroduce everything but in this new context.
Then I realized that OpenAI was actually the vehicle to tell this story, because they were the company that completely accelerated the absolutely colossal amount of resources going into this technology and the empire-esque nature of it all.

Your decision to weave the stories of content moderators and the environmental impact of data centers from the perspective of the Global South was so compelling. What was behind your decision to include that?
As I started covering AI more and more, I developed this really strong feeling that the story of AI and society cannot be understood exclusively from its centers of power. Yes, we need reporting to understand Silicon Valley and its worldview. But if we only ever stay within that worldview, we won't be able to fully understand the sheer extent of how AI then affects real people in the real world.
The world is not represented by Silicon Valley, and the global majority or the Global South are the true test cases for whether or not a technology is actually benefiting humanity, because the technology is usually not built with them in mind.
All technology revolutions leave some people behind. But the problem is that the people who are left behind are always the same, and the people who gain are always the same. So are we really getting progress from technology if we're just exacerbating inequality more and more, globally?
That's why I wanted to write the stories set in places far away from Silicon Valley. Most of the world lives that way: without access to basic resources, without a guarantee of being able to put healthy food on the table for their kids or of knowing where the next paycheck is going to come from. And so unless we explore how AI actually affects these people, we're never really going to understand what it's going to mean ultimately for all of us.
Another really interesting part of your book was the closing off of the research community [as AI labs stopped openly sharing details about their models] and how that’s something that we totally take for granted now. Why was that so important to include in the book?
I was really lucky in that I started covering AI before all the companies started closing themselves off and obfuscating technical details. For me, it was a dramatic shift to watch companies go from being incredibly open, publishing their data, their model weights, and analyses of how their models were performing, and giving independent auditors access to their models, to this state where all we get is just PR. So that was part of it, just saying: it wasn't actually like this before.
And it is yet another example of why empires are the way to think about this, because empires control knowledge production. They perpetuate their existence by continuously massaging the facts and massaging the science.
But also, if it wasn't like this before, I hope that gives people a greater sense of hope that this can change. This is not some inevitable state of affairs. And we really need more transparency in how these technologies are developed.
They're the most consequential technologies being developed today, and we literally can't say basic things about them. We can't say how much energy they use or how much carbon they produce; half the time, we can't even say where the data centers are being built. We can't say how much discrimination is in these tools, and we're giving them to children in classrooms and to doctors' offices to start supporting medical decisions.
The levels of opacity are so glaring, and it's shocking that we've kind of been lulled into this sense of normalcy. I hope that it's a bit of a wake-up call that we shouldn't accept this.
When you posted about the book, I knew that it was going to be a big thing. Then Sam Altman posted about the book. Have you seen a rise in interest, and does Sam Altman know about the Streisand Effect?

Obviously, he's a very strategic and tactical person and generally very aware of how things that he does will land with people, especially with the media. So, honestly, my first reaction was just… why? Is there some kind of 4D chess game? I just don't get it. But, yeah, we did see a rise in interest from a lot of journalists being like, 'Oh, now I really need to see what's in the book.'
When I started the book, OpenAI said that they would cooperate, and we spent almost six months discussing their participation. Then, at the six-month mark, they suddenly reversed their position. I was really disheartened by that, because I felt like I now had a much harder task: trying to tell this story and accurately reflect their perspective without them really participating in the book.
But I think it ultimately made the book a lot stronger, because I ended up being even more aggressive in my reporting… So in hindsight, I think it was a blessing.
Why do you think OpenAI reversed its decision to talk to you, but talked to other authors writing books about OpenAI? Do you have any theories?
When I approached them about the book, I was very upfront and said, 'You know all the things that I've written. I'm going to come with a critical perspective, but obviously I want to be fair, and I want to give you every opportunity to challenge some of the criticisms that I might bring from my reporting.' Initially, they were open to that, which is a credit to them.
I think what happened was that it just kept dragging out, and I started wondering how sincere they actually were, or whether they were offering this as a carrot to shape how many people I reached out to myself, because I was hesitant to contact people within the company while I was still negotiating with the communications team for interviews. But at some point, I realized I was running out of time and needed to go through with my reporting plan, so I just started reaching out to people within the company.
My theory is that it frustrated them that I emailed people directly, and because there were other book opportunities, they decided that they didn't need to participate in every book; they could just participate in the ones they wanted to. So it became kind of a done deal that they would no longer participate in mine and would go with the others.
The book ends at the beginning of January 2025, and so much has happened since then. If you were going to keep writing this book, what would you focus on?
For sure the Stargate Project and DeepSeek. The Stargate Project is just such a perfect extension of what I talk about in the book, which is that the level of capital and resources, and now the level of power and water infrastructure, being influenced by these companies is hard to even grasp.
Once again, we are getting to a new age of empire. They're literally land-grabbing and resource-grabbing. The Stargate Project was originally announced as a $500 billion spend over four years; the Apollo Program cost about $380 billion over 13 years in 2025 dollars. If it actually goes through, it would be the largest amount of capital spent in history to build infrastructure for a technology whose track record is still middling.
We haven't actually seen that much economic progress; it's not broad-based at all. In fact, you could argue that the current uncertainty that everyone feels about the economy and jobs disappearing is actually the real scorecard of what the quest for AGI has brought us.
And then DeepSeek… the fundamental lesson of DeepSeek was that none of this is actually necessary. I know that there's a lot of controversy around whether they distilled OpenAI's models or actually spent the amount that they said they did. But OpenAI could have distilled their own models. Why didn't they distill their models? None of this was necessary. They do not need to build $500 billion of infrastructure. They could have spent more time innovating on more efficient ways of reaching the same level of performance in their technologies. But they didn't, because they haven't had the pressure to do so with the sheer amount of resources that they can get access to through Altman's once-in-a-generation fundraising capabilities.
What do you hope readers will take away from this book?
The story of the empire of AI is so deeply connected to what's happening right now with the Trump Administration and DOGE and the complete collapse of democratic norms in the U.S., because this is what happens when you allow certain individuals to consolidate so much wealth, so much power, that they can basically just manipulate democracy.
AI is just the latest vehicle by which that is happening, and democracy is not inevitable. If we want to preserve our democracy, we need to fight like hell to protect it, and to recognize that the way Silicon Valley is currently weaponizing AI as a narrative for the future is actually cloaking a massive acceleration of the erosion and reversal of democracy.
Empire of AI will be published by Penguin Random House on Tuesday, May 20. You can purchase the book through Penguin, Amazon, Bookshop.org, and other retailers.
Editor’s Note: This conversation has been edited for clarity and grammar.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.