OpenAI’s Secrets are Revealed in Empire of AI
On our 2025 Best Nonfiction of the Year list, Karen Hao’s investigation of artificial intelligence reveals how the AI future is still in our hands
Technology reporter Karen Hao started reporting on artificial intelligence in 2018, before ChatGPT was introduced, and is one of the few journalists to gain access to the inner world of the chatbot’s creator, OpenAI. In her book Empire of AI, Hao outlines the rise of the controversial company.
In her research, Hao spoke to OpenAI leaders, scientists and entry-level workers around the globe who are shaping the development of AI. She explores its potential for scientific discovery and its impacts on the environment, as well as the divisive quest to create a machine that can rival human smarts through artificial general intelligence (AGI).
Scientific American spoke with Hao about her deep reporting on AI, Sam Altman’s potential place in AI’s future and the ways the technology might continue to change the world.
[An edited transcript of the interview follows.]
How realistic is the goal of artificial general intelligence (AGI)?
There is no scientific consensus around what intelligence is, so AI and AGI are inherently unmoored concepts. This is helpful for deflating the hype of Silicon Valley when they say AGI is around the corner, and it’s also helpful in recognizing that the lack of predetermination around what AI is and what it should do leaves plenty of room for everyone.
You argue that we should be thinking about AI in terms of empires and colonialism. Can you explain why?
I call companies like OpenAI empires both because of the sheer magnitude at which they are operating and the controlling influence they’ve developed—also the tactics for how they’ve accumulated an enormous amount of economic and political power. They amass that power through the dispossession of the majority of the rest of the world.
There’s also this huge ideological component to the current AI industry. This quest for an artificial general intelligence is a faith-based idea. It's not a scientific idea. It is this quasi-religious notion that if we continue down a particular path of AI development, somehow a kind of AI god is going to emerge that will solve all of humanity's problems. Colonialism is the fusion of capitalism and ideology, so there’s just a multitude of parallels between the empires of old and the empires of AI.
There’s also a parallel in how they both cause environmental destruction. Which environmental impacts of AI are most concerning?
There are just so many intersecting crises that the AI industry’s path of development is exacerbating. One, of course, is the energy crisis. Sam Altman announced he wants to see 250 gigawatts of data-center capacity built by 2033 just for his company. New York City [draws] on average 5.5 gigawatts [of power]. Altman has estimated that this would cost around $10 trillion—where is he going to get that money? Who knows.
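[To put those figures in perspective, here is a quick back-of-the-envelope check in Python. The 250-gigawatt, 5.5-gigawatt and $10-trillion numbers come straight from the interview; the arithmetic itself is an editorial illustration, not Hao’s or Altman’s.]

```python
# Back-of-the-envelope scale check using the figures quoted above.
# All three inputs are taken from the interview; nothing else is assumed.
planned_capacity_gw = 250      # Altman's stated 2033 data-center target
nyc_average_demand_gw = 5.5    # New York City's average power draw
estimated_cost_usd = 10e12     # the rough $10-trillion figure

ratio = planned_capacity_gw / nyc_average_demand_gw
cost_per_gw = estimated_cost_usd / planned_capacity_gw

print(f"Planned capacity is about {ratio:.0f}x NYC's average demand")
print(f"Implied cost is about ${cost_per_gw / 1e9:.0f} billion per gigawatt")
```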
But if that were to come to pass, the primary energy sources would be fossil fuels. Business Insider had an investigation earlier this year that found that utilities are “torpedo[ing]” their renewable-energy goals in order to service the data-center demand. So we are seeing natural gas plants and coal plants having their lives extended. That’s not just pumping emissions into the atmosphere; it’s also pumping air pollution into communities.
So the question is: How long are we going to deal with the actual harms and hold out for the speculative possibility that maybe, at the end of the road, it’s all going to be fine? There was a survey earlier this year that found that [roughly] 75 percent of long-standing AI researchers who are not in the pocket of industry do not think we are on the path to an artificial general intelligence. We should not be using a tiny possibility on the far-off horizon that is not even scientifically backed to justify an extraordinary and irreversible set of damages that are occurring right now.
Do you think Sam Altman has lied about OpenAI’s abilities, or has he just fallen for his own marketing?
It’s a great question. The thing that’s complex about OpenAI, that surprised me the most when I was reporting, is that there are quasi-religious movements that have developed around ideas like “AGI could solve all of humanity’s problems” or “AGI could kill everyone.” It is really hard to figure out whether Altman himself is a believer or whether he has just found it to be politically savvy to leverage these beliefs.
You did a lot of reporting on the workers helping to make this AI revolution happen. What did you find?
I traveled to Kenya to meet with workers that OpenAI had contracted, as well as workers being contracted by the rest of the AI industry. What OpenAI wanted them to do was to help build a content moderation filter for the company’s GPT models. At the time they were trying to expand their commercialization efforts, and they realized that if you put text-generation models that can generate anything into the hands of millions of people, you’re going to run into a problem because it could end up spewing racist, toxic hate speech at users, and it would become a huge PR crisis.
For the workers, that meant they had to wade through some of the worst content on the Internet, as well as content where OpenAI was prompting its own AI models to imagine the worst content on the Internet to provide a more diverse and comprehensive set of examples to these workers. These workers suffered the same kinds of psychological traumas that content moderators of the social media era suffered.
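[For readers curious how those human labels turn into a working filter, the following is a minimal, generic sketch of a text-safety classifier trained on labeled examples. It is illustrative only: the data is invented, and this is not OpenAI’s actual moderation system.]

```python
# Illustrative only: a generic text-safety classifier trained on human labels,
# standing in for the kind of moderation filter described above.
# The examples and labels are invented; this is not OpenAI's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each item is (text, label), where 1 = disallowed content and 0 = benign.
# In practice, contracted workers produce vast numbers of such labels.
labeled_examples = [
    ("a friendly product review", 0),
    ("a recipe for banana bread", 0),
    ("an example of hateful harassment", 1),
    ("an example of a violent threat", 1),
]

texts, labels = zip(*labeled_examples)
filter_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
filter_model.fit(texts, labels)

# The trained filter can then screen new model outputs before they reach users.
print(filter_model.predict(["another friendly review"]))
```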
I also spoke with the workers that were on a different part of the human labor supply chain in reinforcement learning from human feedback. This is a thing that many companies have adopted where tens of thousands of workers have to teach the model what is a good answer when a user chats with the chatbot.
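[Concretely, the labeling task in reinforcement learning from human feedback usually amounts to preference judgments: given one prompt and two candidate answers, a worker marks which answer is better. The sketch below shows the shape of that data with invented examples; the actual pipelines used by OpenAI or its contractors are not public in this detail.]

```python
# Illustrative sketch of the human-feedback data behind RLHF.
# Real pipelines differ; this shows only the shape of the labeling task.
from dataclasses import dataclass

@dataclass
class PreferenceExample:
    prompt: str    # what the user asked
    chosen: str    # the answer the human rater preferred
    rejected: str  # the answer the rater ranked lower

# A worker produces thousands of judgments of this form (data invented here).
examples = [
    PreferenceExample(
        prompt="Explain photosynthesis simply.",
        chosen="Plants use sunlight to turn water and carbon dioxide into sugar.",
        rejected="Photosynthesis is when plants eat dirt.",
    ),
]

# A reward model is then trained to score each `chosen` answer above its
# `rejected` counterpart, and that reward signal is used to fine-tune the chatbot.
for ex in examples:
    print(f"Prefer: {ex.chosen!r} over {ex.rejected!r}")
```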
One woman I spoke to, Winnie, worked for this platform called Remotasks, which is the backend for Scale AI, one of the primary contractors of reinforcement learning from human feedback. The content that she was working with was not necessarily traumatic in and of itself, but the conditions under which she was working were deeply exploitative: she never knew who she was working for, and she also never knew when the tasks would arrive. When I spoke to her, she had already been waiting months for a task to arrive, and when those tasks arrived, she would work 22 hours straight to try to earn as much money as possible to feed her kids.
This is the lifeblood of the AI industry, and yet these workers see absolutely none of the economic value that they’re generating for these companies.
Some people worry AI could surpass human intelligence and take over the world. Is this a risk you fear?
I don’t believe that AI will ultimately develop some kind of agency of its own, and I don’t think that it’s worth engaging in a project that is attempting to develop agentic systems that take agency away from people.
What I see as a much more hopeful vision of an AI future is returning to developing AI models and AI systems that support, rather than supplant, humans. And one of the things that I’m really bullish about is specialized AI models for solving particular challenges that we need to overcome as a society.
One of the examples that I often give is DeepMind’s AlphaFold, a specialized deep-learning tool that was trained on a relatively modest number of computer chips to accurately predict the folded structure of a protein from its amino acid sequence. [Its developers] won the Nobel Prize [in] Chemistry last year. These are the types of AI systems that I think we should be putting our energy, time and talent into building.
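[For readers unfamiliar with the task, AlphaFold takes a protein’s amino acid sequence as input and outputs predicted 3D coordinates for its structure. The sketch below is purely conceptual, with a hypothetical placeholder function rather than the real AlphaFold model or API.]

```python
# Conceptual sketch only: the input/output shape of a structure predictor
# like AlphaFold. `predict_structure` is a hypothetical stand-in, not the
# real AlphaFold code; actual predictions require the published model.
from typing import List, Tuple

Coordinate = Tuple[float, float, float]  # x, y, z position of one residue

def predict_structure(sequence: str) -> List[Coordinate]:
    """Map an amino acid sequence to one 3D coordinate per residue.
    Placeholder logic: returns dummy positions to show the data shape."""
    return [(0.0, 0.0, 1.5 * i) for i, _ in enumerate(sequence)]

# A short protein fragment written in one-letter amino acid codes.
fragment = "MKTAYIAKQR"
structure = predict_structure(fragment)
print(len(structure), "residues positioned")
```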
Are there other books on this subject that you read while writing yours, or have enjoyed recently, that you can recommend?
I’d recommend Rebecca Solnit’s Hope in the Dark, which I read after my book was published. It may not seem directly related, but it very much is. Solnit makes the case for human agency—she urges people to remember that we co-create the future through our individual and collective action. That is also the greatest message I want people to take away from my book. Empires of AI are not inevitable—and the alternative path forward is in our hands.
