What are AI researchers worried about?

Based on a survey of over 4,000 AI researchers, this paper reveals that, contrary to public and media narratives dominated by existential threats, researchers prioritize immediate sociotechnical risks. Their risk assessments also converge significantly with public opinion, suggesting a need for collaborative dialogue focused on mitigating present-day harms rather than speculating about future catastrophes.

Cian O'Donovan, Sarp Gurakan, Ananya Karanam, Xiaomeng Wu, Jack Stilgoe

Published Mon, 09 Ma

Imagine the world of Artificial Intelligence (AI) as a massive, bustling construction site. Right now, the public debate is like a noisy crowd gathered behind a fence, listening to a few loud megaphones held by a handful of wealthy developers and famous scientists. These megaphones are shouting two extreme things: either "This machine will save us all!" or "This machine will destroy humanity tomorrow!"

But the people actually doing the work—the thousands of engineers and scientists laying the bricks and mixing the cement—have a very different, much quieter conversation going on inside the site.

This paper is like a team of sociologists sneaking onto that construction site to ask the workers, "Hey, what are you actually worried about?" They surveyed over 4,000 AI researchers to see if their fears match the scary headlines or the utopian dreams we see in the news.

Here is what they found, translated into everyday language:

1. The "Sci-Fi" vs. "Real Life" Gap

The Headline: The media loves to talk about "Existential Risk"—the idea that AI will become a super-intelligent robot that wakes up, decides humans are useless, and ends the world (think The Terminator or SkyNet).

The Reality: When asked, "What is your biggest worry?", only 3% of AI researchers said they were scared of the robot apocalypse.

  • The Analogy: It's like asking a group of professional chefs what they are most worried about regarding a new, high-tech oven. The news says, "They're terrified the oven will grow legs and eat the neighborhood!" But the chefs say, "No, we're mostly worried that someone will burn the toast, the oven will leak gas, or the electricity bill will be too high."
  • The Takeaway: Researchers see AI as a "normal" technology with real-world glitches, not a magical monster. They are far more concerned about misinformation, cybercrime, and people using the tools for bad purposes than about the AI gaining sentience.

2. The Optimism Gap (But Shared Fear)

The Headline: The public is often skeptical, thinking AI is mostly a scam or a danger.

The Reality: The researchers are actually quite optimistic! 87% of them believe the benefits outweigh the risks. They see AI as a super-tool that could revolutionize education and healthcare.

  • The Analogy: Imagine a group of car mechanics (the researchers) and a group of commuters (the public). The commuters are terrified the new self-driving cars will crash and kill everyone. The mechanics, however, are excited because they know how to fix the brakes and believe the cars will save lives.
  • The Twist: Even though the researchers are more hopeful about the good stuff, they agree with the public on the bad stuff. Both groups are equally worried about fake news, data privacy, and cyberattacks. They just disagree on how likely the "robot apocalypse" is.

3. The "Black Box" Mystery

The Headline: Everyone is worried that AI is a "black box"—a machine that makes decisions we can't understand.

The Reality: While the public is very worried about not understanding how AI thinks, researchers worry about this considerably less.

  • The Analogy: The public is like a passenger in a car who is terrified because they can't see the engine. The researchers are the mechanics; they know the engine is complex and sometimes hard to explain, but they aren't losing sleep over it because they are busy fixing the other problems (like the brakes or the fuel efficiency).

4. Who is in Charge?

The Headline: Big Tech companies are racing to build the biggest, fastest models.

The Reality: The researchers are unhappy about this. They feel that a few giant corporations are holding the steering wheel, and they want more open-source (publicly shared) models. They also want AI directed toward public goods like healthcare and education, not toward weapons development or pure profit.

  • The Analogy: It's like a group of scientists who invented a new type of fertilizer. They want to share the recipe with all the farmers to help the world grow food. But a few big corporations have locked the recipe in a vault, only selling it to the highest bidder, and the scientists are frustrated that the public isn't getting the benefits.

5. The "Deficit" Myth

The Headline: Some people think the public is scared of AI because they are ignorant and don't understand science.

The Reality: The paper argues this is wrong. The researchers' worries are actually very similar to the public's worries.

  • The Analogy: It's not that the public is "dumb" and doesn't get it; it's that the public is reacting to the same real problems (like job loss or privacy) that the researchers see every day. The problem is that the loud voices in the media are shouting about "super-intelligence" while ignoring the real, boring, but dangerous problems like bias in hiring algorithms or deepfakes.

The Big Conclusion

The paper ends with a warning: The conversation about AI is being hijacked by a few powerful voices. This is dangerous because it pushes the actual experts (the researchers) and the public apart.

  • The Metaphor: Imagine a town hall meeting about a new bridge. Instead of listening to the engineers who built it and the citizens who will walk on it, the meeting is dominated by a few actors in costumes shouting about whether the bridge will turn into a dragon.
  • The Solution: We need to stop listening to the "dragon" stories and start listening to the engineers. We need to talk about the real risks (like privacy and fairness) that both the scientists and the public care about, rather than speculating about science fiction scenarios.

In short: AI researchers are not waiting for the robots to take over. They are busy worrying about the same things you are: fake news, privacy, and making sure the technology helps everyone, not just a few rich companies. They want a real conversation, not a sci-fi movie.