Growing Divide: AI Experts and the Public Hold Sharply Diverging Views on Artificial Intelligence, New Stanford Report Finds
The gap between how AI industry experts and ordinary people view artificial intelligence is widening at an accelerating pace, according to Stanford University’s annual report on the global AI industry, which was released this Monday. The report specifically highlights a steady rise in public anxiety around AI, with U.S. residents most vocal about concerns over the technology’s impact on core societal systems including employment, healthcare, and the broader national economy.
Stanford’s findings align with recent polling that already tracks rising negative sentiment toward AI: a recent Gallup survey notes that Gen Z is leading the shift toward greater criticism. The poll found that younger adults are growing less optimistic and more frustrated about AI’s spread, even though roughly half of Gen Z adults use AI tools on a daily or weekly basis.
For many working in the technology sector, this widespread public backlash against AI has come as a surprise. AI leaders have long centered their risk planning around the long-term threat of Artificial General Intelligence (AGI) — a hypothetical form of AI that could complete any intellectual task a human can and operate with independent reasoning. But for everyday people, the most pressing concerns are far more immediate: how AI will cut into their earnings, and whether household energy bills will rise as massive, power-hungry AI data centers are built across communities.
This stark divide played out publicly in online reactions to the recent attack on OpenAI CEO Sam Altman’s personal home. AI industry insiders expressed shock on the social platform X after seeing a wave of Instagram comments celebrating the attack on Altman. Many of these comments carried the same tone as posts that circulated online after the 2024 shooting of the UnitedHealth Group CEO and the recent arson of a Kimberly-Clark warehouse by an employee protesting unlivable wages. Some commenters went so far as to argue that more drastic action is needed, on the scale of a mass popular movement to rein in AI.
Stanford’s report helps unpack the roots of this widespread public negativity, by compiling and synthesizing public opinion data on AI from dozens of independent research sources.
For instance, the report cites a Pew Research Center study published last month, which found that only 10% of U.S. adults feel more excited than concerned about AI’s growing integration into daily life. By contrast, 56% of AI experts surveyed said they expect AI will deliver a net positive impact on the U.S. over the next 20 years.
The gap between expert and public opinion grows even larger when examining specific areas of AI’s societal impact. Per the report, 84% of AI experts believe AI will have an overall positive effect on healthcare over the next two decades, but only 44% of the U.S. general public shares that belief. When it comes to AI’s impact on work, a 73% majority of experts hold positive views of AI’s effect on how people do their jobs, compared to just 23% of the general public. Sixty-nine percent of experts also expect AI will benefit the overall U.S. economy. Amid steady news coverage of AI-driven layoffs and widespread workplace disruption, it is unsurprising that only 21% of the public agrees with that assessment.
Additional Pew data cited in the report confirms the public is far more pessimistic than experts about AI’s impact on the job market: nearly two-thirds of U.S. adults (64%) believe AI will lead to fewer available jobs for American workers over the next 20 years. On the topic of government regulation, the U.S. also recorded the lowest level of public trust in responsible AI regulation of any nation measured: just 31% of U.S. respondents trust their government to regulate AI properly, per Ipsos data included in the Stanford report. Singapore ranked highest on this metric, with 81% of respondents expressing trust in their government’s AI oversight.
A separate state-by-state analysis of regulation attitudes found that across the U.S., 41% of respondents believe federal AI regulation will not go far enough to protect the public, while only 27% think federal rules will go too far.
Even amid widespread public fear and criticism, AI recorded one small uptick in positive global sentiment: the share of people worldwide who say AI products and services offer more benefits than harms rose slightly from 55% in 2024 to 59% in 2025. But at the same time, the share of respondents who say AI makes them feel “nervous” also grew over the same period, climbing from 50% to 52%, according to the report’s compiled data.