Adopting artificial intelligence tools to analyze data and model outcomes significantly improves the career prospects of young scientists, boosting their chances of rising to positions of influence in their fields, according to a new study. But that boon for individual researchers appears to be coming at a broader cost to science.
Researchers at the University of Chicago and Tsinghua University, in China, analyzed nearly 68 million research papers across six scientific disciplines (not including computer science) and found that papers incorporating AI techniques were cited more often but also focused on a narrower set of topics and were more repetitive. In essence, the more scientists use AI, the more they focus on the same set of problems that can be answered with large, existing datasets, and the less they explore foundational questions that can lead to entirely new fields of study.
“I was surprised at the dramatic scale of the finding. [AI] dramatically increases people’s capacity to stay and advance within the system,” said James Evans, a co-author of the preprint and director of the Knowledge Lab at the University of Chicago. “This suggests there’s a massive incentive for individuals to uptake these kinds of systems within their work … it’s between thriving and not surviving in a competitive research field.”
As that incentive leads to a growing dependence on machine learning, neural networks, and transformer models, “the whole system of science that’s done by AI is shrinking,” he said.
The study examined papers published from 1980 to 2024 in the fields of biology, medicine, chemistry, physics, materials science, and geology. It found that scientists who used AI tools to conduct their research published 67 percent more papers annually, on average, and that their papers were cited more than three times as often as those of scientists who didn’t use AI.
Evans and his co-authors then examined the career trajectories of 3.5 million scientists and categorized them as either junior scientists (those who had not yet led a research team) or established scientists (those who had). They found that junior scientists who used AI were 32 percent more likely to go on to lead a research team than their counterparts who didn’t use AI, and they reached that stage of their careers much faster; the non-AI users were more likely to leave academia altogether.
Next, the authors used AI models to categorize the topics covered by AI-assisted versus non-AI research and to examine how the different types of papers cited each other and whether they spurred new strands of inquiry.
They found that, across all six scientific fields, researchers using AI “shrunk” the topical ground they covered by 5 percent compared with researchers who didn’t use AI.
The realm of AI-enabled research was also dominated by “superstar” papers. Approximately 80 percent of all citations within that category went to the top 20 percent of most-cited papers, and 95 percent went to the top 50 percent, meaning that about half of AI-assisted research was rarely, if ever, cited again.
Similarly, Evans and his co-authors, Fengli Xu, Yong Li, and Qianyue Hao, found that AI-assisted research spurred 24 percent less follow-on engagement than non-AI research, measured as subsequent papers that cited both the original paper and one another.
“These assembled findings suggest that AI in science has become more concentrated around specific hot topics that become ‘lonely crowds’ with reduced interaction among papers,” they wrote. “This concentration leads to more overlapping ideas and redundant innovations linked to a contraction in knowledge extent and diversity across science.”
Evans, whose specialty is studying how people learn and conduct research, said that the contracting effect on scientific research is similar to what happened as the internet emerged and academic journals went online. In 2008, he published a paper in the journal Science showing that as publishers went digital, the types of studies researchers cited changed: they cited fewer papers, drawn from a smaller group of journals, and favored newer research.
As an avid user of AI techniques himself, Evans said he isn’t anti-technology; the internet and AI both have obvious benefits for science. But the findings of his latest study suggest that government funding bodies, corporations, and academic institutions need to tinker with the incentive systems for scientists to encourage work that is less focused on using specific tools and more focused on breaking new ground for future generations of researchers to build upon.
“There’s a poverty of imagination,” he said. “We need to slow down that complete replacement of resources to AI-related research to preserve some of these alternative, existing approaches.”