AI in Science: Useful or Overhyped? The Debate Continues

The Hype vs. Reality of AI in Research

Artificial intelligence is a hot topic in the scientific community, but is it living up to the hype? A recent conference in London, Metascience 2025, brought together experts to discuss the impact of AI on research. While AI has produced breakthrough tools like AlphaFold, which can predict protein structures, the overall sentiment was that AI is unlikely to fundamentally revolutionize the job of a scientist.

Instead, it’s more likely to serve as a useful tool that streamlines certain tasks. Matt Clancy from Open Philanthropy noted that science has a long history of integrating new tools without undergoing a complete transformation. This perspective suggests that AI, while powerful, is another step in the evolution of research, not a complete paradigm shift.

A New Wave of Tools and Their Limitations

The latest wave of AI breakthroughs, including large language models (LLMs) and specialized software like AlphaFold, has raised hopes that AI could automate tasks like generating hypotheses, replicating findings, or summarizing existing literature. The European Union is even launching a dedicated AI-in-science strategy, with its research commissioner, Ekaterina Zaharieva, noting that AI is “transforming research.” However, these advances have been met with a healthy dose of skepticism.

Doubts about generative AI’s usefulness in the business world cast a long shadow: a recent MIT survey found that “95% of organizations are getting zero return” from the technology. This skepticism is further fueled by what critics see as “absurdly overinflated” claims from some AI company leaders, such as Google DeepMind’s chief executive suggesting AI could cure all diseases within a decade.

The Long History of AI in Scientific Discovery

Despite the current excitement, AI is not a new tool in science. Iulia Georgescu from the UK’s Institute of Physics told the conference that the history of AI in science stretches back more than half a century, tracing its roots to a 1956 program for proving mathematical theorems. Machine learning, a core component of modern AI, was used extensively in physics as far back as the 1990s for tasks like pattern recognition.

Georgescu also pointed out that machine learning was instrumental in analyzing the data that led to the discovery of the Higgs boson in 2012. This historical context suggests that scientists are already familiar with using machine learning to handle complex data, making the current debate less about whether to use AI and more about how to effectively integrate a new generation of more powerful tools.

Accountability and Trust: A Major Hurdle

While AI has the potential to automate routine tasks, such as summarizing existing literature, academics remain cautious. The issue of accountability is a major hurdle. As Clancy noted, a scientist must be able to “really trust the results that come out of this machine” because they have to put their name on the research paper. The jury is still out on the reliability of current AI tools for tasks like literature reviews.

A recent test by the Columbia Journalism Review found that five different tools produced “underwhelming” and, in some cases, “alarming” results, retrieving entirely different sets of papers and disagreeing on points of scientific consensus. This unreliability creates a significant barrier to widespread adoption in a field where trust and accuracy are paramount.

AlphaFold: A Beacon of AI’s Potential

AlphaFold stands as the most compelling example of AI’s power in science. Created by Google DeepMind, it earned its chief executive, Demis Hassabis, a share of the Nobel Prize in chemistry. Before AlphaFold’s release, researchers had manually deciphered around 200,000 protein structures, a “very time-consuming process,” according to Anna Koivuniemi, head of DeepMind’s impact accelerator.

However, AlphaFold has since cracked the structures of 200 million proteins, and more than three million researchers have used these predictions in their work. Koivuniemi emphasized that this success was built on more than 50 years of work by structural biologists, who provided the high-quality data needed to train the AI model, highlighting that AI is a powerful tool, not a replacement for human discovery.

Wary of Offloading Routine Work

Some experts are warning researchers to be cautious about offloading what seems like “routine” scientific work to AI. Sabina Leonelli, a historian and philosopher of science, told the conference that what is considered “routine” can often be the source of a new discovery. She cited the example of Rosalind Franklin, whose work on seemingly “boring crystallography problems” proved pivotal to uncovering the structure of DNA.

This perspective suggests that scientists should be careful not to outsource the very tasks that could lead to unexpected and revolutionary breakthroughs. Furthermore, Leonelli pointed out the tendency to underestimate the significant costs and demands of validating and maintaining complex AI models, adding another layer of caution to the AI-in-science debate.

The Threat of Scientific Spam and Other Concerns

One of the most significant fears surrounding the rise of generative AI is the potential for it to create a wave of scientific spam. Researchers are already overwhelmed by the exponentially growing number of academic papers, and there are concerns that AI will be used to generate ballooning numbers of fraudulent articles or to help academics write more low-quality papers. As one researcher noted, this could “further overwhelm researchers,” burying genuinely new findings.

In India, a survey found that only a small minority of scientists are using LLMs, largely due to concerns about cost and a lack of conviction that AI is truly “driving the science.” The one exception is using AI to polish writing, which could benefit academics whose first language isn’t English, though journal policies sometimes prohibit this. Taken together, these concerns point to a potential “existential crisis” for research, in which the sheer volume of AI-generated content devalues the entire scientific endeavor.
