How is Generative AI Being Used in Biomedical Research?
A New Report Shares Findings from a Survey of Academic Researchers
When ChatGPT was released in November 2022, it prompted an ongoing national conversation about the role of generative AI across all sectors of intellectual labor. Within the academy, that conversation has focused primarily on generative AI’s impact on instruction, with relatively little attention given to its role in scholarly research. Biomedical research in particular has produced some of the most promising use cases for generative AI, while also being a field where the technology could cause significant harm. Institutions and policymakers need to understand how biomedical researchers are using generative AI in order to provide guidance that maximizes potential benefits and minimizes harms.
Today, Ithaka S+R is excited to publish findings from our international survey of academic researchers, with a particular focus on those in the biomedical sciences. The survey, conducted with the support of the Chan Zuckerberg Initiative, provides a snapshot of researchers’ opinions about generative AI, how often they use it, which research tasks they use it for, and the factors that serve as barriers to its use. Universities, professional organizations, and publishers will find the report valuable as they develop best practices around researchers’ use of generative AI. Funders and other stakeholders will gain insight into the adoption of this new technology to inform their decisions about how best to support cutting-edge research.
The survey’s key finding is that generative AI adoption is so far very mixed. While many biomedical researchers have experimented with generative AI, that use is limited in scope and frequency. The main barrier to adoption is serious concern about the accuracy of generative AI’s outputs; ethical concerns also rank highly. Given the current quality of generative AI outputs and the lack of compelling best practices and models for its use, adoption of generative AI may plateau.
Other key findings include:
- Biomedical researchers have a moderate degree of interest in using generative AI in their research. Over 60 percent have experimented with doing so, but most use it sparingly or no longer use it at all.
- There are many barriers to greater adoption of generative AI, but the most significant are concerns about the accuracy of generative AI and a lack of clarity about best practices for using AI effectively and ethically.
- Over half of biomedical researchers expressed strong interest in biomedicine-specific generative AI products, but only 14 percent had used existing biomedicine-specific LLMs or tools.
- Use of generative AI in research is concentrated on scholarly communication tasks such as editing, writing, and accessing and interpreting scholarly literature.
- Many researchers would appreciate more support from funders, publishers, and universities in developing their skills in using generative AI in their research.
What’s Next for Ithaka S+R?
The survey findings published today are an important addition to our ongoing work on the implications of generative AI. Later this year, we will publish our first report on the generative AI cohort project, which will include close analysis of the largest qualitative dataset to date on the use of generative AI in higher education. Our interviews with partners from across the cohort explore the impact of generative AI on teaching, learning, and research.
We are also currently designing a new cohort project on AI literacy that will launch in early 2025. The project will consider how existing information literacy frameworks can be adapted or revised to reflect AI-driven transformations in the information economy. Our researcher survey project—the companion to our instructor survey, and the continuation of our longstanding faculty survey—will also launch in the first half of 2025 and will include questions about researchers’ use of generative AI.
For more information about these projects, contact Dylan Ruediger (dylan.ruediger@ithaka.org).