Human Values and AI Adoption in the Research Enterprise
Insights from the Second NSF GRANTED Workshop at Chapman University
Research administrators play an essential role in the research enterprise. Their work managing expenditures and monitoring compliance with rules related to the ethical conduct of research ensures that public money is spent and that research data is collected in ways that protect privacy, minimize risks to participants, and meet the complex legal and contractual obligations required by funders. At large research universities, these and other tasks associated with research administration are undertaken by trained specialists; at emerging research institutions (ERIs) they often fall on a small number of generalists working in sponsored projects offices. Among the challenges that ERIs face is building and maintaining the staffing capacity to expand their research portfolios.
Ithaka S+R, Chapman University, and Montclair State University, with funding from the National Science Foundation, have organized two in-person workshops to explore how the adoption of AI into workflows and processes might create efficiencies to support ERIs through personnel bottlenecks, and to understand the risk/reward calculus of introducing AI into research administration. The first workshop in the series included representatives from ERIs across the state of New Jersey. The second workshop brought together librarians, IT directors, and research officers from public and private institutions in Southern California.

Three workshop participants in discussion at a round table.
Working in small groups, participants identified promising ways that administrative staff are already using AI to improve workflows, often acting on their own initiative rather than in response to specific institutional policies or mandates. Many of these use cases, such as drafting email correspondence, inserting routine redlines into draft contracts, and automating low-stakes data entry tasks, were intended primarily to speed up routine and monotonous tasks. However, it was striking how many participants described use cases focused on helping staff identify areas that most needed human attention. These included using AI to review budgets for expenditures that were at high risk of violating university or funder policies or to flag proposals requiring careful attention to data security or privacy protections. While using AI may help save time, in these cases AI works as a tool for focusing rather than replacing human judgment and expertise.
Throughout the day, participants identified the human and social challenges of AI implementation. Several participants described the importance of developing a common vocabulary to facilitate cross-unit interoperability of AI tools, build trust between research administrators and researchers, and serve as a foundation for microcredentials or other certifications related to AI usage and expertise. We heard repeated concerns that the broad social impacts of AI would exacerbate existing social hierarchies and biases, and that research administrators—especially those involved in compliance with research ethics—had clear responsibilities to proactively mitigate those risks.
Managing the complex human dynamics of institutional AI adoption emerged as one of the key themes of the workshop, arguably a greater challenge than any of the many and significant technical and logistical challenges. Many participants pointed out that AI adoption is occurring at a time when universities are cutting budgets and when federal indirect rates may be cut dramatically, raising legitimate fears among staff about their job security. Others noted the irony that, despite these fears, many of the specific tasks that institutions are interested in automating are those that research staff themselves have identified as places where AI would be a valuable time saver. In the plenary session that opened the workshop, Ed Clark, CIO of the California State University System, described shared governance as the most important “infrastructure” component of AI adoption, because it provides a basis for building consensus about the purpose and goals of integrating AI into research administration and, more broadly, into the university.

Panel discussion featuring speakers Essraa Nawar, Sylvia Bradshaw, Ed Clark, and Brett Pollack.
What’s next?
In early 2026, Ithaka S+R, Montclair State University, and Chapman University will publish complete findings and recommendations from the Advancing AI Governance and Implementation at Emerging Research Institutions project. Participants from both workshops are considering opportunities for further collaboration on AI adoption by research offices and support services. Ithaka S+R will continue exploring critical issues in research administration through a new project, funded by the Henry Luce Foundation, focused on university compliance with new policies and regulations relating to the international exchange of scholars and scholarly knowledge. For more information about our work on research administration, contact Ruby MacDougall (ruby.macdougall@ithaka.org).

This material is based upon work supported by the National Science Foundation under Grant No. 2437518. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.