The Value of Skepticism in the Age of AI
In the second session of the Ithaka S+R cohort program on Teaching and Learning with AI, participants got the chance to learn more about the technical aspects of how large language models (LLMs) work, including how vast amounts of training data enable them to function as prediction machines that select the next most likely word following the previous sequence of words.[1] This ability to predict a statistically likely and reasonable-sounding next word is distinctly different from an AI system actually “knowing” something in the traditional, cognitive sense. The value of understanding this distinction for instructors and students was a key point of discussion among the participants. Although generative AI literacy frameworks vary considerably, all suggest that users bring a critical eye to what LLMs produce. Instructors must internalize this critical lens on AI outputs in order to convey that same skepticism to their students.
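To make the "prediction machine" idea concrete, the toy sketch below builds a word-level bigram model: it counts which word follows each word in a tiny corpus and then "predicts" the most frequent follower. This is a drastic simplification, as real LLMs use neural networks over subword tokens trained on vastly more text, but it illustrates the core notion of choosing a statistically likely next word. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny invented corpus; real models train on billions of tokens.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the cat chased the dog ."
).split()

# Count, for each word, how often each other word follows it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat", the most common follower here
```

Note that the model "knows" nothing about cats or mats; it only reproduces statistical regularities in its training text, which is precisely why confident-sounding output is not the same as knowledge.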
One participant noted that the current caution for students to remain critical of the veracity of responses from LLMs is not so different from the stance instructors took with the rise of Wikipedia. Students then had to learn that not everything written online was truly authoritative. Now they must learn that regardless of how confident an LLM may sound, its responses are not necessarily accurate. The ubiquity and accessibility of generative AI technology require that instructors help students engage with LLMs critically through curiosity, critique, and credible verification rather than try to avoid them altogether.
With this healthy skepticism in mind, it is key to understand when and how generative AI can enrich teaching and learning. There is a growing body of practitioner literature on strategies, recommended practices, and classroom policies that aim to leverage AI for student learning. Many of these include aspects of reflection, critique, and verification of facts, all of which are foundational skills for learning regardless of the technologies involved.
An important guidepost for generative AI use in the classroom is to think of it as a fallible collaborator, one that perhaps thinks it knows more than it does and is incapable of saying, “I don’t know.” Instructors should teach students to interrogate a source, to evaluate what is presented, and to draw their own conclusions. This approach, one teachers have long practiced, can engender the habits of mind and critical thinking skills that so many fear are now at risk.
[1] This description is a simplification of the technical processes behind tokenization and prediction that is sufficient for most users’ understanding.