Taking a Student-Centered Approach
A Spotlight on College Unbound’s Approach to Generative AI Policy
The flurry of announcements in recent months of new and ever-improving generative AI technologies is causing universities to reexamine every area of their operations. At this nascent phase of university-wide adoption, institutions are responding primarily by providing background information to their stakeholders. How can an institution move from general information sharing to developing and deploying the guidance, services, and tools that will enable staff and students to harness this technology's potential and navigate its pitfalls?
This fall, Ithaka S+R is convening a group of universities committed to making AI generative for their campus communities to collaborate on a two-year research project. A key activity of this new project will be pooling examples of policies created to guide students, instructors, and researchers on acceptable use of AI technologies, including through course-level implementation. We will periodically highlight examples of noteworthy initiatives and emerging best practices already happening on the ground.
As part of this initiative, we are sharing an interview with Lance Eaton and Loreley (Lora) Roy at College Unbound. Lance serves as director of digital pedagogy, and Lora is a student at College Unbound. Together they are developing College Unbound's student-centered approach to creating guidance on the acceptable use of generative AI. This approach reflects College Unbound's distinctive model as a not-for-profit accredited college focused on helping adults re-enter and stay in college, including through comprehensive wraparound services.
What policies does College Unbound have in place to advise students and instructors about the use of generative AI tools in the classroom, and how are they being developed?
Lance: In early January, College Unbound created a temporary policy for faculty and students. We wanted to make it normal and acceptable to use ChatGPT, thereby limiting the sense that students need to hide it. Having explored and used the tool ourselves, we found it powerful: useful for brainstorming, exploring, learning, and reflecting. We therefore did not want to ban it outright for students or instructors. Banning a technology like this can also create more problems than it solves, producing secondary and tertiary effects that most institutions aren't ready to deal with, such as relying on faulty AI-plagiarism detection tools to “catch” students, which would inevitably result in false positives.
In parallel with developing our temporary policy, we kicked off a process to create a long-term policy that reflects and includes student voices. As an institution that centers students' voices and believes in democratic ideals, it was important to talk with students, not about them. The student-centered approach was also necessary because we realized that students are going to have to navigate generative AI—not in a few years, but in a few months. ChatGPT reached over 100 million users in its first two months; a tool with that trajectory is going to become relevant to nearly every industry within the next one to three years. Turning away from it as a tool, resource, and challenge, rather than treating it as an opportunity to learn with and from our students, feels antithetical to the mission of College Unbound.
In Spring 2023, I facilitated a course, Digital Interventions: AI & Education, in which the students and I explored generative AI in the educational context and co-created a set of usage guidelines for faculty and students. The course ran twice as a one-credit, eight-week course: the first run developed the guidelines, and the second tested them to see how they worked in practice. Faculty are now reviewing the proposed guidelines and providing additional feedback before we implement them.
How are the policies on AI translating into day-to-day practices at College Unbound?
Lora: Students must clearly indicate what portion of their work was generated by AI tools and which tool they used. The policy advises against copying and pasting direct outputs; generative AI content should instead be edited and revised, and only a certain percentage of any submitted work may be generated by the tool. Students generally shouldn't use generative AI tools for reflective submissions, and they are responsible for any negative outcomes resulting from their use of these tools.
Faculty can develop their own usage expectations within their courses, but they must follow the same guidelines and process for students who do not meet those expectations. Faculty should likewise indicate how much of their own content was generated by an AI tool, if one was used, and they, too, are required to use the appropriate citation formats. Faculty cannot require students to have an account with any generative AI tool. At this time, faculty should also be mindful of their own use of generative AI tools and maintain a balance between what they ask of students and what they themselves do.
What is it like to participate as a student who is co-developing their school's approach to generative AI use while also applying it to their work in real time?
Lora: I feel that student-centered approaches to developing these guidelines, whether for the classroom or for university-wide use, are important for several reasons. They definitely enhance student engagement: when students are involved in developing guidelines for their school, they have a stronger sense of being invested in their own learning process. By giving students a voice and a stake in the decision-making, I think they feel more empowered and motivated to use the tools appropriately and effectively. It also promotes ethical and responsible use of AI tools.
Throughout my studies this semester, engaging with AI tools and the policy development work related to their use, I've grown as a student because the questions raised made me think outside the box. I found myself thinking more deeply and reflecting more critically on how these tools are used. I think the process can help many students open up their minds just a little bit more and, essentially, become more successful in their college careers. Allowing students to have their voices heard is important in shaping these guidelines.
In addition to the work going on at College Unbound, you are also involved in gathering examples of course materials like syllabi that incorporate policies or activities related to generative AI—can you tell us more about this initiative? And are there any other resources you see as especially helpful?
Lance: My background is in open educational practices, from open educational resources (which I've been creating, teaching about, and using for more than a decade) to open access (the focus of my dissertation), so putting things out for others to share and use is a habit for me. Many instructional designers also come from the school of “share & save time,” where we create things and pass them along our networks. In this case, I knew from conversations and observations that many institutions didn't have many ideas about what to do. So I decided to look for who was sharing syllabus policies and started crowdsourcing AI syllabus policies in this document (folks can also fill out the form at the top to add theirs).
In terms of learning more or finding others to have these conversations, Facebook, Twitter, Mastodon, and YouTube were great places to see what others were doing. A couple of communities more dedicated to this discussion were also great spaces to connect, learn, and share, including the Facebook groups Instructional Design in Education and Higher Ed discussions of AI writing, as well as the Google Group AI in Education. In fact, Anna Mills, Maha Bali, and I just published a piece about how open educational practices during this time have been incredibly useful for navigating generative AI.
You recently shared your student-centered approach to generative AI policy development at NERCOMP. What were your key takeaways from connecting with, and learning from, other institutions doing this work on the ground?
Lora: During our panel discussion at NERCOMP, it became clear that integrating generative AI into teaching and learning practices requires a collaborative and multidisciplinary approach. Different institutions have varying priorities, needs, and challenges, so there is no one-size-fits-all solution. However, common themes emerged, including the importance of collaboration and communication among stakeholders, transparency and accountability in policies and practices, ethical considerations, and ongoing evaluation and refinement. Ultimately, AI is here to stay, and institutions must have policies and procedures in place to ensure its ethical use.
Lance: The four-hour workshop was a pre-conference event for NERCOMP's annual conference, focused on “Institutional Policy Development for AI Generative Tools in Teaching and Learning.” It included a one-hour student panel and a series of activities and brainstorming about what is needed, and what remains uncertain, for different areas of the institution. A major takeaway from attendees was the importance of including students in the conversation. Attendees also realized that different areas (faculty, IT, student support, etc.) have different needs, concerns, and questions about how to think through and develop usage guidelines.
Any advice for those developing resources to facilitate teaching and learning with generative AI at their universities?
Lance: All of this is new, and it's going to keep changing over the next few years. Given that, I think it means a few things for teaching and learning. First, faculty should learn a bit more about the technology and play with it; think about its uses and relevance to your field, discipline, and industry. Second, acknowledging that it exists is a good first step; you can have a policy about it while also knowing that there might be no good way to enforce that policy. Yes, there are AI-plagiarism detection programs out there, but they largely will not help in the long run given the complexity of this problem. Third, faculty should revisit and rethink what it means to demonstrate learning and the different ways that can happen. We have defaulted to the written word in highly rhetorical and formatted structures, but look beyond that; there are lots of great communities of practice out there for nearly every discipline. Fourth, we can all do better with teaching and learning if we embrace Jesse Stommel's four-word pedagogical advice: “Start by trusting students.” We can do that by recognizing that teaching and learning are relationships, and we need to foster meaningful relationships with students where they feel safe to fail and fumble in the course. Through authentic relationships, we can motivate students to produce work that reflects their efforts.

There's obviously a lot more to say on this (ever-changing) topic, but we also created this video, which provides some highlights and guidance for faculty thinking about generative AI in higher education.