On June 3, 2025, the University of Baltimore hosted its second AI Summit, organized by the university’s Center for Excellence in Learning, Teaching, and Technology (CELTT). The event, as described by the organizers, was “an in-depth exploration of artificial intelligence’s transformative impact across sectors, with a particular focus on workforce development, educational adaptation, and responsible innovation.” A key strength of the summit was its commitment to bridging the gap between what is happening with AI in the classroom and what is happening in the workplace. Determining how to best prepare students to enter the workforce as AI grows increasingly ubiquitous was a recurring theme throughout the day.

The summit brought together an array of stakeholders from within education, industry, and public policy, hailing from the local region and beyond. Maryland Secretary of Labor Portia Wu and Secretary of Higher Education Sanjay Rai gave opening remarks, signaling just how significant AI’s impacts on education and the workplace will continue to be. The subsequent sessions addressed the topics of AI literacy, the challenges and opportunities for today’s instructors and students, and the impacts of AI on entrepreneurship and business, among others. I was excited to have the opportunity to share findings from the recent “Making AI Generative for Higher Education” cohort project (the University of Baltimore was one of the participating institutions) at the summit.

In this post, I share a few insights from the event.

#1: AI skills are important, but not the only skills needed to enter an AI-infused workplace.

The local entrepreneurs and software developers speaking at the AI Summit reported that they are already seeing efficiency gains from integrating AI into their workplaces. During the “AI in Business and Entrepreneurship” panel, Bill Karpovich noted that his company is not replacing existing employees with AI, but is now able to hire fewer staff while still growing. Familiarity with AI, according to several speakers, is a crucial skill in many fields. They also indicated that colleges and universities should be taking steps to integrate AI into their curricula.

That said, speakers agreed that students need other capabilities—beyond knowing how to use AI—to enter the workforce. For instance, a few speakers cited “versatility” as an essential quality for graduates, one that will allow them to more easily adapt to different kinds of work as AI continues to transform job tasks and the job market. Other speakers noted that “soft” or “durable” skills remain important in the age of AI, since employees will still need to excel at “human” functions like leadership, teamwork, and communication that AI cannot sufficiently replace. One speaker, France Hoang, CEO of BoodleBox, additionally proposed that “domain expertise” remains a valued quality, as it allows employees to effectively evaluate AI outputs in a subject area they know well.

This variety of crucial skill sets for an AI-infused job market indicates a few things. First, determining what makes a graduating student most employable today is complex and ever evolving, just as it was before the recent rise of AI. And what students need extends beyond familiarity with AI alone. Discussions at the summit indicated that industries where AI is having a transformative impact are still figuring out exactly what skills their employees will need moving forward. Even seemingly opposing skill sets—such as “versatility” and “domain expertise”—could both end up being relevant depending on the specific profession and workplace.

Second, as stakeholders across higher education grapple with the place of AI in student learning, they will want to make sure that other crucial learning outcomes beyond AI literacy do not slip out of focus. This means, for example, that educators should continue to help students develop capabilities like versatility, disciplinary expertise, and soft/durable skills—and ensure that when they integrate AI into their teaching, they are not compromising these other learning outcomes.

#2: Overwhelmed faculty are struggling to keep pace.

Individual faculty opinions on AI are mixed, to say the least. However, many have heard the message that their students need basic familiarity with AI tools to enter the workforce and are willing to engage.

That said, one of the primary messages from faculty at the summit was that they are overwhelmed. The space is moving too fast for them to keep up, and they feel they do not have the necessary time to adapt. We all frequently hear how generative AI technology is fundamentally changing teaching and learning and will continue to do so. If that is the case, it makes sense that instructors feel overwhelmed: rethinking decades-old practices is not something that can be accomplished overnight.

One of the trickiest issues is assessment. In the wake of ChatGPT’s commercial release in late 2022, the question of how instructors would prevent or detect student “cheaters,” particularly in essay writing, was a hot topic. While plagiarism is no longer dominating the headlines to the same extent—even if the issue is not resolved—the underlying question of how to assess student work in the age of generative AI remains a topic of concern.

This subject came up a number of times at the AI Summit. A “process over product” approach—in which a student’s process for completing their assignment is assessed, rather than only their final product—dominated most of these conversations. Instructors expressed hope that this could encourage critical thinking in students, while also allowing them to use generative AI. Many instructors seem to still be figuring out exactly what successful models of “process over product” assessment look like, though, with speakers like Mike Kentz proposing their own models.

If widely adopted, these changes in assessment practices would shift dominant student culture away from its traditional focus on the final product—something many summit participants saw as a potential positive impact.

#3: The AI space is moving fast, but we need to keep our eye on who’s moving it where, and why.

The AI Summit was characterized by a palpable excitement about the opportunities that AI technology could open up for higher education, industry, and even the local Baltimore community. However, at different points throughout the day, there were also reminders to remain aware of the wider societal dynamics that inform how we think about AI.

During the morning session on “AI’s Grand Challenges,” speaker Amen Ra Mashariki, director of AI and data strategies at the Bezos Earth Fund, cited a quote attributed to former hockey player Wayne Gretzky: good players skate to where the puck is, but great players skate to where the puck will be. Mashariki used this quote to propose an analogy: when we make decisions regarding AI in education, industry, or government, we are either skating towards the puck, or—if we are a bit wiser—towards where we think the puck is headed. The latter reflects more deliberate decision-making, in which we consider where we are headed with AI and make strategic choices now to prepare for that future. Mashariki also pointed out that private-sector actors, such as AI developers, “move” the puck to where they want it to be through their actions and public statements, encouraging the rest of us to skate in that direction.

The message I took from these comments was a reminder to be critical and conscientious in decision-making when it comes to AI. To extend Mashariki’s metaphor, perhaps stakeholders in education should ask whether they, too, have the ability to “move” the puck to where they want it to be, rather than simply skating towards where others tell them it is.

Ultimately, higher education as a whole will want to “skate” in the direction that will have the most positive impact on a wider scale. In order to know what that “right” direction even is, though, we need to continue fostering opportunities—like this AI Summit at the University of Baltimore—for discussion and analysis to better understand the challenges and ethical dilemmas AI is bringing about.