Exploring AI, the Oberlin Way
A campus-wide effort centers curiosity, caution, and community in shaping the future of artificial intelligence.
May 7, 2026
Office of Communications
Photo credit: Yevhen Gulenko
At Oberlin, the Year of AI Exploration began not with a directive, but with a shared sense of curiosity and responsibility.
Artificial intelligence had arrived in classrooms, studios, and workplaces across campus, raising essential questions. When might it be beneficial to engage with AI—and when might it be unproductive or even harmful? How might its impact differ across a college and conservatory with so many distinct disciplines? And what responsibility does the institution bear, not only to its campus community of today, but to the students, faculty, and staff of tomorrow?
Rather than rushing toward adoption, Oberlin chose to begin with exploration—an approach grounded in curiosity, caution, and community. The goal was to understand AI within the institution’s own context and to learn as a community. Only through thoughtful, research-based engagement—engagement inclusive of diverse perspectives—could we begin to shape how AI should be used on our campus and beyond.
This effort was supported by an AI Advisory Group convened by Oberlin President Carmen Twillie Ambar and made up of faculty and staff representing all corners of campus. Its efforts were coordinated by Chris Drennen, Oberlin’s Director of Academic Technology and Instructional Support, as well as Associate Professor of Computer Science Adam Eck and Professor of Music Theory Joseph Lubben, who served as co-directors of AI strategy and innovation on behalf of the college and the conservatory, respectively.
Faculty, staff, and students gathered to hear from higher education AI experts, including professors Lauren Goodlad (Rutgers University) and Christopher White ’05 (University of Massachusetts), as well as Tricia Bertram Gallant, Director of the Academic Integrity Office at the University of California San Diego. Each brought a different disciplinary lens to the discussion.
Workshops and events complemented these talks. A dozen sessions—organized by groups including the Center for Information Technology, the Lemle Center for Innovation and Excellence in Teaching and Scholarship, the Oberlin College Libraries, and individual faculty—created space for hands-on learning and reflection. Topics like “AI & the Liberal Arts Imagination” and “What We Talk About When We Talk About AI” encouraged participants to think critically about AI’s role in a liberal arts setting.
In the conservatory, a series of faculty- and alumni-led workshops fueled discussions on historical perspectives on computational creativity, the use of generative AI in music-making, and how AI music generation works.
Also in the fall, faculty and staff gained access to institutionally licensed versions of two leading AI platforms, OpenAI’s ChatGPT and Google’s Gemini, giving them a secure environment in which to examine the tools.
Plans for the spring semester extended this momentum, with additional workshops, the formation of communities of practice, and new opportunities for curricular development. Nine grants of up to $5,000 each were allocated to support experimentation and innovation, allowing faculty and staff from a host of campus offices and departments to explore how AI might fit into their work. In the coming year, each grant recipient will report the key takeaways of their research to campus.
At the same time, attention turned to building AI literacy and shaping policy.
Efforts began to provide students with training on how to responsibly engage with generative AI. This included not only technical understanding, but also consideration of academic integrity, creativity, emotional well-being, data security, environmental impact, and accessibility.
Institutional policies were revisited. The writing requirement came under review, with a focus on how AI might affect students’ ability to communicate effectively, think critically, and adapt their writing to different audiences and disciplines.
The Honor Code, revised on its regular cycle, was updated to remain flexible in the face of rapidly evolving technologies. While the default position limits the use of generative AI in assignments, instructors are given the authority to determine appropriate use within their courses.
“The faculty have made clear that the default framing of AI in the classroom is that it is not permitted,” says President Ambar. “But, this ‘default’ gives way to the individual faculty members’ discretion about the use of AI for research, broad understanding, uses in specialized projects, assessments, and more.
“Like all new technology, its use and development will grow and change; consequently, faculty, staff, and student usage will grow and change. We will, in our various settings on campus, continue to develop the ways AI can and should be used, and where it isn’t appropriate.”
Out of this period of exploration, new academic pathways have begun to take shape.
A Critical AI Studies minor, set to launch in fall 2026, aims to prepare students to evaluate AI systems—examining their usefulness, ethical implications, and limitations within broader social contexts.
“The most important outcomes of our collective exploration were the rich communal conversations we had during the year,” says Ambar. “These conversations resulted in a better campus-wide understanding of AI, its benefits, and challenges. We now have clarity on the range of ways faculty and administrative units will use AI going forward.”
In the near future, all Oberlin students will have the opportunity to receive training in responsible use of AI and gain access to campus-licensed versions of AI tools. According to Ambar, faculty approval of the student training module laid the groundwork for student access to the institution’s enterprise AI platform—most likely Gemini—in the year ahead.
“Implementation of student training and broad student access to the institution’s chosen AI platform will happen fully in the coming year,” she says. “Additionally, we are piloting various AI projects that would support the work of administrative units in substantive ways.”
Together, these initiatives reflect a commitment to a human-centered approach—one that builds on Oberlin’s historic strengths of academic and artistic excellence and its commitment to developing future leaders who are empowered to foster change around the globe.
“I think conversations have evolved in their depth over the course of the year as we have collectively gained new insights and understandings about generative AI, as well as its impacts on higher education and society around us,” says Eck. “Those conversations started with trying to understand what is generative AI—including cutting through the hype ever-present in public discourse—and have evolved to deeper conversations weighing the short- and long-term benefits and risks of utilizing this technology in different scenarios in education and beyond.”
The Year of AI Exploration was never intended to produce simple answers. Instead, it opened space for ongoing inquiry, collaboration, and reflection. The invitation remains: to continue exploring, together.
“That evolution is never finished,” says Eck. “It will continue in the coming years as the Oberlin community continues to explore and evaluate these emerging technologies.”