Running to the Noise, Episode 23

Artificial Intelligence: Tom Dietterich ’77 on the Promise and Pitfalls of Machines that Learn

Thomas Dietterich.

Artificial intelligence has evolved from an abstract concept into one of the most transformative forces of our time. When Tom Dietterich graduated from Oberlin in 1977 with a degree in mathematics, AI was still largely theoretical. Over the decades that followed, his pioneering research helped turn theory into reality.

A distinguished computer scientist and one of the early architects of machine learning, Dietterich laid the groundwork for the algorithms that now drive everything from voice assistants and climate models to medical diagnostics and drug development. His work has made him a sought-after authority. He advises the U.S. government on AI technologies and has earned some of the field’s top honors, including the Award for Research Excellence from the International Joint Conference on Artificial Intelligence—a career achievement shared by just 25 scientists since 1985.

In this episode of Running to the Noise, Oberlin College President Carmen Twillie Ambar sits down with Tom Dietterich, Distinguished Professor Emeritus at Oregon State University, to explore both the promise and the pitfalls of artificial intelligence. Together, they trace the evolution of AI from its beginnings to its current influence across nearly every industry, and discuss how a liberal arts education uniquely prepares us to ask not just what AI can do, but what we should do with it.

From the environmental impact of large-scale computing to the creative and ethical questions facing artists and educators, Dietterich offers a nuanced, hopeful, and deeply human vision for how we can shape the future of intelligent machines.

This isn’t just a conversation about technology; it is a reflection on curiosity, ethics, and what it means to stay human in an age of algorithms.

What We Cover in this Episode

  • The origins of machine learning and how early innovators taught computers to “learn.”
  • The environmental and ethical implications of AI and how efficiency and innovation can coexist.
  • Why AI’s biggest challenge is not what it can do, but what humans choose to do with it.
  • How a liberal arts foundation fosters critical thinking, ethics, and responsible innovation.
  • The promise of “computational sustainability” and AI’s role in addressing global challenges.

Listen Now

Carmen Twillie Ambar: I am Carmen Twillie Ambar, president of Oberlin College and Conservatory. Welcome to Running to the Noise, where I speak with all sorts of folks who are tackling our toughest problems and working to spark positive change around the world. Because here at Oberlin, we don’t shy away from the challenging situations that threaten to divide us.

We run toward them.

When Tom Dietterich earned his math degree from Oberlin in 1977, artificial intelligence systems couldn’t learn or adapt. They simply followed rules. Over the next few decades, Tom helped change that. He became one of the pioneers of machine learning—the technology that allows AI to make sense of data, spot patterns, and make decisions.

Today, it powers everything—from Siri on your phone to the algorithms planning your commute, recommending your next podcast, or helping doctors develop targeted medical treatments. Tom’s work has made him a sought-after authority. He advises the U.S. government on AI technologies and has earned some of the field’s top honors.

His research has shown how data and algorithms can solve real-world problems: improving wildfire management, protecting endangered species, enhancing agricultural productivity, and advancing drug development. When machine learning technologies leapt from the lab into daily life, Tom’s liberal arts education helped him step back and ask the bigger questions.

How will AI shape our institutions? How will it affect our understanding of ourselves? And what ethical responsibilities come with creating machines that learn and make decisions that increasingly impact our lives? In this episode, I speak with Tom, distinguished professor emeritus at Oregon State University, about the power and the peril of artificial intelligence—and why the real challenge of AI isn’t what it can do, but what we choose to do with it.

I’m so excited to have Tom Dietterich here today because we rolled out a year of AI exploration at Oberlin—a campus-wide initiative that includes speakers, workshops, and opportunities for people to think about the impact of AI in all the work they do. Obviously, we’re thinking about environmental and privacy impacts, but also how this technology might reshape higher education—what things we can take from it that we feel really positive about and what things we should be concerned about.

I’m really excited to have this conversation with you because I think it fits within that AI theme, and I’m hoping you can help us think more clearly about this technology and what it means. Tom, I guess I wanted to start with whether you think it’s right to describe you as one of the founders of AI or machine learning. How would you want people to think about your early contributions to this technology?

Thomas Dietterich: I’m not a founder. I’m probably one of the first graduate students who worked in artificial intelligence. Generally, we date the founding of AI to 1956—John McCarthy, Herbert Simon, and people like that. And I was only two years old at the time, so I was not at that first workshop at Dartmouth.

But starting in 1978—or fall ’77, right after I graduated from Oberlin—I began a PhD program at the University of Illinois and later moved on to Stanford to finish my PhD. At that point, the field of machine learning was first given its name and started to have regular meetings. There was a workshop in 1980—thirty people attended. This year, I think the big AI conferences are drawing 15,000 to 17,000 attendees. So it’s been quite a ride.

Carmen: I’m wondering if you can help us understand what those early days were like. I think to myself—what were the courses? What were you trying to accomplish in those early days as you were thinking about machine learning and its possibilities? Take us back to those early moments.

Tom: Of course, whenever you’re programming a computer to do something, you need to somehow specify what you want the computer to do and how to do it. Ideally, you want to provide a step-by-step recipe, and we write that in a computer programming language, of course.

But when people started to think about things like speech recognition, language translation, or maybe controlling a robot, we quickly realized that we had no idea how to write down the step-by-step process by which our eyes see that—oh, that’s a dog running across the quad—or that we hear something and recognize, oh, that’s a Bach prelude.

It happens in a part of our brains that we cannot introspect into. How are we supposed to write those programs? So one thought was maybe we could “teach” the computer—quote-unquote, I always hesitate to use these human-loaded words—by providing examples of what the input should be and what the output should be, and then seeing if we could write a program that could find patterns in the input that would tell it how to produce the output.

That was the idea of machine learning: trying to fit a function, or mathematically, find a mapping between inputs and outputs.
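
To make “fitting a function” concrete, here is a minimal sketch in Python (an illustration only, not code from Dietterich’s research): given example input-output pairs, the learning step chooses the parameters of a mapping so that it reproduces the example outputs, and the fitted mapping can then be applied to inputs it has never seen.

```python
# A minimal illustration of "learning as function fitting": given example
# (input, output) pairs, find a mapping that predicts outputs for new inputs.
# Here the mapping is a simple linear model fit by least squares.
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised data: inputs X and the outputs y we want the program to reproduce.
X = rng.uniform(-1, 1, size=(100, 3))             # 100 examples, 3 input features
true_w = np.array([2.0, -1.0, 0.5])               # hidden pattern behind the data
y = X @ true_w + rng.normal(scale=0.1, size=100)  # outputs = pattern + noise

# "Learning": choose the weights that best map the example inputs to outputs.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# The learned function can now be applied to an input it has never seen.
x_new = np.array([0.3, -0.2, 0.8])
print("predicted output:", x_new @ w_hat)
```

Real systems fit far richer function classes than a linear model, but the supervised-learning recipe is the same: examples in, a fitted mapping out.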

Carmen: As I was preparing for this interview, I read that you mentioned when your team was teaching robots to move, they were literally falling and trying again—that was how you thought about the work: trying to teach the computer to “learn how to learn.” Maybe that’s not quite the right phrase?

Tom: No, it’s just that for some tasks—like language translation—we can have a sentence in Spanish and the corresponding sentence in English because a human translator has created it. We sometimes call that supervised learning—we’re giving the system the right answer.

But it’s much more difficult in many tasks, and robotic motion is one of them, although I personally haven’t worked on that so much. I’ve worked on the underlying mathematics and algorithms. Another example is in biological conservation, where you’re considering what steps could we take to prevent an invasive species from taking over.

There’s no human expert that knows the right answer, so you can’t just tell the system, “When you have this situation, here’s what you should do.” Instead, what we tend to do is build a simulation of some kind, and then have the computer test out different strategies in simulation, figure out which ones work best, and then learn from those. That’s known as reinforcement learning—or, technically, bandit feedback, which comes from the idea of a slot machine being a “one-armed bandit.” You can only learn what payout you get from a machine by pulling the lever.

Carmen: So by pulling the lever multiple times, you get different answers, but ultimately you arrive at the right answers through continual experimentation—is that the concept?

Tom: Right, yes. Of course, with robots, it’s extremely difficult in the physical world to have them learn by trial and error, so we usually do that first in simulation. One of the big advances over the last 30 years is that we have much better physical simulators that can simulate the mechanics and friction and everything that the robot would experience. The computer can do millions or billions of simulated experiments and come up with a strategy for walking, let’s say, that’s pretty good—and then you test that out in the real world.

Several years ago, Sony came out with this robot called the Aibo—it’s the little puppy. Colleagues of ours at the University of New South Wales and Carnegie Mellon wore out the hip joints of their robots testing different ways of walking across carpeted floors and had to give them hip replacements as a result.
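
As a concrete, purely illustrative version of the trial-and-error idea described above, the sketch below implements an epsilon-greedy strategy for a three-armed bandit. The payout probabilities, exploration rate, and number of pulls are invented for the example; none of this is code from the robotics work discussed here.

```python
# A toy illustration of "bandit feedback": each lever pays off with an unknown
# probability, and the learner can only discover those payoffs by pulling
# levers and observing the results.
import random

true_payout_probs = [0.2, 0.5, 0.8]   # hidden from the learner
pulls = [0, 0, 0]
wins = [0.0, 0.0, 0.0]
epsilon = 0.1                          # fraction of pulls spent exploring

def estimated_value(arm):
    # Current estimate of an arm's payoff rate from the pulls made so far.
    return wins[arm] / pulls[arm] if pulls[arm] else 0.0

for t in range(10_000):
    # Mostly pull the lever that currently looks best, but occasionally explore.
    if random.random() < epsilon:
        arm = random.randrange(3)
    else:
        arm = max(range(3), key=estimated_value)
    reward = 1.0 if random.random() < true_payout_probs[arm] else 0.0
    pulls[arm] += 1
    wins[arm] += reward

print("pulls per arm:", pulls)   # most pulls should concentrate on the best arm
```

The same learn-by-trying loop, run inside a physics simulator rather than against slot machines, is what lets a simulated robot evaluate millions of candidate walking strategies before any hardware is put at risk.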

Carmen: So I was wondering if you could talk a little bit about one of the concerns people raise—I’m sure you’ve heard many—around environmental issues, ethical issues, all the things that people bring up in relation to AI. Maybe you could take just one of those, like the environmental impact, and give us your perspective. Should we be concerned? What should we be concerned about, and how should we analyze our progress toward addressing those concerns?

Tom: Let’s first talk about the environmental impact. I think that’s actually the most tractable of the questions. Over the last few years, there’s been a huge investment of capital in trying to scale up large language model systems.

Back in 2017, this approach for building large language models—known as the transformer model—was invented at Google. The initial results were intriguing, so people thought, “Maybe we should just train them on a lot more data,” and that led to GPT-3, then GPT-4, GPT-5, and all their siblings and competitors.

But there was very little attention paid to making this efficient. Everyone was just curious what kind of behavior we could get out of these systems—how they would act. I would characterize this as the brute-force period. People threw huge amounts of computing power and electricity at these models, along with vast quantities of water to cool the data centers. It was very inefficient.

Our colleagues and competitors in China, not having access to such high-end machines, were forced to innovate—and they quickly arrived at methods roughly 20 times more efficient. In the meantime, NVIDIA, which makes the majority of the chips used to run these systems, has been increasing the efficiency of its chips at a very rapid pace.

So there’s a huge motivation from the companies spending billions of dollars running these systems to make them more efficient, cheaper, and less energy-intensive—because it’s costing them real money. And it’s not clear they’re making that money back yet, since the technology is still so immature and not being used as widely as they’d like.

So I think the problem of environmental impact will, to some extent, solve itself—because the incentives will be there.

Carmen: The incentives are very powerful here, right?

Tom: Exactly. We have a lot of expertise in computer science, electrical engineering, and power systems that we can bring to bear. We know how to make these things run faster. The question is, what happens if demand increases exponentially?

We could end up with something like adding lanes to a freeway—where more people get on the freeway and traffic gets worse. We’ll probably see some of that, particularly if the technology turns out to be as useful as people claim it will be.

But I think there’s a deeper question: is there a better way of building AI systems that doesn’t rely on massive amounts of training data and immense models?

I feel like the machine learning field is in a moment of crisis—in the sense of what Thomas Kuhn describes in The Structure of Scientific Revolutions. A scientific field proceeds through routine experimentation until things just don’t seem to be working.

As a machine learning researcher, I could never have imagined we’d have this much capital investment. We’ve scaled up these systems massively, yet we’re finding they still don’t do what we want them to do.

The most famous issue, of course, is hallucination—systems making things up. The original term came from language translation, where a translation would mention something that didn’t appear in the source text—it was just something very likely to have been there. In computer vision, you might ask a computer to describe a picture of a bathroom showing a sink and towel rack, and it might say, “There’s also a mirror there.” It’s not in the image, but probably in the room.

That’s where the term hallucination came from—it’s imagining things that aren’t there. But more broadly, we’ve seen over 150 legal cases where documents submitted by lawyers cited court cases that don’t exist. There are also made-up academic citations in scientific papers.

Another problem is that these systems don’t seem able to learn abstractions that generalize. It’s much closer to memorizing billions of facts and interpolating between them, but not going far beyond the data.

A good example: you can train a computer on multi-digit multiplication problems—two digits, three digits, up to nine digits—but no matter how many examples you give, the models can’t handle problems that are much larger. If you train them on eight-digit multiplication, they can’t do sixteen or thirty-two digits.

They’re not learning the general rules. What’s funny is you can ask them what the rules are, and they’ll tell you—but they can’t actually use that knowledge. You can also ask them the rules of chess—but they can’t play chess.

That’s evidence we may need a fundamentally different way of building AI systems than what we have now.

Carmen: I think that’s so fascinating because one of the things that’s been happening on our campus—there’s been a lot of concern about AI technology replacing people’s jobs, eliminating the work that creatives do. On the faculty side, people are certainly wondering whether students will lose the same critical thinking skills and cultural competency skills because of their use of AI.

So I’m interested in your perspective about this notion of AI replacing human thinking. You’ve talked about some of its limitations and the things it really can’t do—and maybe the model will be built in a way that changes that—but I’m wondering whether you have that same level of concern about how AI might eliminate industries and the kind of human involvement in the work that we do.

Tom: Well, absolutely, I share those concerns. Where to begin—writing, thinking, and knowing how to construct a logically sound argument are absolutely essential skills, and things we have to teach in our universities all the time. We do that mostly by teaching writing, and like every academic, I find it’s only when I sit down and try to write out my arguments clearly that I start to see all the holes in them.

I think it’s absolutely essential that we do that. Now, the question is—particularly in the earlier courses, where the writing assignments are fairly routine—some of the large language model systems have read all of the previous essays on the same topics and can generate a new one that’s just another version of those.

I think we have to ask whether those exercises were actually helping students learn to think, even in the old days. They’re certainly now going to be able to outsource that thinking to these models.

Carmen: So are you asking us to rethink how we approach the academic experience and what we’re trying to teach students in light of this new technology? Are you suggesting that we still want to teach the same things—like critical thinking, understanding how an argument is structured, what it means to have evidence, whether that’s evidence in a philosophical argument, a scientific argument, or a mathematical one?

Tom: A weakness of these large language models is that they don’t understand what an argument is or what evidence means. So we still want to teach that. But the question is how. If it turns out that students can produce a sort of simulation of having thought through a problem and we can’t tell the difference, that means that we weren’t doing it right to begin with, in some sense.

Carmen: And you’re saying there’s probably a higher-order level of teaching that we need to do so we can help students get to that next level of inquiry and understanding.

Tom: I mean, of course, as teachers, we set tasks that we hope have strong pedagogical value for our students. Maybe we need to do more in-class writing. Maybe we need to do more human-to-human paired writing. This is where our creativity as teachers and as students needs to be brought to bear. But I totally grant—it’s a challenge, and a lot of our old pedagogy, at least our techniques, will have to be replaced. And that replacement is likely to be more expensive.

Carmen: One of the reasons why I wanted to do this year of exploration of AI at Oberlin is because we can’t ignore this technology. It’s here. We’d be doing our students a disservice not to help them understand it and think about it. And if we need to shift how we do our work in order to help better prepare our students, this year of exploration is about figuring out how to do that—because turning a blind eye to AI isn’t going to help us solve the challenges.

One of the things I really appreciate about what you’ve said about your own experience here at Oberlin is how few computer scientists have a liberal arts background, and how yours has informed the way you’ve thought about this technology. It’s impressed upon you the importance of thinking about its ethical impacts in ways that, I assume, might not have been as clear to you if you hadn’t had the type of background you had at Oberlin.

Tom: The most important class I took at Oberlin was on political philosophy with Harlan Wilson. It was an extremely popular class, especially for more scientifically oriented students. In his class, he said, “Karl Marx claimed to be a scientist—read Marx and critique his thinking. And also go read Thomas Kuhn’s The Structure of Scientific Revolutions for a glimpse at what was then contemporary philosophy of science.”

It was really that introduction to the philosophy of science that led me, in graduate school, to think about machine learning. The goal of machine learning is, in some sense, to automate scientific inquiry. The central question of the philosophy of science—or one of them—is: how do we acquire new knowledge and learn new things about the world in a systematic and rational way?

How do we go from our common-sense understanding of the world to quantum mechanics through a series of rational steps? I thought that was a fascinating question worthy of study, and that really motivated me to do more reading in the philosophy of science after I left.

But the other thing, of course, is that one of the things you’re taught at Oberlin is—it’s complicated. Basically, everything’s complicated. You take classes in history, economics, sociology—and it’s just drummed into you that if anybody proposes a simple answer, it’s got to be wrong.

Carmen: One of the ways I describe the experience here is that students learn how to analyze complexity.

Tom: Well, or at least you know it’s there. You may run away from it if you can, but engineers often emphasize seeking out the part of the problem that can be solved by engineering methods—by applying physics and chemistry and so on—and focusing on that, while trying to ignore the larger sociotechnical concerns.

That was the world I lived in. I would say for the first ten years of my career, I was having a good time solving little mathematical puzzles in my field, doing normal science. But I was a little disappointed that they weren’t leading to any real-world impact. Around 1996, when I was first promoted to full professor, I thought, you know, it’s time to branch out. So I started collaborating with ecologists here at Oregon State University.

Carmen: Was that when you were modeling migrations?

Tom: I didn’t get to bird migration for another five or six years. I was first working on global vegetation models, because the global climate models at that time only modeled the atmosphere—and the Earth was just considered a flat surface. The question was, how about if we bring all of plant life into these models—would that help?

I worked with collaborators here who were doing that, and then I moved on to some other things. Eventually, I got connected with the Cornell Lab of Ornithology. They have a big citizen science project called eBird—one of the oldest—where bird watchers fill out checklists every time they go birdwatching. The question was: could we actually do any science with this so-called citizen science data?

One of the things we attempted was to predict bird migration and forecast when it’s going to happen—because then we might be able to make some interventions, particularly turning off lights in skyscrapers or other sources of artificial light, because that really confuses the birds.

Carmen: Could you talk a little bit about those parts of your research project? I know you’ve done some work on fires and the liability standards associated with wildfires. Maybe you could help our audience understand how AI plays a role in that type of research. Some of what concerns people is they can only see the things that make them fearful about what this technology may be. They’re not as well versed as you are about some of the ways that this technology might help us do right by the planet, do right by the world. So maybe you could help our audience understand some of your research in that context.

Tom: It is important to realize that there are many, many branches of artificial intelligence. It’s a banner that is quite confusing. Large language models are really just a small part, although they’re sucking all the oxygen out of the room right now. But an area that I worked in with wildfire and also with invasive species is really the problem of management. So in wildfires, invasive species, and really in agriculture generally, you can think of it as trying to manage an ecosystem—make sure that it’s functioning well, that it’s not going to go extinct, and so on.

We end up with a problem very similar to the robot problem we were talking about before. We don’t know the right way to manage these systems. We usually build a simulation first. In the case of wildfire, we were able to repurpose and combine several big simulators that already existed—particularly something called FARSITE from the U.S. Forest Service in Missoula, Montana—and then simulate different management strategies.

Now, in that case we were interested in the legal question of what kind of liability rules should apply to wildfire. Right now, the usual standard is: let’s say we have two different landowners who own adjacent property and a fire starts on, say, Alice’s land and burns into Bob’s land. Does Alice have any obligation to pay for the damage that Bob experienced, or not? Currently the answer is no. Wildfire is treated as an act of God; everybody just has to deal with their own property.

But there was a proposal to have a liability rule in which Alice would have to reimburse Bob for his losses. There was also the thought that if Alice had taken steps to try to reduce fire risk on her property—say, reduced the load of accumulated fuels in our forests here in the West—then maybe she should not be liable, so we could have some incentive for her to behave differently.

So we compared various rules. What we found was that the liability rule making Alice responsible for Bob’s losses was a disaster. Where does the AI come in? In simulation. We simulate what would happen with various wildfires and simulate what Alice and Bob would do—what would their response be? Each of them has some objectives they’re trying to achieve. In this case, we were looking purely at economics.

What we found is that fire is so rare that the optimal behavior is to do nothing, because chances are a fire won’t start on your property. And if you’re hurt by it, someone else will pay for it, so you have no incentive at all. This was really applied economics, in some sense. We found that if you had a requirement that you would have no liability if you had brought your property up to a minimum standard, that was better and it altered behavior—although we still found that your risk was much less predictable under the liability scheme than under the scheme where you just have to take care of your own property no matter what.

Which makes sense: if you have to take care of your own property, then you just have to live with the risks and you’ll take steps to manage them. If there’s a chance that someone else will pay for it, then your risk could be much bigger or much lower, and the variability goes way up. In some sense, it’s a less desirable state of affairs. But in economics you talk about the social optimum—imagining one decision maker who could control the entire economy, what would the outcome be? We found that we could approach that with this liability standard.
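
To give a flavor of how a policy question like this becomes a computation, here is a deliberately simplified Monte Carlo sketch with invented numbers. It is an illustration only; the actual study coupled detailed fire-spread simulators such as FARSITE with economic models of landowner behavior.

```python
# A deliberately toy Monte Carlo comparison of liability rules for two adjacent
# landowners. All parameters are made up for illustration.
import random

N_YEARS = 100_000
P_IGNITION = 0.01        # assumed chance per year that a fire starts on a parcel
DAMAGE = 100.0           # assumed loss when a fire burns a parcel
MITIGATION_COST = 1.0    # assumed annual cost of fuel-reduction work
P_SPREAD_UNTREATED = 0.5 # chance fire spreads to the neighbor if fuels untreated
P_SPREAD_TREATED = 0.1   # chance it spreads if the owner reduced fuels

def expected_cost(alice_treats, liability_rule):
    """Average annual cost to Alice under a given rule ('none' or 'strict')."""
    total = 0.0
    p_spread = P_SPREAD_TREATED if alice_treats else P_SPREAD_UNTREATED
    for _ in range(N_YEARS):
        cost = MITIGATION_COST if alice_treats else 0.0
        if random.random() < P_IGNITION:            # fire starts on Alice's land
            cost += DAMAGE                          # Alice loses her own property
            if liability_rule == "strict" and random.random() < p_spread:
                cost += DAMAGE                      # Alice also pays Bob's loss
        total += cost
    return total / N_YEARS

for rule in ("none", "strict"):
    for treats in (False, True):
        print(rule, "treat fuels" if treats else "do nothing",
              round(expected_cost(treats, rule), 3))
```

Even in this toy version, the qualitative effect Dietterich describes can appear: when ignition is rare enough, the expected cost of doing nothing undercuts the annual cost of mitigation under either rule.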

Carmen: Interesting. I do think there is this sense of AI as this kind of amorphous, probably evil thing that has no benefit to society. You seem to be saying that large-scale simulations can help us think about policies that solve complicated problems in ways we might not be able to without AI as a framework for this work. I guess one other question: Oberlin probably has close to one third of its students in the arts—not quite, but getting close. We have 500 students in the Conservatory. We probably have 300 students in what I would describe as the practicing arts—studio art, painting, theater—name the practicing arts. There’s a lot of concern in the creative space, particularly about the impact of AI on artists and their ability to have their work used and not appropriated in the wrong way, or about the ability of AI to create whole movies and storylines. What would you say about that space in AI?

Tom: It’s a fascinating set of questions. It looks like existing copyright law does not really protect artists in the way we might want them to be protected. Of course, there’s a long tradition in art of making copies of the work of other artists as a form of practice and learning. But if you try to sell those, that’s generally considered forgery.

I think we maybe need new law that says not only that the exact previous expression—the exact painting, the exact movie, whatever—is protected, but that, in some sense, the style deserves protection too. That’s going to be very hard to define, but the style of the artist should have some protection. If I can walk into your studio, take pictures of everything, and then produce 10,000 works in the same style, at a minimum I’m going to reduce the price anyone will pay. That just seems like a clear case of theft.

Right now, the law says if I’m trying to pass them off as your work, that’s illegal. But if I just say, “I like the style, so I made a bunch under my name,” that’s considered okay. I don’t think that’s okay. I’m less worried about AI systems, in a very simple push-button way, generating whole movies or news stories—at least with the current AI technology. It’s very statistical and exhibits this regression-to-the-mean phenomenon. It generally produces things that are pretty boring and vanilla—no offense to lovers of vanilla.

In fact, there’s going to be increasing economic or artistic value placed on original expression because the AI stuff is not going to be good enough. Now, AI in the hands of a good artist could become even more interesting. I’ve already seen interesting pieces. I follow a couple of artists on social media who use AI tools to design ceramics, which they then build and manufacture, and they’re really fascinating. There’s space here, and it will be very interesting to see how that unfolds.

Carmen: I think I heard someone say AI’s not going to replace the human endeavor, but humans who don’t understand how to use AI and its impact will have challenges. What do you think?

Tom: I feel like that’s too much of an AI-booster statement. People will be able to make fantastic art without knowing anything about AI. But some people will choose to—just as digital artists already use computer-based tools. What will be interesting is what they do with those tools and how they bring their humanity to it. I’m excited to see where that takes us.

Carmen: I certainly know we’re going to have to figure out how to think about AI on college campuses in all sorts of ways, and its impact can be wide-reaching. I’m excited about this year of exploration because I think it will help us come to better conclusions about our own thinking, and also the impact on the next iteration of teaching, learning, and careers. That’s one of the things I’m hoping to accomplish this year. So, could you talk a little about something your colleagues call “computing for a better world and sustainable future”?

Tom: I co-led two very big grants on what we call computational sustainability, and I’ve been involved in some other AI-for-social-good efforts.

Carmen: We haven’t heard people talk much about computational sustainability—AI for a better future. It seems a little lost in these conversations about AI. Could you help our audience understand what those conferences have been about and their purpose?

Tom: This area of computational sustainability was developed by Carla Gomes and myself—she led the project. She’s a faculty member at Cornell. Our vision is: how can we develop methods in computer science to promote sustainability in the natural environment and in our economic and social systems, within the Sustainable Development Goals framework of the United Nations?

We’ve found a wide variety of projects under this umbrella. Some have been like the bird migration or invasive species management problems. There’s been work on law enforcement for anti-poaching efforts—how do you use machine learning to model where poachers are going and predict where they’ll be? How do you design the routes forest wardens should take so they’re very unpredictable and have a higher probability of catching the bad guys?

We’ve also looked at materials science. Some of the biggest potential benefits of AI are going to come from applications in new drugs and new materials—for example, more efficient batteries, better energy transmission, all the things we need to decarbonize the economy. Those are areas where we’ve already seen AI techniques provide powerful tools for predicting the structure of proteins from their sequences—AlphaFold from Google DeepMind and Rosetta from the University of Washington.

Back in 1991, I was involved in a startup company trying to use machine learning for drug design. We were about 30 years too soon. Now there are many companies in this space, and they’re starting to see real success.

One project I’m currently involved with is in weather networks. If you look at a map of the weather stations around the world—ground stations that record temperature, rainfall, wind speeds—you see huge empty spaces in South America and Africa. At the same time, through the wonders of miniaturization and electronics, we can now build weather stations that are maybe twice the size of a Coke can, but 10 to maybe 100 times cheaper than traditional stations, and we have cellular data networks.

I’m involved in a project called TAHMO—the Trans-African Hydro-Meteorological Observatory—which hopes to operate a network of ground weather stations across all of Sub-Saharan Africa; our dream is 20,000 of them. We have 750 stations right now across 22 countries. It’s a nonprofit headquartered in Nairobi, with field technicians throughout those countries. My role, which is relatively minor, is to try to detect when a weather station needs a technician visit to clean it and replace broken components. It’s a statistical maintenance problem—but surprisingly difficult.
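
As a rough idea of what flagging a station for a technician visit might look like, here is a simplified sketch with invented thresholds and toy data. It is not the project’s actual method, which has to cope with far messier real-world sensor behavior.

```python
# A simplified sketch of one way to flag weather stations that may need a
# maintenance visit: compare each station's daily readings with the median of
# the other stations and flag persistent disagreement or "stuck" sensors.
import numpy as np

def flag_stations(readings, threshold=5.0, min_bad_days=7):
    """readings: array of shape (n_stations, n_days) of daily temperatures."""
    regional_median = np.median(readings, axis=0)            # per-day reference
    deviation = np.abs(readings - regional_median)            # per-station error
    persistently_off = (deviation > threshold).sum(axis=1) >= min_bad_days
    stuck = np.ptp(readings, axis=1) < 0.1                    # sensor never changes
    return np.where(persistently_off | stuck)[0]

# Toy data: 5 stations, 30 days; station 3 drifts, station 4 is stuck.
rng = np.random.default_rng(1)
data = 25 + rng.normal(scale=1.0, size=(5, 30))
data[3] += np.linspace(0, 10, 30)    # growing bias, e.g. a failing sensor
data[4] = 25.0                       # flat line, e.g. a dead sensor
print("stations to visit:", flag_stations(data))
```

A real pipeline also has to separate genuinely local weather from sensor faults, which is part of what makes the problem surprisingly difficult.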

Carmen: To solve, right? No, that is fantastic. For those of you who are regular podcast listeners, you know that this podcast takes its name from Michelle Obama’s description of Obies, Oberlin graduates and students, as people who “run to the noise.”

Tom, I guess I’m wondering what you would say to that statement. How do you, in your work or personal life—however you want to answer the question—run to the noise?

Tom: I’m always looking for ways that I can make a difference in the world that go beyond just publishing papers and solving lovely technical problems that are fun. One of the ways I’ve done that is by building a network—a sort of professional social network on my own campus and across the country—of people who know people who can introduce me to the right folks when I have an idea or see an opportunity.

So, when I saw an opportunity to think about wildfire, for example, I contacted someone on my campus, and she introduced me to two professors in the forestry school who were experts in that area. With the Africa project, that was more a sequence of coincidences. A colleague of mine, John Selker, a hydrologist by training, is one of the masterminds behind this Africa project. I said, “Oh, that looks cool—I bet I could help.” So I volunteered my time on that and saw what we could do. Actually, I’ve had three PhD students finish, all working on these weather network problems.

Carmen: We thank you so much for your time and for being one of those Obies our students can look to, to know that what they want to achieve is possible. We’ll be following you and your work—and I’m sure calling on you to help us think about the Year of AI Exploration at Oberlin. We’d love for you to come and speak and help us understand your work, so be on the lookout for another phone call from us.

Thanks for listening to Running to the Noise, a podcast produced by Oberlin College and Conservatory. Our music is composed by Professor of Jazz Guitar Bobby Ferrazza and performed by the Oberlin Sonny Rollins Jazz Ensemble—a student group created through the support of the legendary jazz musician.

If you enjoyed the show, be sure to hit that subscribe button, leave us a review, and share this episode online so Obies and others can find it too. I’m Carmen Twillie Ambar, and I’ll be back soon with more great conversations from thought leaders on and off our campus.

Running to the Noise is a production of Oberlin College and Conservatory.