STANFORD: How is AI affecting teaching and learning here?
Jon Levin: We’re in the early stages of adapting to AI tools. We’ve given faculty freedom to experiment. Some are requiring students to use AI for research, coding, or presentations. Some want to emphasize thinking and writing without AI, so there’s a trend back to oral exams and blue books. I think it’ll take time to find the right mix—ultimately we want students to use AI not as a substitute for deep thought, but as a way to augment their knowledge and skills.
There’s also a big question about what students need to learn in the age of AI. For instance, do they need more technical skills? I think it’s more likely that AI will reward the general skills—how to think critically, pose questions, communicate effectively—associated with a broad liberal education.
If I’m graduating from Stanford and need a job, how should I be thinking about AI?
There is a lot of speculation about how AI will impact entry-level jobs, especially in fields like finance, consulting, and technology, although it’s very early days. I have to believe that AI eventually will affect most jobs, just as computing did, so it will reward people with complementary abilities: technical skills to use AI and humanistic skills that can’t easily be automated.
It’s important to keep in mind that a Stanford education isn’t meant to prepare students only for their first job—although it should do that. The goal is to prepare students for productive and fulfilling careers and lives. I often reflect that my Stanford math major was the foundation for my career in economics, but nowadays I appreciate my English major because it was such good preparation for communication, empathy, and leadership.
Can you describe AI-related research at Stanford?
Stanford has been at the forefront of research into AI since computer science professor John McCarthy set up the Stanford AI Lab in 1963. Today, we have students and faculty working in pretty much every area of AI—they’re doing open research that’s intended for publication and creating tools that can be adopted by companies or other researchers. And the application of AI to scientific discovery is one of the most exciting things going on across campus, whether it’s neuroscience, molecular medicine, or the social sciences. Even compared with companies that are investing billions in AI, Stanford has an unmatched ability to bring together talented researchers across different domains and generate new ideas.
Every major advance in technology also needs to be incorporated in a way that improves human well-being, and that requires an ethical and societal perspective as well as a technological one. Stanford’s Institute for Human-Centered AI and Data Science is our intellectual hub to connect faculty and students thinking about AI and scientific discovery, AI and education, and how AI will affect labor markets, political institutions, and human interactions. We aspire to be the leading place in the world for these discussions.
Do you use AI in your own work?
I use it all the time. Not so much for writing—I need to write myself to formulate ideas, put them into a logical structure, and be precise. But if I’m going to talk to a group of faculty about quantum sensing, I’ll ask an LLM to teach me about it as if I were in third grade. That’s kind of magical.