Embracing AI in the Classroom
A computer science education has never been strictly about coding. More than ever, it has become about specification, testing, higher-level reasoning, and understanding how systems work.
I recently sat down with a student reporter from The Chicago Maroon to discuss how I’m incorporating AI tools into my teaching.
As someone who’s been teaching computer science for more than 20 years, I’ve watched large language models emerge as transformative tools in education. Below, I describe how I encourage students to think about these tools in my own classes, as well as how I’ve been using AI to enhance my own pedagogy and classroom delivery.
My Philosophy: AI as Normal Technology
My approach is straightforward: these tools exist, students will use them in their careers, so we need to teach them how to use them effectively. As I mentioned in the interview, there’s an excellent article called “AI as Normal Technology” that captures this philosophy perfectly. This is just another tool—like IDEs, Stack Overflow, or GitHub before it. It’s potentially more powerful, but fundamentally, it’s another instrument in the problem-solving toolkit.
My AI Policy
I fully encourage the use of AI tools in all my courses. Here’s the core principle I share with students: You must understand what you’re turning in. I’ll ask you about it on exams, in discussions, and during assessments. Use whatever tools help you learn and complete your work, but you’re responsible for understanding the output. I also require attribution—tell me what assistance you used.
The classroom is a simulator for the real world, and in the real world, these tools are everywhere. Ignoring them doesn’t serve our students well. We need to acknowledge the environment we’re operating in and teach accordingly.
Here is the language I currently include in the syllabus for my courses.
Use of AI Tools and Large Language Models
I acknowledge and even expect that you will use large language models (LLMs) and other AI tools for assistance in completing assignments. This is perfectly acceptable and encouraged. However, you are expected to understand the output these tools generate.
AI-assisted coding improves efficiency, but only if you understand its outputs. Just as I would not be able to evaluate an LLM-produced essay in philosophy if I didn’t understand the subject matter, these tools do you no good if you cannot understand, evaluate, improve, debug, and iterate on their output.
Therefore, you are allowed to use AI tools to help automate anything you are confident that you understand well enough to implement manually yourself. This approach best simulates the real world, where you will inevitably need to know how to use these tools well and to your advantage, but you will ultimately be responsible for understanding the software and code that you produce.
When using AI tools, please acknowledge their use in your submissions and be prepared to explain and defend any code or solutions you submit.
How I Use AI to Make Teaching More Efficient
I have also been using AI to improve the delivery of my courses. Here are the specific ways I leverage AI tools to enhance my teaching:
1. Automated Class Notes
I take transcripts of my lectures and use LLMs to generate summaries and notes for students. These get posted immediately after every class, giving students a resource they can reference while the material is fresh. For example, here’s the agenda.md file from my Machine Learning for Computer Systems course—an automatically generated “what did we cover” summary for each class session.
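The transcript-to-notes step can be sketched in a few lines. This is a minimal illustration, not my actual pipeline: the prompt wording, model name, and the OpenAI client usage are all assumptions for the sake of the example.

```python
# Hypothetical sketch: turning a lecture transcript into student-facing notes.
# The prompt text and model name below are illustrative assumptions.

NOTES_PROMPT = (
    "You are a teaching assistant. From the lecture transcript below, produce:\n"
    "1. A bulleted 'what we covered' summary.\n"
    "2. Key definitions and terms introduced in class.\n"
    "3. Any announcements or action items for students.\n\n"
    "Transcript:\n{transcript}"
)

def build_notes_request(transcript: str, model: str = "gpt-4o") -> dict:
    """Assemble a chat-completion payload for summarizing one lecture."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": NOTES_PROMPT.format(transcript=transcript)}
        ],
    }

# With an API key configured, the request could then be sent like this:
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(**build_notes_request(open("lecture.txt").read()))
# print(resp.choices[0].message.content)  # paste into the course agenda.md
```

Because the prompt is a plain template, students (or TAs) can inspect and adjust it, which keeps the process transparent.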
2. AI-Assisted Exam Creation
I use past exams and instruction files with LLMs to generate draft copies of exams. Even better—I share the prompts with students so they can generate their own practice exams based on past materials and covered topics. This empowers them to take control of their preparation. You can see examples of past exams and the precise instructions I provide to the LLM for exam generation in my course repository.
3. Discussion Preparation
For classes with reading responses, I use LLMs to help organize student submissions. This helps me identify themes, generate discussion notes, and prepare to lead more engaging class conversations. To date, I have organized responses thematically around three topics:

1. Requests for technical clarification, allowing me to focus my lecture on the details students actually want to hear.
2. Points that could be teed up for general discussion, useful for class discussion, breakouts, and the like.
3. Case studies that students want to hear more about, prompting me to do additional research on “current events” so I can bring more real-world examples to class.
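The grouping step above can be sketched as follows. This is a hypothetical illustration: the category names mirror the three topics described here, but the assumption that the LLM has been asked to label each response and emit a JSON array is mine, not a description of my actual workflow.

```python
# Hypothetical sketch: bucketing LLM-labeled reading responses for
# discussion prep. Category names and the JSON shape are assumptions.
import json
from collections import defaultdict

CATEGORIES = {
    "technical_clarification",  # details to address in lecture
    "discussion_point",         # tee up for class discussion / breakouts
    "case_study",               # current events to research before class
}

def group_responses(labeled: list) -> dict:
    """Group responses already labeled by an LLM (as dicts with
    'category' and 'excerpt' keys) into discussion-prep buckets."""
    buckets = defaultdict(list)
    for item in labeled:
        if item["category"] in CATEGORIES:
            buckets[item["category"]].append(item["excerpt"])
    return dict(buckets)

# Example: parse an (assumed) JSON array returned by the model, then group it.
llm_output = '[{"category": "case_study", "excerpt": "The BGP hijack last week?"}]'
notes = group_responses(json.loads(llm_output))
```

Keeping the final grouping in plain code, rather than asking the model to do everything, makes it easy to verify that no student response was dropped.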
4. Reducing Setup Friction in Assignments and Labs
AI has made it easier to assign more hands-on work. Much of the “scaffolding” for assignments—the tedious setup that doesn’t directly relate to the concepts I’m teaching—can now be automated. This means students spend time on the concepts that matter rather than configuration headaches.
The Real Risk: Not Understanding
When asked about the greatest risks of AI in the classroom, my answer is simple: people not understanding what they’re turning in. That’s why my approach emphasizes comprehension over prohibition. Students can use these tools, but they must be able to explain their work, defend their choices, and demonstrate understanding on written exams.
Computer Science Has Never Been About Coding
A CS degree has never really been about writing code—it’s been about abstraction, specification, testing, software engineering, algorithms, and data structures. If anything, AI makes these higher-level skills even more important.
We’re moving toward a world where people aren’t writing much code anymore. The entry-level role is no longer “junior developer”; it’s closer to project manager, or software engineer focused on testing and specification. The “grungy and inaccessible” parts are becoming automated, which means the differentiator is exactly what a computer science degree teaches: how to think about systems, specify requirements, test effectively, and reason about complex problems.
I believe the CS degree is actually becoming more important, not less. These models aren’t capable of higher-level reasoning—they’re good at automating tasks we can specify for them. Understanding how to specify, test, debug, and reason about software remains fundamentally human work.
The Bottom Line
We can’t deny the existence of technology. We have to teach within the world as it is, not as we wish it were. That means embracing AI tools, teaching students to use them responsibly, and focusing on the enduring skills that make computer scientists valuable: clear thinking, specification, testing, and the ability to reason about complex systems.
This post is based on an interview conducted for The Chicago Maroon in November 2025.