Students Aren’t Asking for Help Anymore. That Could Be a Good Thing.
AI is rapidly disrupting how students learn and how faculty teach. There is no single right approach. The only wrong approach is to ignore it.
Bespoke AI tutors and teaching assistants are here, and they are taking over functions traditionally performed by professors and TAs. This is a wake-up call.
A colleague recently shared some striking data from his course this quarter: page views on Ed Discussion down 65%, discussion threads down 48%, comments down 44%. TA office hours are ghost towns.
Other instructors in our department are reporting similar trends. While it’s important to distinguish between correlation and causality, the hypothesis is clear: Students are increasingly asking for help from LLMs instead of human instructors (specifically, professors and teaching assistants in office hours).
Some of my colleagues have responded with concern, a bit of alarm, and suggestions that we should run all assignments through GPT before releasing them to see what it spits out. One colleague noted that even plain Google searches now surface LLM-generated responses, making AI assistance essentially "unavoidable" (as if LLMs were a technology to be avoided).
I think this discussion needs careful rethinking and reframing.
The Uncomfortable Reality
LLMs are already good tutors and teaching assistants, and they are going to get better.
If students are consulting “bots” instead of humans, we should consider the uncomfortable reality that the way we used to do our jobs is potentially quite replaceable.
As students become more skilled at using these tools and evaluating their outputs—which is, in part, what we should be training them to do—the old model of “ask the TA how to fix your code” becomes less central to learning.
This presents both disruption and opportunity.
A Different Approach
In my courses, I’ve leaned into this shift rather than fighting it. I give students all of my old homeworks, midterms, and finals. I share full lecture summaries and transcripts. I even share the prompts I used to generate draft exams, so students can create essentially infinite practice material. I’ve written about my approach to AI in the classroom this year in a previous post.
I was really nervous about how this would work out. The result? Students loved it.
Just as many students came to my office hours as in the past, and the discussions were much more interesting. Students engaged at a higher level, asking deeper conceptual questions rather than "how do I get this to compile?"-style debugging questions.
I did discover that, with students allowed to use LLMs to assist with assignments, a couple of assignments were completed more quickly than in past years. Several were clearly too "AI-friendly": my earlier assignment design made it a little too easy for students to "phone it in" without engaging the learning objectives.
My response, "in the offseason," will be to tune the assignments and syllabus accordingly: probably adding a couple more assignments, asking students to think more deeply about the outputs, and testing the concepts I want them to learn more heavily on exams.
In short, we need to think hard about how to adapt learning objectives to these new realities, training our students to work with the tools as we prepare them for their eventual occupations.
And, as educators, we need to think hard about how to incorporate these tools into our own work. Doing so makes us better educators for two reasons: first, we can teach better having had direct experience with the tools; and second, we can use AI to improve the delivery of our content and streamline the logistics of teaching (and as anyone who has taught can appreciate, there are a lot of rote logistics ripe for automation).
Caveat: Your Mileage May Vary
It’s early days here, and nobody has the answers. In my courses, I treat my students as "co-learners" and partners in the journey. I bring value to the classroom, and AI can also bring value—to myself as an educator, and to the students as learners. The answers are not all clear, but one thing I think is clear even in these early days: The answer is not to ban or police the tools or deny their existence, but to think hard about what integrative approaches can help us achieve learning objectives, which themselves are ultimately going to evolve.
And finally, I’d caution against any prescriptive approach here, especially one that generalizes across all courses and contexts. Pedagogy, material, learning objectives, style of instruction, and general philosophy all matter. What works for a systems course may not work for discrete math. What works for a Master’s seminar may not work for an intro programming class.
But if the old indicators of engagement—Piazza/Ed posts, TA queues—are dropping, maybe we should ask whether those were ever the right metrics for learning in the first place.