Fighting back the tide of AI Learning Shortcuts, with AI

Any conversation on teaching with AI needs to start from "how do we learn?". Here I find Barbara Oakley's many books on learning (Mindshift, A Mind for Numbers, Learning How to Learn, etc.) to provide an honest answer: we learn "with effort," and lots of it! Low-effort tasks like listening to lectures, reviewing notes, and rereading notes are not going to give students the learning gains they need, at least not beyond passing an imminent exam that tests our capacity for word association. Actively recalling a lecture and readings by recreating notes, solving problems, asking questions, getting help when stuck, and rehearsing repeatedly over time would, however. These are all effortful. Most of Oakley's learning tips are not about eliminating such effort. Rather, they are about setting up the right environment (place and time) and, more importantly, the right mindset (motivation, and warming up with Pomodoro techniques) conducive to learning that lasts.

Now that we understand how effortful learning really is, we can talk about the impact of AI on education. Marc Watkins's post on the plethora of Ed-Tech AI tools promising to remove the effort from learning should ring alarm bells in the heads of many educators. There are two cases to explore here: one where educators are doing the right thing, and one where they are not.

The second case is widely discussed and is definitely the stance many struggling students take, and for good reason. Remember the days when you would hear teaching described along these lines: "It's like throwing pies at a wall and hoping something sticks"? Now I hear: "It's like aiming a fire hose at students and hoping they drink!" I can't shake the image of battered and drenched students from my head. As a computer scientist, I can relate. The research field is constantly evolving, with an ever-growing complexity (or incomprehensibility) that makes one envy Lacan's students. AI tools that summarize papers, hour-long lectures, and talks, and that answer questions, can help me get a handle on the hose and push it away, but I am completely aware of how deficient my learning is when I do so. Our fields have grown, our primary sources have multiplied, and we are trying to bring our undergrads up to the cutting edge: this ain't working. Let's simplistically (and incorrectly, for the sake of illustration) assume that our brains are a somewhat fixed volume, a cuboid, with its width representing different disciplines, its length the different subfields within a discipline, and its depth the concepts and techniques within each subfield. If we keep increasing the width and length, the depth's got to give. And even if we promote learning in ways that share concepts across fields and disciplines and allow our brains to succinctly codify knowledge, student motivation and time cannot keep up with what we are expecting them to learn. If we don't distill our fields into meaningful trickles that students can build on as they see fit, then they will turn to these AI tools to push away the fire hose. If I assign five primary source readings a week, I should not expect any deep grasp of those readings, and if students can get the CliffsNotes version of them from an AI tool, then that is fair game.
With that much assigned, a shallow, superficial understanding is all one can hope for. Which poses the question: why assign so much in the first place? If we aren't distilling our fields, then we are leaving it to our students to do so, in ways that boost the sales of these AI tools, and we can't fault them for not learning as we intended. The same argument extends to our writing assignments, labs, problem sets, and so on. For our students, using AI "shortcuts" is not about reducing effort but about surviving the term. Perhaps our great challenge as educators is not curtailing AI or other shortcuts, but taking on the responsibility of curation and pruning to ensure depth that matters.

This brings us to the first case, where we are doing the right things and assigning the right kind of learning work. In this world, I assign one primary source reading, engage my students critically with it over a reasonable period of time, and sufficiently motivate them so they believe in the value of learning the material. Here, the same AI tools that helped in the second case are detrimental to learning. A tool that does "the effort of learning" for students strips them of learning. We need to have conversations with students about how they should learn, to preempt these behaviors. These conversations must stress how to use AI tools appropriately, i.e., to set up the right environment and mindset to get the effort rolling. This can be difficult when there is an abundance of tools that aim to eliminate effort and a scarcity of tools that actually promote learning.

In joint work with NYUAD alum Liam Richards and Prof. Nancy Gleason, we built one such AI tool, ReaderQuizzer. To help students with active reading, ReaderQuizzer generates questions at one of two reading levels, comprehension or analysis, to help students stay on track and self-assess their reading on every page. This contrasts with other AI tools that answer questions asked of a reading or summarize it outright. Our tool keeps the effort with the student but helps guide and sustain it, akin to the 20-minute Pomodoro timer that gets students started on any learning exercise. For now, ReaderQuizzer is locked away as a research prototype, inaccessible to the many students who could benefit from it. The same is true for many possible AI education tools designed by educators who understand how learning works. Another tool my lab is building transforms syllabi into actionable learning tasks placed directly in your calendar.

Why are these tools locked away? First, institutions, unlike start-ups, are held more accountable from a legal perspective. Does a student uploading a reading to a tool like ReaderQuizzer constitute a copyright violation? Until a legal team settles this, higher education institutions may not wish to risk legal battles with publishers. Second, there is resistance to and fear of change: merely broaching the subject of open access to syllabi can lead to hours of heated faculty debate. Third, institutions don't have the IT and development resources to launch such tools: where to host the servers, who manages the subscriptions, which subscriptions to purchase, and so on.
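For the curious, the per-page, two-level questioning behind a ReaderQuizzer-style tool can be sketched in a few lines of Python. This is only an illustration of the idea, not ReaderQuizzer's actual implementation: the names (`build_prompt`, `LEVEL_GUIDANCE`, and so on) are hypothetical, and in practice the assembled prompts would be sent to whatever LLM backs the tool.

```python
# Illustrative sketch of per-page question prompts at two reading levels.
# Names and prompt wording are hypothetical, not ReaderQuizzer's own code.

LEVEL_GUIDANCE = {
    "comprehension": "Ask what the page states: key terms, claims, definitions.",
    "analysis": "Ask the reader to evaluate, compare, or critique the page's argument.",
}

def build_prompt(page_text: str, level: str, n_questions: int = 2) -> str:
    """Assemble one LLM prompt requesting n questions at a single reading level."""
    if level not in LEVEL_GUIDANCE:
        raise ValueError(f"unknown reading level: {level!r}")
    return (
        f"Generate {n_questions} {level}-level questions for the page below.\n"
        f"{LEVEL_GUIDANCE[level]}\n"
        f"---\n{page_text}"
    )

def prompts_for_document(pages: list[str], level: str) -> list[str]:
    """One prompt per page, so students can self-assess as they read."""
    return [build_prompt(page, level) for page in pages]
```

The point of the design is that the tool never hands the student an answer or a summary; it only produces questions that keep the effort, page by page, with the reader.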

Outside academia, the incentives are such that tools offering certain features, especially those that promise "shortcuts," will inherently attract more students. If there were a magic pill that kept one healthy and fit, would we take the pill or sweat through a tedious hour of exercise every day? When the side effects aren't obvious (you aren't truly learning) but the gains are immediate (you complete the assignment or pass the test), it's hard to make the right choices. Naturally, bad tools will flourish.

So what can be done? We can create in-house AI Ed-Tech sandboxes where legal and development teams provide support on a tool-by-tool basis rather than trying to resolve the grander issues for all tools at once. Uploading certain readings may violate copyright, but not all uploads do. A single subscription or LLM model may not work for every possible tool, but since money doesn't seem to be the primary resource constraint for now, let a hundred tools flourish, each with its own subscription or LLM model. What we shouldn't do is wait until the market is flooded with bad options that we can't pull our students back from.


