An Educator’s Guide to “AI”
Posted: 4 April 2026
Artificial Intelligence, if one takes the formal computer science definition, is a broad and legitimate area of academic inquiry.
In the common parlance, however, “AI” is almost universally used to mean chatbots powered by large language models (LLMs), such as ChatGPT. Such models are neither artificial nor intelligent. An LLM does nothing more than assemble words based on an estimated probability that they should appear in a certain order. It is not intelligent in any plausible meaning of the word, because it does not think. Although it performs many orders of magnitude more computations per second, it is no more “intelligent” than a Hewlett-Packard 9100A. It is also not really artificial, because its probability parameters are determined by training the model on real data, meaning works produced by humans. It does not and cannot function without vast quantities of human-produced data. Nearly all of that data was taken without the consent of its authors, which is one of the many controversies surrounding the technology’s development and deployment.
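To make concrete what “assembling words by estimated probability” means, here is a minimal sketch in Python. It uses a toy bigram model over a hand-made corpus, which is a drastic simplification of a real LLM (no neural network, no context beyond one preceding word), but the generation loop, picking each next word in proportion to how often it followed the previous one, is the same basic idea:

```python
# Toy illustration: at its core, an LLM picks the next token according to
# estimated probabilities. Real models use neural networks trained on
# enormous corpora; this sketch just counts adjacent word pairs in a tiny one.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Estimate P(next word | current word) by counting adjacent pairs.
counts = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def next_word(current: str) -> str:
    """Sample the next word in proportion to how often it followed
    `current` in the training data. No understanding is involved."""
    candidates = counts[current]
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

# Generate text one word at a time, exactly as described above.
word = "the"
output = [word]
for _ in range(10):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The output is locally plausible and globally meaningless, which is the point: nothing in that loop knows what a cat or a mat is, only which words tended to follow which.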
Or, rather, work that was produced by humans at first. An important and delightfully grim concept for LLMs is model collapse. Prior to the advent of LLMs, all meaningful text on the web was produced by humans. Once LLMs became widely adopted, “AI” slop started polluting the web. As a side remark, this is also a key part of why your Google search results keep getting worse (the other reasons being SEO, that is, search engine optimization, the tactics websites use to push their pages to the top of search results, and ad sales; all of this is driven by the growth-at-all-costs model, see e.g. Ed Zitron’s talk at Web Summit 2024). As successive versions of the models train on the data available on the web, more and more of that data is LLM-generated, which degrades model performance. That degradation takes two forms: less output diversity, meaning the LLM-produced text becomes more anodyne, and more errors and hallucinations, meaning the LLM-produced text becomes more wrong and more nonsensical. This is, in some respects, nothing more than the old saying, in circulation as early as 1957: “garbage in, garbage out”.
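The feedback loop can be caricatured numerically. The sketch below is a deliberately crude toy, not a claim about real training pipelines: it treats “training” as fitting a normal distribution to data and “generating” as sampling from the fit. Once each generation trains only on the previous generation’s output, with no fresh human data to re-anchor it, the estimated spread, i.e. the output diversity, shrinks:

```python
# Caricature of model collapse: each "model generation" is a Gaussian fit
# to samples drawn from the previous generation, and the next generation
# trains only on that synthetic output. The maximum-likelihood fit slightly
# underestimates the spread, and sampling error compounds on top, so
# diversity shrinks generation after generation.
import random
import statistics

random.seed(1)

N = 25  # samples per generation; small on purpose, so the drift shows quickly

# Generation 0: "human" data, mean 0, standard deviation 1.
data = [random.gauss(0.0, 1.0) for _ in range(N)]

for generation in range(1, 41):
    mu = statistics.fmean(data)      # "train": estimate the distribution
    sigma = statistics.pstdev(data)  # MLE spread, biased slightly low
    # "generate": the next training set is entirely model output.
    data = [random.gauss(mu, sigma) for _ in range(N)]
    if generation % 5 == 0:
        print(f"generation {generation:2d}: std dev = {sigma:.3f}")
```

The standard deviation trends downward run after run: garbage in, slightly blander garbage out.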
Model collapse is not inevitable, and LLM researchers are investigating potential mitigations. However, the rapid and uncritical adoption of LLMs makes it more likely, and there are already signs that models have stopped improving. That plateau may mark an inflection point beyond which model performance degrades, or beyond which technical improvements to the models are held back by lower-quality training data.
Even if we aren’t seeing the beginnings of model collapse, it is clear that the models do not work well. Many are familiar with the phenomenon of “AI” hallucinations. The BBC reported that “AI assistants misrepresent news content 45% of the time – regardless of language or territory”. The models also have a tendency to reinforce user beliefs, a phenomenon known as AI sycophancy, because one of the fine-tuning methods, reinforcement learning from human feedback (RLHF), rewards outputs that human raters approve of. This is a major challenge in the legal profession, for example.
The corporatization of academia has led to many ills, the rushed and uncritical adoption of “AI” tools chief among them. Studies have demonstrated cognitive decline in the users of LLM-powered chatbots. Further studies have demonstrated that adult users are losing skills they had previously acquired, while children are no longer acquiring those skills at all.
As tech reporter Ed Zitron observes, “the point of homework isn’t to produce homework, there isn’t some gremlin we feed the homework to; the point is to learn”. Educators know this. Administrators seem not to. It is incumbent on us to resist the whims of administrators, who are highly susceptible to fads. Nevertheless, a cautious and circumspect educator could reasonably ask: risks aside, are there legitimate use cases for “AI” in the classroom? I guess that depends on your perspective.
You should not use an LLM to generate your own lecture material. In addition to making you dumber (and since you’re an educator you clearly do not want to be dumber), it is sufficiently error-prone that the amount of time you spend fact-checking and correcting the output will likely cancel out the time you save by not writing your own material. Studies have already demonstrated minimal changes in productivity at companies, and net-negative efficiency for coders and software engineers.
You should not allow students to use “AI” to help them with their assignments, for the obvious reason that, as stated above, it leads to cognitive decline and lost opportunities to develop skills. Furthermore, in addition to the harm it causes students, there is no benefit. Using a chatbot is not a skill any more than reading a website or scrolling through social media is a skill. Do not be bullied into pretending it is.
Should you include LLM-generated material as part of your class? This is, at least in part, a moral question. Do you want to normalize the use of “AI” among your students? Consider the intellectual theft at the core of model training. Consider the vast amounts of environmental destruction wrought by data centers. Do you think this is okay? Do you want your students to think it’s okay?
Consider now the moral hazard of outsourcing your thinking to an “AI” system and forgetting to update its data, a failure mode that leads to, for example, a targeting system telling pilots to drop bombs on a school full of little girls. This particular case is not a failure of the model itself; rather, it is a failure of the users to supply it with correct, up-to-date data. But the cognitive surrender that “AI” usage creates makes this kind of failure more likely, and ever more so over time.
Is this the future you want? Is this the future you want for your students?