A Student’s Guide to “AI”
Posted: 17 April 2026
Academic Integrity
As a student, you know you’re not supposed to cheat. Using an “AI” tool to do your homework is no better than paying someone to do it for you; in fact, it is much worse, for reasons we’ll discuss shortly.
As technology reporter Ed Zitron says, “The point of homework is not to produce homework. There isn’t some gremlin we feed the homework to. The point is to learn.” Indeed. The point of homework is not to punish you, and it’s certainly not to merely occupy your time. If you are struggling to understand the intent behind a particular assignment, you can ask your teacher about it, but you should never assume it’s just pointless busy work—that is essentially never the case.
While lectures are important because that is how the information is delivered, it is in doing the homework that actual learning takes place. Knowledge and skills come from the process of doing things yourself. You intuitively know that you can passively watch videos about soccer or piano all day long, but you will not get good at either unless you put the work in and practice. In the same way, you will not learn critical thinking without sitting down and organizing your thoughts through writing, and you will not learn quantitative reasoning without working through the math on your own.
It is absolutely, completely fine to ask for help when you’re struggling. You can and should always ask your instructor and, better yet, work with classmates whenever the assignment allows you to do so. Students often learn the most when discussing things with each other.
Self-Care and Social Responsibility
You may or may not be aware that, while Artificial Intelligence is a real discipline and a legitimate area of academic inquiry, what most people mean when they say “AI” is products like ChatGPT, Claude, and Copilot: chatbots powered by large language models (LLMs). An LLM is neither artificial nor intelligent. All it does is assemble words (or, more precisely, “tokens”) in a certain order based on probability. Those probabilities are determined by “training” the model on real data, which almost universally involves dramatic, large-scale theft of people’s work: books, articles, encyclopedia entries, websites, social media posts, and more, all taken without the consent of the authors. Indeed, this “AI” is absolutely not possible without mass theft.
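To make that concrete, here is a minimal sketch in Python of the core mechanism: given the words so far, pick the next word at random according to a probability table. The table below is entirely made up for illustration; a real LLM’s table is unimaginably larger and is learned from the training data described above, but the principle is the same.

```python
import random

# A toy "language model": for each current word, a made-up probability
# distribution over possible next tokens. These numbers are invented for
# illustration; a real LLM derives its probabilities from the text it
# was trained on.
toy_model = {
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_token(context):
    """Pick the next token at random, weighted by its probability."""
    dist = toy_model[context]
    return random.choices(list(dist.keys()), weights=list(dist.values()))[0]

# Generate a short sequence, one probabilistic guess at a time.
token = "the"
sentence = [token]
while token in toy_model:
    token = next_token(token)
    sentence.append(token)
print(" ".join(sentence))  # e.g. "the cat sat down"
```

Notice that nothing in this loop understands anything. It only records which token tends to follow which, which is exactly why the source of the training data matters so much.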
Using “AI” does a number of things to you, all of them bad. It leads to various forms of cognitive decline, including memory loss, cognitive offloading, and cognitive surrender. Depending on your age, it takes away skills you used to have, prevents you from developing new ones, or both. In rare cases it leads to AI psychosis; here are two general summaries (content warning for both): Psychology Today and Psychiatry Online.
Okay, so, there are many risks to your health, safety, and wellbeing when you use “AI”. What are the benefits? In the short term, you may feel overwhelmed with assignments and conclude that the obvious benefit is the time it saves you on your coursework. In the long term, though, it hurts you (for all the reasons just stated) and does not help you (because you have not learned anything). That is to say, there is no benefit. Why would you choose to hurt, and potentially endanger, yourself like this while gaining nothing in return? I am here to tell you that you should not.
Let’s now take a moment and think about others and the world around us. The data centers needed to run LLMs are causing vast amounts of environmental destruction. For the people unfortunate enough to live near them, there’s a wide range of additional harmful effects, such as increased electricity bills, noise pollution, and excessive heat. When we use “AI” chatbots for any purpose, we are implicitly accepting these data centers and all the environmental and social harms they cause.
This next one is a little more indirect, but also very grave. Any computing system, LLM or otherwise, is only as good as the data you give it: garbage in, garbage out. Over-reliance on “AI”, and the cognitive surrender it engenders, make it more likely that people will fail to keep model data up to date. That leads to, for example, an LLM-powered military targeting system telling pilots to drop bombs on a school full of little girls. When we use “AI” chatbots for any purpose, we are implicitly accepting their use in military operations and, therefore, any “collateral damage” that comes with that usage.
So, the next time you think about using “AI” to help you with your homework (or to do anything else for that matter), ask yourself these questions:
- How much mental (and possibly physical) harm am I willing to subject myself (and possibly others) to in order to answer this question?
- How many people’s work am I willing to steal to answer this question?
- How many acres of rainforest am I willing to burn down to answer this question?
- How many school children am I willing to murder to answer this question?
I don’t think it’s worth it, and I don’t think you do either.