Is “AI” Inevitable?
Posted: 24 April 2026
Introduction
Some of our colleagues are saying “AI” is inevitable. There are several problems with that claim. The first is what is meant by “inevitable”. Setting aside for a moment the debate around historical determinism, we can take a charitable interpretation of inevitable as “very likely to happen based on the current information we have”. There is also a definitional problem: what do we really mean by “AI”? The corporate branding of chatbots and other plugins powered by large language models (LLMs) as “AI” is a very clever sleight of hand. LLMs are neither artificial, as they require vast amounts of human-produced training data, nor intelligent, as they are nothing more than algorithms that guess which words should appear in which order, based on probabilities derived from their training data.
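To make that description concrete, here is a toy sketch of “guess the next word from probabilities learned over training text”. This is a bigram model, an illustration only and emphatically not how production LLMs are built (those use neural networks over subword tokens), but the core move of sampling the next word in proportion to observed frequencies is the same; the training sentence and all names here are invented for the example.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training text,
# then generate by sampling the next word in proportion to those counts.
training_text = "the cat sat on the mat and the cat slept on the mat"
words = training_text.split()

follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def next_word(prev, rng):
    # Sample a successor of `prev` weighted by how often it followed in training.
    counts = follows[prev]
    choices, weights = zip(*counts.items())
    return rng.choices(choices, weights=weights)[0]

rng = random.Random(0)
out = ["the"]
for _ in range(5):
    out.append(next_word(out[-1], rng))
print(" ".join(out))
```

The output is grammatical-looking word salad that mimics the training data, which is the point: there is no understanding anywhere in the loop, only counting and sampling.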
When someone says “AI” is inevitable, they probably mean something like “chatbots and plugins powered by large language models are very likely to be widely adopted and used for a long time to come”. In other words, they are the next big thing in a long line of durable human innovations, from electricity to the lightbulb to the integrated circuit to the computer to the internet and the world wide web.
Part 1: it’s a scam
My response to that is a few simple questions. Who told you that? Why did they tell you that? What do they stand to gain from convincing you of that?
The main answer to that is the big tech firms and especially their CEOs, whether the primary “AI” labs like OpenAI and Anthropic, or the hyperscalers like Amazon, Google, Meta, and Microsoft. That is, you know, the very people trying to sell you a product that they’ve already spent hundreds of billions of dollars developing and deploying despite nobody asking for it and increasing public resistance to it. They want you (and especially your employer) to believe the hype. They want you (and especially your employer) to buy the products. They want to make money.
The secondary answer is an unaware and indifferent media. Claims from CEOs with a clear and obvious vested interest in hyping these products are repeated uncritically and go largely unchallenged. This is the same media that, aside from a small minority of prescient voices, foresaw neither the dot-com bubble of 2000 nor the housing bubble of 2008. Only a tyrant denigrates the free press, but surely we can recognize its shortcomings.
Now that we know who’s trying to convince us of this claim and why, let’s look at some counter-possibilities. It is easy to forget, because many of these things are eminently forgettable, but plenty of products and services have been touted by big tech CEOs (and the compliant media) as the next big thing: cryptocurrency, blockchain, the metaverse. Just the other day, in fact, I was chatting with a brilliant historian who, quite rightly, had never even heard of the metaverse. This was a passion project of none other than Meta CEO Mark Zuckerberg, who spent about $100 billion on it before shutting it down because nobody cared: less than $20 billion in revenue against losses in excess of $80 billion. Likewise, fewer than 3% of Americans have ever made a payment in cryptocurrency. It is used primarily by scammers (as a speculative asset in, e.g., pump-and-dump schemes and the related phenomenon of memecoins) or by terrorists and criminals (as a means of concealing their financial transactions). As Nobel Laureate in Economics Paul Krugman has said, cryptocurrency has “essentially no legitimate use”.
Blockchain was developed as a means of enabling bitcoin transactions without a bank. It was long touted as a means of revolutionizing distributed computing. That has never materialized; blockchain is virtually unused outside of cryptocurrency. What has happened instead is that companies have used it as a means of scamming investors. For example, on December 21, 2017, Long Island Iced Tea Corp changed its name to Long Blockchain Corp, and its share price rose by 500% the day the name change was announced. Three people were later charged with insider trading. Similarly, on March 30, 2026, Allbirds Inc (a shoe company) announced it was “transitioning” to an AI company. The stock price soared 600% following the announcement, despite the fact that said announcement had the rather hilariously tepid wording that the company would “seek to acquire” equipment for a data center (with a $50 million investment from an undisclosed “institutional investor”, which, LOL). You know, eventually, maybe. We haven’t quite finished it, but we’ve almost started working on it.
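For readers who have never looked under the hood, the “chain” in blockchain is not exotic: it is a sequence of records where each record commits to the cryptographic hash of the one before it, so tampering with any record invalidates everything after it. A minimal sketch follows; it is illustrative only (real blockchains add proof-of-work, digital signatures, and peer-to-peer consensus on top), and the transaction strings are made up.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents (stable key order so the hash is deterministic).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    # Each new block commits to the previous block's hash.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify(chain):
    # A chain is valid if every block's prev_hash matches its predecessor's hash.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(verify(chain))                     # True
chain[0]["data"] = "Alice pays Bob 500"  # tamper with history
print(verify(chain))                     # False
```

That is the whole trick: a tamper-evident append-only log. Useful in a narrow way, but a long distance from the revolution that was promised.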
Is “AI” the next big thing? Or is it just another Silicon Valley tech-broligarch scam? In my opinion, it is the latter and, furthermore, there is a massive “AI” bubble that will burst, just like the dot-com and housing bubbles before it. One can only hope that leads to a much-needed reckoning for the Epstein class. The previous bubbles did not. But hope springs eternal.
Part 2: if it’s not a scam, it is something much worse
As has been extensively documented, “AI” is a psychogenic, social, environmental, and geopolitical disaster. Suppose “AI” really is inevitable and that, in the absence of a major social movement, it will be everywhere before we know it and it will stay there for a long time. If that’s the case, then that major social movement is an urgent moral imperative.
Were nuclear weapons inevitable? Again, setting aside debates about historical determinism, it certainly seems that nuclear weapon development was likely given the combination of genuine human curiosity as well as our inclination towards finding ever more creative and destructive ways of killing each other. Nuclear weapons are certainly everywhere these days and, thanks to multivalent failures of US government policy (both in its failure to uphold its commitments under the Budapest Memorandum over the last four years as Russia continues to wage war against Ukraine and in its catastrophically dumb decision to wage a war of choice against Iran), the number of countries seeking and developing nuclear weapons is likely to increase. Is that a good thing? Does that make the world a better place? Does that make us smarter, healthier, safer?
The answer is no. That is to say, even if we think something is inevitable, that does not mean it is good and that does not mean we should passively accept it. Sometimes you have to fight the good fight even if you think you’re going to lose.
The proliferation of “AI” such as it is now will lead to vast amounts of environmental destruction, more atrocities and war crimes, more social ills, all the while causing people to lose skills and suffer cognitive decline as they surrender their thinking to machines. All for the benefit of… what, exactly? Something that does your homework for you? Something that summarizes emails or messages or articles that are—apparently!—not important enough for you to actually read? Fake images of dogs in helmets? Are you fucking kidding me? The benefits are not real benefits and the costs are enormous and morally monstrous.