AI is for Losers
A Manifesto
In the 90s, being called a loser was a dramatic thing. Growing up means having to confront some of these habits of labelling others, and it’s been years if not decades since I called anyone a loser with any seriousness. But I have decided that it is now the best way to describe technocapitalism’s latest and perhaps most insidious way of making the rich even richer: generative AI.
Admittedly, the environmental impact of generative AI is already so detrimental that all living beings on the planet are losers here. It is exacerbating the losses we are already facing in the wake of climate change. Bearing this overarching loss in mind, I contend that there are three groups of losers relative to AI: 1) the group of billionaire and wannabe-billionaire losers who have decided to spend their days making other people’s lives worse because of their own personal shortcomings; 2) the group of white-collar workers who are afraid to resist generative AI for fear of similar societal rejection, and who by their desperate desire to appear “with it” are paving the way for the destruction of their own industries and livelihoods; and 3) younger folks whose education is now being given over to the first two groups, ensuring that any skills and knowledge they acquire in the systems of education into which they are forced—either by law through primary and secondary school or through economic pressures in college—will be forever beholden to the technocapitalist losers in the first group.
Elon Musk is a loser. He spends his one precious life attempting to invent and spend his way into coolness. His latest desperate attempt was to do everything in his power to elect Donald Trump and usher in what is quite possibly the end of American democracy. The inevitable breakup of this bromance from hell only speaks to the loserdom at the heart of both the Trump regime and the Elon empire: insecure men with too much money who have decided to use the influence that affluence provides to target people whose lives are already difficult: people of color, women, queer folks, immigrants. It is no surprise that Elon was a massive part of OpenAI, the corporate entity responsible for ChatGPT. This, of course, was before he had a falling out with Sam Altman, OpenAI’s CEO. (Are we seeing a pattern here?) Altman is another loser who, despite having every advantage possible in the American privilege game, didn’t finish a bachelor’s degree, yet feels qualified to be in charge of a cultural force that is undermining the entire educational enterprise. Altman seems to spend his days ominously guessing what professions his AI technology will render obsolete. At this point, it doesn’t even matter if his predictions are correct; they are enough to steer the discourse toward all sorts of excitement and panic about AI, giving it more power than it deserves. Why should it surprise anyone that these men—incapable of academic success that requires one to read things and write coherently about them—are at the forefront of products that are day by day attempting to destroy education?
Enter the next, and perhaps saddest, group of losers: the professional class who did manage to earn degrees but who have decided that it’s worse to be called a luddite than to hasten the end of education. I am most interested in my colleagues in higher education, whose sad attempts to integrate generative AI into the university are so pathetic to watch that I almost feel sorry for them. They see AI as another “tool” for the pedagogical toolbox. They will present at faculty development days, hold and attend workshops, and trot out new AI products to use in the classroom. It’s the 2020s equivalent, in their minds, of when we discovered YouTube as a helpful way to introduce video content into the classroom. Like YouTube, they will argue, generative AI is the newest way to update their disciplines in the technological landscape. The most baffling rhetorical move in these settings is the argument that students will have to use AI in the future, so we somehow have to include it in the curriculum. What, exactly, should I be teaching my students in humanities classes? AI products are already developed in a way that makes them as user-friendly as possible: enter your questions into a chat box not unlike the ones you use every day, and receive responses that are tailored to tell you information that is presented at best in the most banal way possible and at worst in a way that feeds your ego so that you come back for more, tricking your brain into thinking you’ve had a positive social interaction. There is much to say here about ethics and psychology, but I’m more interested in curriculum: what am I supposed to be excited to teach specifically were I to “integrate” AI? These AI-apologists will launch into a paltry defense, claiming that students will need to know how to evaluate AI output. What a ridiculous argument: infuse your entire education (and maybe entire social life) with a product that you are then expected to critically evaluate. When will students learn the skills required here? These skills are not only thwarted by AI use in the classroom, they are actively stunted so that the losers above can make sure people are as addicted as possible, as early as possible, for the maximal profit.
Even more sadly, many of my colleagues are advocating for AI in the other aspect of academic life: research and writing. I have witnessed professional academics—people with actual doctorates in their fields—give academic papers at conferences that include a statement about how they used AI to write it. These people, in my mind, are losers. They are losers insofar as they are actively participating in the decimation of their own profession. By giving academic work—writing, lectures, presentations, and more—over to generative AI, they are materially contributing to the training of large language models that are intended to replace them. It is baffling to me how they cannot see that whatever they gain from using AI (more time for committee work? more banally structured sentences?) means losses in the long term. They are enthusiastically contributing to the demolition of the entire project of academic life, namely the production of knowledge and the participation in discourse communities. Generative AI cannot produce knowledge (yet); it can only mimic knowledge production. To most lay people (read: non-academics), there is no discernible difference between actual knowledge production and the appearance of knowledge production. Analogously, there is no important difference for many non-artists between AI generated images and art. Like art, the academic life is, at least in the United States, opaque to most outsiders. This opacity results in suspicion, confusion, and even resentment. The current descent into authoritarianism in the United States is driven in no small part by a rhetorical decimation of the category of “expertise.” This came to a deadly climax during the height of the Covid pandemic: alarming numbers of Americans decided they knew more than the nation’s medical establishment about infectious disease. I say climax but really we’re in the midst of a hellish plateau wherein the highest offices of government that should be staffed by professionals with advanced degrees and expertise in their fields are instead occupied by people appointed for their ideological loyalty. The very notion of expertise—arguably the central idea that holds all academic disciplines together—has been gradually eroded, perfectly making way for generative AI.
Decades of growing anti-intellectualism among the American public have now landed squarely in the ChatGPT text box. For years, academic work has been increasingly scrutinized and devalued according to the standards of capitalist production and consumption. This happens in subtle ways during college tours as majors are described to prospective students and their families according to cost-benefit analysis. Not so subtly, departments or whole universities are closed based on budget calculations with little regard for the now-absurd notion that a functioning society might actually need things like art, literature, or history. Boiled down to a cheap commodity or a relic of classical education, academic writing is ripe for imitation. Most of us who write for a living know that AI writing is a cheap imitation. Most people who are not in our bubble, however, do not. Generative AI is simply and technically that: an imitation of human writing. It is the death of the author in the most insidious way. Who cares who wrote it, as long as it sounds “good.”
But does it sound good? I suppose to most people it does. To students, it sounds fantastic. To us, their teachers, it sounds awful. It is like reading the back of a shampoo bottle in the shower. Information-heavy and devoid of any whiff of argument, reading AI-generated writing in the context of a classroom is like being punched in the face by a toddler. “Well, they’ll have to learn how to punch people in the face in the workplace,” say the losers, “so it’s important that we let them punch us in the face, as long as we are really clear about the fact that we’re asking them to do it!” To have my colleagues insist that my vocation is now to read AI-generated writing on texts that I have devoted my life to understanding and discussing with others because billionaires insist on an economy that privileges it is beyond insulting. These losers refuse to admit that they are pawns for the ultra-wealthy. They’ve decided to give their own precious education and knowledge over to products that are designed not only to replace them as laborers but to make them non-entities.
Our students are the newest and least powerful losers. Thanks to the professional class of losers above, they are being robbed of the basic skills that could make them critical thinkers. They are being taught (forced?) to use generative AI at younger and younger ages, as if there is some grand analogy here to learning how to type or how to use a calculator. That’s one of the favorite images of pro-AI losers: AI is a tool, just like a calculator! They never seem to mention, of course, that calculators are strategically inserted into childhood education relative to the child’s basic skills of arithmetic. Nor do they seem to notice that calculators are not little data machines whereby everything I input gets used to train the machine for free, making the machine’s investors even richer with every entry. To date, I also don’t know of anyone who is using a calculator for therapy, romantic relationships, or parenting advice. The anthropomorphic mimicry embedded into the very fabric of generative AI is a matter of values. The people who developed this technology mean to make lots of money off of people mistaking it for human. The younger we can force people into this mistake, the more reliant they will be upon it. The professional class of losers, especially educators, is making the same mistake they always do: assuming that the younger generations are like them. They assume young people will be able to see the flaws in AI, but somehow want young people to do this without any of the education or critical thinking skills that AI is circumventing. They assume AI will be “used” in a way that will be appropriate to a given situation without students’ learning anything about what is or isn’t appropriate.
The sad irony of this youngest group of losers is that the decades of consumerist rhetoric about education had the pretense of being for their future financial success. We have been telling people for years that something is only worth studying if it gets you a high-paying job, thus rendering the process of the degree (ya know, education) incidental to possessing it. It is a gross utilitarianism that finds its logical conclusion in AI-generated content. If you tell kids enough that all that matters is the product, why should we care how the product is produced?
The hard part—maybe the impossible part—is to get people to see why this is a loss. If I get the job I wanted, why does it matter how I got it? The loss, of course, is largely personal and private. Absent the ability to reflect on the purpose of making money in the first place, making money becomes its own end. Consumption does indeed bring pleasure, and that pleasure can stave off the question of life’s meaning for quite a while. But it comes for most of us, usually in the context of personal or communal tragedy. Sometimes in the midst of a difficult relationship. Or even in those small, deeply honest moments when pleasure ceases to dominate and the little voice that consumption tries to silence whispers in the human heart: What is the point?
The more insidious loss concerns all of us together. A growing dependence on the AI products put forward by technocapitalists leaves us susceptible to propaganda in unprecedented ways. The architects of liberal democracy assumed, as a given, a freedom of thought that expresses itself through assembly, press, religious practice, and elections. What they failed to anticipate (who could blame them?) was the development of technologies that would maintain the expressions of thought but circumvent the human ability to form them free of influence from the rich. It is, ironically, the technocapitalists who now own all of the means of (knowledge) production, so much so that whatever people think they may “know” is simply the product of generative AI created from the start to make people dependent on a product that seeks their own demise.


Now that I'm retired and no longer trying to save mankind, I can watch passing events like a disinterested spectator.
My first thought upon encountering AI was "the earliest and main application of Sparky will be to fire employees", which explains the vast amount of money and hype invested in an unproven technology. Right now, AI is mainly a reflection of the humans who train it; witness Elon Musk's NaziGrok/MechaHitler and its insistence on consulting Musk's political ideology before formulating an answer.
Sparky falls on its face hardest when confronted with questions involving hard technology (STEM). It's clear it skipped the science lectures, to the point of being dangerous.
Exhibit A: an exchange on Reddit from someone who asked ChatGPT how to clean an aluminum pan. Sparky said, just put lye in the pan with some water and heat it up. Well, if Sparky hadn't slept through the class on Group III elements, he would know that aluminum and lye react violently. Without waiting for external heat, the lye solution quickly boils, and vast amounts of explosive hydrogen are emitted.
If you are lucky, the bottom won't fall out of the pan, dousing you with boiling lye. (Fun fact: Drug cartels use boiling lye to dispose of corpses.)
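For anyone wondering why the reaction is so violent, the textbook balance for aluminum in sodium hydroxide solution (my addition here, not part of the original Reddit exchange) runs roughly:

2 Al + 2 NaOH + 6 H2O -> 2 NaAl(OH)4 + 3 H2 (gas)

The reaction is strongly exothermic, which is why the solution boils without any help from the stove, and the hydrogen gas is what turns a dirty pan into a kitchen hazard.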
Some academics, plagued by Sparky-written term papers and theses, have searched for solutions. AI-sniffing programs are popular, and notorious for false accusations against innocent students. Instead, some professors merely ask questions of students ABOUT THEIR OWN WORK. It appears that students who get Sparky to write their garbage don't bother to read the output before turning it in. Otherwise, why would they use AI at all, if they took their education seriously?
Still watching the advance of AI with morbid curiosity.
This is way more sophisticated than my analysis (I am just a lowly adjunct, having spent most of my life in clinical practice rather than education) but I do try to tell my students some of these things. Sadly, they do not care: All they see is the shortcut.
Students have of course always sought shortcuts but this one is probably the most seductive and destructive I have seen. In 20 years, that may not be the case (think Google, Wikipedia). Who knows? In the meantime, though, the uncritical adoption of every shiny new thing is fraught with the same kind of danger as that shortcut through the woods our parents warned us about.