7 Comments
Dangerous Bill

Now that I'm retired and no longer trying to save mankind, I can watch passing events like a disinterested spectator.

My first thought upon encountering AI was that the earliest and main application of Sparky would be to fire employees, which explains the vast amount of money and hype invested in an unproven technology. Right now, AI is mainly a reflection of the humans who train it: witness Elon Musk's NaziGrok/MechaHitler episode and the model's insistence on consulting Musk's political ideology before formulating an answer.

Sparky falls on its face hardest when confronted with questions involving hard technology (STEM). It's clear it skipped the science lectures, to the point of being dangerous.

Exhibit A: an exchange on Reddit from someone who asked ChatGPT how to clean an aluminum pan. Sparky said to just put lye in the pan with some water and heat it up. Well, if Sparky hadn't slept through the class on Group III elements, he would know that aluminum and lye react violently. Without waiting for external heat, the lye solution quickly boils, and vast amounts of explosive hydrogen are given off.

If you are lucky, the bottom won't fall out of the pan, dousing you with boiling lye. (Fun fact: Drug cartels use boiling lye to dispose of corpses.)
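For the record, the reaction Sparky missed can be sketched as a net equation (one common way to write the product, sodium aluminate; the exact aluminate species depends on concentration):

```latex
% Aluminum dissolving in aqueous sodium hydroxide (lye), releasing hydrogen gas:
2\,\mathrm{Al} + 2\,\mathrm{NaOH} + 2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{NaAlO_2} + 3\,\mathrm{H_2}\uparrow
```

The reaction is strongly exothermic, which is why no stove is needed to bring the solution to a boil.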

Some academics, plagued by Sparky-written term papers and theses, have searched for solutions. AI-sniffing programs are popular, and notorious for false accusations against innocent students. Instead, some professors simply ask students questions ABOUT THEIR OWN WORK. It appears that students who get Sparky to write their garbage don't bother to read the output before turning it in. After all, if they took their education seriously, why would they use AI at all?

Still watching the advance of AI with morbid curiosity.

Virginia

This is way more sophisticated than my analysis (I am just a lowly adjunct, having spent most of my life in clinical practice rather than education), but I do try to tell my students some of these things. Sadly, they do not care: all they see is the shortcut.

Students have of course always sought shortcuts but this one is probably the most seductive and destructive I have seen. In 20 years, that may not be the case (think Google, Wikipedia). Who knows? In the meantime, though, the uncritical adoption of every shiny new thing is fraught with the same kind of danger as that shortcut through the woods our parents warned us about.

Lane Anderson

Love this. Very much in line with my writing and thinking on the subject.

I’m only interested in AI in terms of how I can continue to teach my students to think critically and write without it—the same things I have always taught before AI got in the way.

Xraider

The only loser is the author, in the true sense of the word: they have spent years grinding social respect points and certifications in academia, only for it all to be useless because some cocaine-snorting apes with too much ego in San Francisco invented LLMs, which have rendered their social points and certificates worthless (nobody cares about them anymore) and will also likely make their job impossible to do or obsolete it outright, thus rendering their livelihood obsolete as well.

The cocaine-snorting apes who use AI will get all the social credibility and income; the author's destiny is the unemployment office unless they adapt to the new reality instead of writing these manifestos. Good luck!

Dangerous Bill

Here is a summary of AI healthcare failures, written by Google's AI:

"AI failures in healthcare stem from issues with poor data quality and bias, leading to diagnostic errors and misdiagnoses; lack of transparency and explainability in AI models, hindering error correction; implementation challenges, including outdated infrastructure and costs; and data security vulnerabilities, which can expose patient data or enable malicious attacks on devices like pacemakers. These failures result in misdiagnoses, patient harm, and eroded trust in AI-driven systems, often stemming from inadequate strategy, objectives, and expertise in development and integration."

Explainability?? Perhaps the English translation will have another word for it. Clarity, perhaps?

Tim Wipperman

Check out www.Humanable.com

One answer to AI in the music arts.

Define a creator as a human. Bots don't have Social Security numbers, etc.

Grant them a Humanable trademark to use on all their websites, socials, and B2B interactions.

A binary differentiation between bot production and human activity.

Cookie1850

This is a horrible manifesto, and a poor attempt at a proper analysis of why machine learning has captivated these so-called “losers,” as you mockingly put it. First, I would like to address the opening statement about “technocapitalism.” This form of capitalism, to put it simply, does not exist; it is just the natural evolution that capital has to go through with the breakthrough of new technologies. The future development and implementation of machine learning is just another instrument that will be used to support already existing capital and to create new capital. It’s foolish to believe that these “technocapitalists,” or some vague rich villain, will use machine learning as a way to further the wealth divide.

The short bit about the environment irks me only because this argument is used to demonize the individual who just so happens to use generative AI, or those who work in the related fields (wage workers), instead of using the real environmental concern to actually lessen the impact that AI has on the earth.

To refute your point about the “losers” involved in the intercourse of machine learning: it’s not just some group of “billionaires” who are “forcing” AI into the workplace. Instead, it’s a process that involves the industrial, or in this case tech, bourgeoisie expanding their capital by contracting the second group in this process, the petit bourgeoisie (a subset of them who aren’t opposed to having their relation to capital abolished), who have the means to build and see to the overall support of these data centers; and lastly, the proletariat, who use their labor-power to construct and maintain the data centers. Your view of the situation is sickly liberal, which is a flaw with the rest of this so-called “manifesto.”

I will skip over the pointless rant about certain members of the bourgeoisie and focus on the whitewashing of this country’s worthless attempt at democracy. It’s eye-opening to see liberals of all shapes and sizes whine about democracy being dead in a nation that never had a proper one to begin with. Do you liberals ignore how elections were handled through the majority of this nation’s history? Or how many democracies we overthrew, not just in the Americas but also in other parts of the world? All this talk about how the degenerate Trump “ruined” democracy is just bullshit. This worthless nation never had democracy.

Education-wise, the use of AI is new and hasn’t had proper time to develop and progress to the point where it can be safely integrated within schooling to aid student learning. This is the only part I agree with, and we as a society should reinforce critical thinking skills to better prepare students for the future ahead.
