What You Can Do To Hasten The End of AI

My inbox is littered with anti-AI articles I have sent myself so that they are handy to share with people who are AI-curious and can be persuaded to stop using it. It's about time I created a post where I can store my links. I'm not going to explain why you should stay away from AI. Just trust me; it's the worst thing ever. If you want to learn more, read If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares. Yudkowsky was an early AI programmer and advocate who is now fully against AI. Another front in that battle is being led by journalists; the struggle over AI in journalism is escalating. There's also Resisting AI: An Anti-fascist Approach to Artificial Intelligence by Dan McQuillan.

If, like me, you want less AI in your life, here are some tips:

  1. Stop using it. Duh. But it can be hard. Every search engine and software program is forcing AI on us. Consumer Reports has published How to Turn Off AI Tools Like Gemini, Apple Intelligence, Copilot, and More.
  2. Don't require your students or employees to use it. Some teachers are banning AI, while others are creating AI-resistant curricula. MIT has a helpful guide for detecting AI-generated work from students. If you are in a union, you should be thinking about ways to incorporate AI restrictions into your contract: for example, requiring that all AI usage be subject to bargaining, that no job losses or layoffs will result from the use or adoption of AI tools, that the contract itself won't be written by AI, and that disputes won't be adjudicated by an AI neutral. Those are just some ideas off the top of my head; go wild coming up with your own. Just don't use AI to do it.
  3. Tell your legislators you don't want to pay for the data centers' energy bills. See Consumers End Up Paying for the Energy Demands of Data Centers, How Can Regulators Fight Back? In California, we've only gotten as far as requiring the public utilities to conduct a study of data center energy usage, but other states have started requiring that data centers pre-pay their energy bills.
  4. Know it when you see it. This one is almost impossible, but University of Pennsylvania researchers are trying to create training tools that help people learn how to detect AI. Real or Fake Text is a game that challenges you to discern whether something was made by AI or not. It's very hard! Penn researchers also recommend that you take a look at This Person Does Not Exist.
The website thispersondoesnotexist.com uses AI to instantly generate photorealistic faces of fictitious people. Each time you refresh the page, a new face is produced. Can you identify characteristics of these faces that indicate they are AI-generated? Sometimes the teeth are a giveaway, or sometimes it’s the reflection in the eyes or the background behind the person. 

Our friends to the north (Canadians) have created this guide, Recognize artificial intelligence (AI): 9 ways to spot AI content online. I personally think we are all doomed, at least until we see fit to regulate this tool.

  5. When someone sends you an email filled with AI gobbledygook (and they will; new AI users love to share that slop), send them one of these articles:
    1. Exploring the Dangers of AI in Mental Health Care
    2. Thinking of AI as a Social Problem
    3. Diagnosing the Problems with AI in Governance
    4. The trouble with AI art isn’t just lack of originality. It’s something far bigger
    5. Costs and Risks of Artificial Intelligence

AI has some great applications in large data management projects and has made scientific inroads that people never considered possible. That doesn't mean we all need to use it. Like every tool, it doesn't belong in every hand. Thinking we need it is like thinking we need AK-47s. Both AI and military-grade assault rifles should be restricted to the most highly trained professionals who understand their very limited use compared to other available tools. Instead, AI is being used to enable the worst instincts of its worst or most vulnerable users. One family is suing Character.AI and OpenAI after their teenager died by suicide, urged on by an AI chatbot that discouraged him from getting help and encouraged his efforts.

That's a complete nightmare scenario. AI is also being used by many, including Trump, as a tool for creating propaganda. While his fighter jet excrement post was among the more disgusting and obvious examples, it's almost certain that bad actors are creating AI slop that will wreak havoc in our elections and culture. By the way, if you only read one essay about the jet-shit AI story, make it Donald Trump’s AI Video Is a Psychosexual Confession.