I accidentally sparked a vegan controversy: Why I’m now AI sober

In recent weeks, vegans on the internet have been very angry at me for calling myself “AI vegan.” I had some good reasons. They made some good points. Here’s how my first internet pile-on played out.

When I first said I was going “AI vegan,” I didn’t expect to catch so many people’s attention. 

It wasn’t a PR stunt or a positioning strategy. I didn’t set out to start a viral marketing campaign by co-opting the terminology of veganism to raise the profile of my content marketing agency. It sounds funny to me to lay it out like that. None of this was intentional. But that’s exactly what happened. 

How I stumbled into an “AI vegan” controversy

After being a generative AI power user for a while, I had reached a point where cognitive offloading was starting to make me feel, well, a lot less human. I am a writer. Writing is thinking. And offloading certain kinds of thinking, even with my human prompts in the loop, had made me feel like my mind was turning to mush. 

Like most great epiphanies, this one came to me while I was doing the dishes one Friday evening. I pinged a quick WhatsApp to a friend: “I want to go AI vegan.” The term “AI vegan” made intuitive sense to me. I was not only worried about the impact AI was having on my health as a consumer of AI slop (there is a growing field of research suggesting cognitive offloading to AI isn’t good for us). I was also making a bit of an ethical choice. After educating myself on generative AI, I was worried about the whole supply chain of the tech: how it’s harming people, planet and animals in various ways, from human data workers in large language model (LLM) development through to the power-hungry, super-thirsty data centres harming natural ecosystems around the world.

Of course, explaining all that to someone can get complicated, so “AI vegan” seemed like useful shorthand. I was not simply abstaining from AI but engaging with the ethical questions around my information diet. The term was, in fact, meant to flatter the idea of what it means to be vegan in relation to the health and ethics of food systems.

And as soon as I floated the term in my LinkedIn feed, I uncovered a huge chunk of the population equally concerned about the broader impacts of generative AI on culture, work and life as we know it. And so, this AI vegan thing really started to pop off. AI veganism got featured in The Guardian, The Conversation, and on podcasts and Twitter feeds all over — and, yes, in vegan spaces, too. 

I somehow found myself at the epicentre of a spicy Reddit thread, catalysed by ‘Earthling Ed’ Winters — vegan activist — who put me firmly in the crosshairs on his Substack, in an article titled “‘AI vegans’ are a thing now, apparently.” The Reddit thread oddly turned a bit profane, so let’s stick with Winters here for decorum. 

To Winters, my AI vegan stance is a “terrifically absurd encapsulation of the modern world.”

“Horrified” and “frustrated,” Winters dismissed my thinking as a marketing tool to promote my LinkedIn content agency. To him, it was nothing more than a “reductive,” “basic,” “attention grabbing” “clickbait” tactic. 

The article ends with a kind of death wish: “Here’s hoping this is the last we hear of the term ‘AI vegan.’” (Nice to meet you too, Ed!)

Why ‘AI vegan’ still kind of makes sense to me

The criticism levelled at me by Winters and plenty of angry Redditors misses a few key details. 

First, they’re not wrong to point out that I did post about going AI vegan on social media, and it did result in a significant amount of awareness and attention. But that was never my motivation for doing the thing, or for talking about the thing. It just took off. That’s how social media algorithms work. 

Second, the sharp critique of my use of the term seemed, to me, at odds with what Winters said veganism stands for. As he wrote in the piece: “In the same way that humanism is an ethical philosophy that prioritises using reason and empathy when it comes to our treatment of one another, veganism is an ethical philosophy that does the same with non-human animals.” 

Sure, call me a humanist: I fundamentally believe that the current suite of generative AI tools has been built on, and continues to enable, human exploitation. Whether it’s the workers in the developing world paid sub-living wages to filter the worst corners of the internet and decide what goes into LLM knowledge bases, or the IP and creative intelligence of millions of people whose work has been hoovered up into the reams of information LLMs are trained on, exploitation of sentient beings is fundamental to the construction and operation of generative AI. 

And then there’s AI’s negative impact on the climate crisis, on environments and ecosystems, where — certainly — the harm extends to all animals, including humans. This isn’t my opinion but the subject of ongoing research from scientists and journalists, most recently including Karen Hao’s instant-best-selling book, “Empire of AI.”

Not using these tools seemed aligned with the ethical principle of veganism that we should not exploit sentient beings for our own gain. Not only is exploitation part of the generative AI construction process; it seems to be part of the value proposition. The sycophancy, the engagement-optimised interface, the built-in prompts that drive users to stay on platform, the forced adoption by Big Tech companies: these are all exploitative tactics to keep us feeding the beast. Did you know? Unless you toggled the setting off, you’ve been automatically opted in to allow LinkedIn to use your content and data to train its generative AI models. And just wait until OpenAI et al. inevitably start using the data you’ve shared with your ChatGPT “therapist” against you in advertising. 

Just as milk doesn’t come from a carton, those generative AI outputs don’t come out of thin air. Like our food, like our fashion, AI has a supply chain underpinning it. AI vegan is an intentional choice to denote those commonalities. 

The Redditors going after me have clearly thought deeply about their ethical choices. In the same way, quitting AI felt like an ethical choice I could control: a step in the right direction I could actually take. Using vegan in the AI context reflects my concern about what my brain consumes, and where that material comes from. It’s a nod to the negative externalities created in our system by AI. 

I maintain: AI veganism was not intended to bait clicks, create outrage, or minimise veganism in any way. And there are, in my opinion, several important parallels between the ethics of veganism and the ethics behind my decision to abstain from AI. 

And the fact remains: the AI vegan idea made a lot of sense to a lot of people. I’m hopeful this is a positive sign that maybe we aren’t as far apart in our worldviews as the algorithms would have us believe.

I found my next “clickbait” label in the comments

No pain, no gain, right? I couldn’t help but dive into the comments on Winters’ Substack. The comments section was awash with comfortably anonymous users, revealing only their first names, who described my take as “pure cringe and just idiotic.” “Disrespectful,” “nonsense,” and more. 

But there was one comment which stood out, and got me thinking. It wasn’t a personal attack, didn’t frame me as a sinner for misusing the term vegan, and actually felt thoughtful?!

AI sober would have made significantly more sense, especially as more evidence is found that consistent use of AI is changing brain activity and mimics addiction.

‘AM’, commenter on Winters’ Substack

That resonated with me for a couple of reasons. 

The first reason: it points to one of my key motivations for resisting use of AI. As I wrote in one of my earlier essays on my AI vegan experiment:

Something’s been nagging at me in the back of my mind for the whole year. This sinking feeling; a gradual slide into dependency … Willpower has never been a strong suit of mine. And novel technology is notoriously hard to avoid once it sinks its teeth in. So I feel myself sliding down this slippery slope, gradually having LLMs take over more and more significant chunks of my work. 

The second reason: January 1, 2023. 

Nearly three years ago, I made the choice to stop drinking alcohol. 

I realised I’d lost control of my consumption. I drank because it was a good day, a bad day, the start of the week, end of the week, middle of the week. I drank to numb bad feelings and amplify good ones. I wasn’t drinking much, but the thought of stopping for three days felt like an eternity. I was sleeping worse and worse, getting more and more lethargic, and had less energy with the kids. I realised I had to stop. 

This year, I’ve felt generative AI slowly atrophying my most important, powerful muscles: critical thinking, creative writing, and blue-sky ideas. The more I used AI, the fewer interesting thoughts I seemed to have. It was making me less effective. It went from helpful, reliable thought partner to a kind of thought dealer.

I hope this doesn’t make sober people mad at me online

At the risk of enraging the sober community (again, I am one of you), I’m now calling myself “AI sober.” Do I need a term to say I am not using AI? To be honest, it helps. It’s a friendly way to start a conversation with people who aren’t totally lost in the sauce of AI hype. I can say, “yeah, I’m AI sober,” and then they kind of get why. Thank you, commenter on Winters’ Substack. I am now AI sober. That’s settled.

However, this whole episode has also forced me to reflect on the state of online discourse.

Our increasing reliance on algorithms as gateways to information creates a tendency to flatten culture, as Kyle Chayka wrote in his 2024 book, “Filterworld.” It also squashes nuance and establishes echo chambers. 

This flattening feels like the walls closing in all around us. In response, human creators who want to be seen and heard outside their own bubble are nudged to develop increasingly polarising perspectives. This is an interesting case study of how the game is played. I unwittingly managed to pierce through the echo chamber of the algorithms, my feed and my audience, and pop up in different spheres of conversation. A clash of perspectives ensued, as it does.

If nothing else, this situation proves to me the importance of engaging our brains and making reasoned arguments. It is the one true way to evolve our thinking, grow as people, and ultimately view the world with more empathy and understanding. And it’s true now more than ever in the age of AI slop, sycophantic “yes man” AI tools, and increasingly constrictive algorithms serving up our content and putting blinkers on our brains. 

Making a reasoned argument, engaging other opinions, taking feedback and critique on board, and evolving my thinking: this whole episode has probably been the most public example of that process in my life. 

Was I a little bummed out about being so misunderstood? Sure. But I’m reminding myself: this is what it means to think critically, to put thoughts out into the world, and to shape ideas that matter.

It’s also, after all, what it means to be sober: to live in your own brain, under the influence of your own thinking, and to celebrate what it means to be gloriously, openly, honestly human. 

In other words, hello, I’m Joe, and I’m AI sober.