I'm going AI vegan
A 30 Day LLM-free experiment
My LinkedIn Content Agency is going AI Vegan for the next 30 days. Here’s why.
I’ve been hesitant to contribute any thoughts to the frothing sea of opinion, hype, clickbait and doomsday advice about AI.
But something’s been nagging at me in the back of my mind for the whole year. This sinking feeling: a gradual slide into dependency.
I’ve used AI (specifically, a Large Language Model, or LLM, called ChatGPT) every single working day since July 2024. (Note: in this article, I use the terms AI and LLM interchangeably. This is technically not correct; so sue me.)
Since December 2024, LLMs have been a critical piece of infrastructure for my LinkedIn Ghostwriting client workflows, and for my own content creation.
In that time, I’ve noticed a worrying trend: the more I use tools like ChatGPT, the more my most valuable skills atrophy.
Critical thinking
Creative writing
Blue-sky ideation
When I read back, I can see it in my output.
Sanitised, homogenised, increasingly stripped of my personality and uniqueness. More indistinguishable and much more ignorable.
So, I'm disconnecting all LLM workflows within our business for the next 30 days. We’re going AI vegan.
At the end of this article you will find 4 key resources that shaped my thinking and understanding of AI, and led me to this experiment.
Some background
I’ve instructed dozens of Solopreneurs on how to use LLMs in their workflows. I’ve built more than a dozen CustomGPTs (also known as AI Agents) to create content for myself and clients. I’ve trained these agents on reams and reams of my own conversations, sales calls, and previous ‘handmade’ content produced for clients.
I’ve seen firsthand the efficiency gains, the speed, the volume of what these LLMs can help us produce. And I’ve proudly espoused these benefits to anyone who’ll listen about the smart ways we’ve deployed this tech inside my agency.
But I’ve also felt myself become completely numb to the words we’re creating.
LLMs help me when I view writing as ‘checking the box’. When the goal is ‘ship x posts this week’. LLMs are fantastic for that.
But I’ve realised that if I want to get people to think, feel or do something as a result of what I put out into the world, there needs to be struggle.
My personal reasons
As a high-functioning lazy person, there are many areas of my life which I seek to make more convenient. We eat frozen meals regularly. We grocery shop with Click and Collect or home delivery. I pay a handyman (or ask my Dad/Father in Law) for odd jobs around the house.
But I’ve come to realise that there are some areas in life, some critical skills, in which the obstacle is the way. The struggle is the point.
Whatever makes that thing hard, is the exact reason that thing is important to do. I’ve realised that, for me at least, writing is one of those skills.
It is F—ING hard to sit down and write an interesting, engaging, high quality, original piece of thought that makes someone else think, feel or do something. And that’s exactly why I should be pushing my brain and my willpower and my concentration to get better at doing just that. That’s exactly why I shouldn’t be trying to outsource my thinking and creation skills to a machine (especially a machine that can’t actually think).
Willpower has never been a strong suit of mine. And novel technology is notoriously hard to avoid once it sinks its teeth in. So I feel myself sliding down this slippery slope - gradually having LLMs take over more and more significant chunks of my work.
I’m putting a stop to that for (at least) the next 30 days.
My commercial reasons
I get paid to create words that will be published on the internet by (or for) clients. AI’s involvement in this process only ends in one way: the complete commoditisation of the whole act.
More content, written faster, published more often, cheaper to produce. I’m going to find myself caught in the race to the bottom. I don’t see that as a smart long-term strategy. I believe a market will emerge for ethically, sustainably sourced content & creativity in the same way there’s a market for sustainably sourced food & fashion.
My ‘thought partner’ reasons
Large Language Models don’t think. They work by predicting the statistically most probable next word in a sequence, having been trained on vast amounts of text from the internet. They do a fantastic job of imitating thinking - of producing language that looks like thinking. But please remember: your AI Thought Partner (at the time of writing) can’t think. It doesn’t have opinions. It has no concept of the reality you are living in.
I believe that makes it a fundamentally flawed Thought Partner.
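For the technically curious, the “most probable next word” idea can be sketched with a toy word-pair counter. This is only an illustration of the statistical principle - real LLMs use neural networks trained over tokens, not simple counts - and the corpus here is made up:

```python
# Toy illustration of next-word prediction: count which words follow which
# in a tiny corpus, then always emit the statistically most likely successor.
# Real LLMs do something far more sophisticated, but the core idea -
# "predict the most probable next word" - is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat because the cat was tired".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" twice, "mat" once
```

Notice the model has no idea what a cat or a mat *is* - it only knows which words tend to follow which. That, in miniature, is why I don’t trust it as a thought partner.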
So for the next 30 days, and beyond, I’m going to pick up the phone and talk to my human thought partners in life. The people I respect, whose opinions I trust. I’m going to ask for their advice. And I’m going to get back to thinking critically for myself.
I just know, deep down in my bones, that nothing worth having comes easy. And if it feels like a shortcut, if it feels like a hack, if it feels too good to be true, it is.
Below are 4 of the most interesting, well-thought out, and thought-provoking pieces I’ve seen from the world of AI.
Moon (2009 Film) - check out the trailer
This film, starring Sam Rockwell and Kevin Spacey, went sorely under-watched. Reflecting on it now, I think the themes and ideas it deals with are an interesting representation of where things could go with AI.
“I'm here to keep you safe, Sam. I want to help you.”
It’s just a little bit spooky.
What the AI bros won’t tell you - by Alicia McKay
This was the article that tipped me over the edge and into this LLM-free experiment.
It presents a grounded, well thought-out, considered opinion as to why we shouldn’t believe all the hype. McKay also lifts the lid on more of the social and ethical aspects of the AI revolution.
AI 2027 Research Paper
Warning: this paper is a LONG read. It presents an incredibly compelling ‘choose your own adventure’-style narrative which highlights just how high the stakes are. Without spoilers: this sits on the ‘doomsday’ end of the spectrum and is a very sobering read.
The Many Fallacies of ‘AI Won’t Take Your Job…’ by Sangeet Paul Chaudary
I love the way Chaudary weaves real world examples with intelligent, crisp writing and well-made logical arguments. The only problem? Chaudary doesn’t really provide any answers, just lots of head-scratching questions.
If you’ve made it this far, I’d love to hear from you:
What’s your perspective on AI (specifically, Large Language Models)? Do you use them in everyday life? Where on the hype curve do you place yourself?