The 5 stages of AI grief
A few years ago, I was sat at the back of a town hall meeting at 11:FS, as our CEO talked through our strategy for the year and where he saw the major focus points and innovation opportunities.
"...and of course, AI is going to be huge. It's going to revolutionise businesses and propositions!"
Before I could help myself, I shouted out "AI is a load of crap!"
(Actually, I said something rather more fruity and Anglo-Saxon than that, but this is a family blog and I don't want to bring the tone down.)
I'm not normally one to interrupt the boss mid-flow, but something about the near-constant hype around AI, combined with very little evidence at that point that it might actually be a world-changing technology, flipped a switch in my brain and I couldn't stop myself from speaking up.
Denial: It's artificial, but it's not intelligent
At the time (circa 2022-ish), AI was on the roadmap of every startup, every fintech and even traditional banking platforms. We were promised clever AI features that would help us budget our money even better, AI agents that would scour the web for the best deals and automatically switch our Direct Debits, and AI assistants that would write all of our reports and presentations while seamlessly matching our writing style.
None of this was shipping in 2022. We had early versions of ChatGPT, which could perform some tasks very well but were prone to 'hallucinations', where the results you got could be wildly inaccurate or just plain wrong.
At the root of my rebellion against the AI onslaught was the fact that AI is a terrible term for the technology. What we colloquially refer to as "AI" is in fact just a very, very large language model that has been trained on huge swathes of data so that it can make a statistically likely guess at an answer to a given question.
It's definitely artificial, but it's certainly not intelligent.
Anger: It's just guessing!
AI models can't actually think or reason their way to a solution. Simply put, they're just guessing the answer, like a student who's stuck on a multiple-choice question and plumps for whichever option looks most likely.
Sometimes they can be frighteningly accurate at guessing, so much so that it can make you believe there's an intelligence behind it, but there isn't. They're just really, really good at guessing.
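To make the point concrete, here's a deliberately silly Python sketch of that kind of guessing. It's nothing like how a real model is built (real LLMs work over tokens with billions of learned parameters, not a little word-count table), but the spirit is the same: pick a statistically likely continuation, with no understanding behind it.

```python
# A toy "next word guesser": it simply remembers which word followed which
# in the training text, then guesses the most frequent follower. No real
# model works like this internally, but the output is still just a
# statistically likely guess rather than a reasoned answer.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog the dog chased the ball"
)

# Count which words tend to follow each word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def guess_next(word: str) -> str:
    """Return the most frequent follower of `word`: a pure statistical guess."""
    candidates = follows.get(word)
    if not candidates:
        return "<no idea>"
    return candidates.most_common(1)[0][0]

print(guess_next("the"))  # "dog": it followed "the" most often in the text above
print(guess_next("sat"))  # "on"
```

Swap the toy corpus for a sizeable chunk of the internet and replace the counting with billions of learned parameters, and the guesses start to look eerily like understanding. They aren't.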
It made me very cross that people were lauding AI as a revolutionary new technology made by Silicon Valley geniuses that would change our lives, when it felt like so much smoke and mirrors to me. None of the revolutionary 'powered by AI' features that tech companies, banks and fintechs shipped in the subsequent years actually appeared to do anything useful.
In the case of banking apps, it very much felt like product teams had integrated some form of LLM that did very little, slapped a "Now with even more AI!" strapline on their Tube ads and then got back to making colourful debit cards and rendering our bank balances in lovely typefaces with emojis.
Bargaining: If only everyone would see it's all a magic trick
I resolutely ignored the progression of AI models and the rise of companies like OpenAI and Anthropic, even as their offerings became ever more capable.
"If only the world could see that AI is not in fact intelligent at all", I thought, "but just a clever magic trick, then maybe it would all go away and we could get back to building 'proper' products and technologies."
I thought that if I considered AI as the equivalent of Web 3.0, an inaccurate term to describe something that didn't really exist, then the noise and hype might all blow over in a year or two.
Depression
I wish everyone would shut up about AI.
Acceptance
At the end of 2024, I made a big change in my professional career. I had spent the previous 7 years as the lead engineer for 11:FS Pulse, building the platform and product up from scratch into one of the leading financial insight and competitor analysis tools in the world. Post-Covid, I'd delivered feature after huge feature in a relentless blur and I was beginning to burn out.
In another corner of the 11:FS office, FoundryOS was just starting to gather momentum and they were looking for someone to head up Product Engineering and help to build out the platform and tools. They offered me the chance to join the team and I jumped at the opportunity. This seemed like the perfect time to reset and take on a fresh challenge. I'd have a six-month transition period from my old role into the new FoundryOS position; plenty of time to hand everything over and ramp up into the new job, I thought. This will be fine. This will be easy!
I was very wrong. Leaving behind a product I'd owned for the last 7 years and a role I'd grown into, hiring a replacement, onboarding said replacement and handing over 7 years' worth of knowledge while simultaneously onboarding to FoundryOS and learning new languages, frameworks and concepts was a ridiculous challenge. I found myself unable to cope with the constant context switching and mountains of new information to take in.
In a fit of desperation, I thought, "What if I use an AI bot to help me?" In the years between my town hall outburst and joining the FoundryOS team, AI models had come a long way. Vibe-coding had become a thing, although it probably shouldn't have. Friends of mine were using AI to write their emails, make their presentations, even write their code for them. Why not give it a try?
I signed up for a Copilot plan on GitHub, set it up in VSCode and started asking it questions. The answers were largely good, too. I used it as a memory aid during my transition period between roles, where I was effectively working two jobs at once, each with its own entirely different tech stack. I used it to sanity-check my first commits for FoundryOS; it suggested useful tweaks and fixes. I even used it to write the framework for my transition plan from 11:FS to FoundryOS!
Another tool in the toolbox
Since then, I've accepted that AI models have a useful part to play in modern life. I am still sceptical of the wild claims that Silicon Valley companies make for their AI products, and I worry about the energy usage of data centres packed full of GPUs, consuming vast amounts of power and water. I'm concerned for the people whose roles will be replaced by AI, and I'm not convinced that replacing them will lead to the hoped-for efficiencies.
My feelings on AI have evolved, then. I now consider it another tool for helping to solve problems, but with some caveats. I think it's critical that you don't use AI to do all of your work for you, but instead treat it as a virtual colleague that you can brainstorm ideas with and ask questions of. A co-pilot, you might say. (sorry!)
There still needs to be human knowledge and experience applied to the output from these AI models. They're still not intelligent. From the perspective of a software engineer, you have to understand the code that they return and challenge and correct it when the model inevitably makes mistakes or hallucinates an API that doesn't exist.
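To give a made-up but typical example (not a transcript of any real Copilot session), here's the sort of thing you end up correcting by hand. The commented-out line is the kind of plausible-looking call a model will happily invent: json.read() doesn't exist in Python's standard library, while json.load() does.

```python
import json

# Write a small file so the example is self-contained.
with open("config.json", "w") as f:
    f.write('{"feature_flags": {"ai_assist": true}}')

# config = json.read("config.json")   # the kind of call a model might suggest: json.read() doesn't exist

# The real API: open the file yourself and pass the file object to json.load().
with open("config.json") as f:
    config = json.load(f)

print(config["feature_flags"]["ai_assist"])  # True
```

If you can't spot that kind of mistake yourself, the tool isn't saving you any time; it's just moving the debugging to later.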
With all that said, I wouldn't have been able to do my job effectively this year without using AI tools, and back in 2022 I never thought I'd say such a thing. AI is here to stay, and I'm a reluctant convert.