GenAI

Embracing authenticity in a sea of GenAI

Similarly, a report from Deezer last year showed that over 50,000 GenAI tracks are uploaded every day to the music streaming platform. 50k! Every single day! We’re literally drowning in culture, eh? People use AI to generate entire bands for Spotify, then use bot farms to drive up the streams to get paid. Humans have been cut out of the loop entirely! What a time to be alive.

Forgive me for going Full LinkedIn, but this really did get me thinking about my job… and communications, and social media, and marketing, and content, and the arts. In a world in which there is an abundance of almost everything arts-related (music, visual imagery, video, literature) the only things there aren’t an abundance of are authenticity and human creativity – and they become even more valuable as a result. We need to remember this, and be prepared to swim against the tide to defend it.

This also got me thinking (stay with me here…) about mortality. The Venn diagram of tech bro billionaires who are interested in ‘longevity research’ (for which read: living for an incredibly long time beyond the expected human life-span) and who are interested in GenAI output more or less replacing human output, is a circle. I don’t think that’s a coincidence.

I’m old-fashioned in that I think the thing which gives life *meaning* is that it’s finite. When you have an infinite supply of something – be that literature, music, or even immortality itself – the overall value of it inevitably goes down. But there is (forgive me, again) an opportunity here: to define ourselves through opposition. To take the artisanal route, rather than hanging onto the coattails of mass production.

If we end up in this landscape where 90% of what we consume is just AI slop, that means genuine cultural experiences will become incredibly valuable. Not to everyone, but to some people – and I think those people are the ones interested in arts, and culture, and education, and progress. This is how we can stand out, this is how we can engage. Do you want to be the 6 millionth account to post some second-rate GenAI imagery? Wouldn’t you rather be the exception that only posts genuine pictures? Wouldn’t you rather be part of the group who proactively reclaim music, and literature, and videography, and even something as relatively prosaic as a social media post, as something of value - and nurture that?

The social media accounts I run for my org don’t use any GenAI music, imagery or video. We post less than we would if we did use that stuff, because it certainly speeds things up. It is incredibly quick to produce marketing materials with GenAI. But most of them are basically rubbish. The frictionlessness of GenAI somehow seeps through – and no one really cares or learns, or remembers. And of course, many people will completely write you and your content off if you use GenAI in your posts or your slides or your website or your newsletter. They feel that if you aren’t prepared to create content yourself, they shouldn’t have to consume it either. They feel – whether you intend this or not – like you are treating them with contempt when you use GenAI.

Some people will be fine with all AI slop, all the time – but a lot of people won’t. Creativity is no longer technically required to write books or music, but it is required for OUR sake as humans. We will increasingly need to reclaim art as more than just background noise and filler. We need to reclaim the ACT of creation as valuable, not just the product. And as everything else gets watered down and diluted into meaninglessness, human experiences and connectivity become more valuable than ever. Write that music. Write that book. Write that social media post using your own brain and your own words.

Don’t give in to the idea that GenAI is an inevitable, unstoppable force: challenge all of those people who say ‘AI is here now - you can’t put the toothpaste back in the tube!’. To quote my friend Simon Bowie:

“When I accidentally squeeze out too much toothpaste, I don't then shove it all in my mouth. I wash it down the plughole.”

What do we *know* about GenAI?

There is an unending stream of reports, think-pieces, puff-pieces and hit-pieces on GenAI. You’d need a lifetime to get through it all even if they stopped being written tomorrow.

The trouble is, so much of it is speculative. GenAI shills talk uncritically about what it may be able to do, and fudge the lines around what is really achievable. Equally, I read pieces by GenAI sceptics that confidently claim ‘GenAI cannot do X’, when in fact that was true 6 months ago but the extraordinary pace of technological development means it is not true now. There’s also a lot of ‘what-aboutism’ in AI discourse - sure there’s a huge environmental cost to using it, but what about email, that uses electricity too? Etc etc.

So if we try and get a handle on where we stand, and want to move beyond the back-and-forth, what do we actually know about GenAI?

GenAI erodes cognitive function

A 2025 MIT study (summarised in Time here) used EEGs to record brain activity in controlled groups. The ChatGPT-using group had the lowest brain engagement and “consistently under-performed at neural, linguistic and behavioral levels.” The study lasted months, and the GenAI users got worse and worse over time.

There are several other studies that document this ‘cognitive offloading’ - when we outsource creativity and criticality, we lose the ability to do it for ourselves. That makes sense: if we got someone else to do exercise for us we’d probably lose fitness, too.

GenAI doesn’t reduce workload for regular employees

Leaders and managers love GenAI. The pantomime villain bosses love it because it offers the promise of achieving the same results with fewer staff members, thus saving money. But even the well-meaning managers in the public sector seem to love it, because it offers their staff efficiency gains and saves them time to concentrate on the things which really matter.

But does it, though?

Harvard Business Review published an article this month confirming AI doesn’t reduce work - in fact it intensifies it. In an eight-month study, it was found that employees worked faster, took on more tasks and worked longer hours (for the same pay, of course) thanks to GenAI. “Once the excitement of experimenting fades, workers can find that their workload has quietly grown and feel stretched from juggling everything that’s suddenly on their plate. That workload creep can in turn lead to cognitive fatigue, burnout, and weakened decision-making. The productivity surge enjoyed at the beginning can give way to lower quality work, turnover, and other problems.”

GenAI gets things wrong a lot of the time…

The marketing genius of the word ‘hallucination’ to explain GenAI errors lies in suggesting an entity that can think for itself, and sometimes makes things up or hallucinates. In reality GenAI is a massive exercise in pattern recognition, and the process by which it gets things ‘right’ and gets them ‘wrong’ is exactly the same.

Because of this, the BBC found it misrepresents the news a massive 45% of the time. A recent report found Google Gemini is in fact returning hundreds of thousands of wrong answers each minute.

This is catastrophic.

— Futurism (@futurism.com) April 9, 2026 at 12:31 AM

I don’t know a single person who has asked GenAI about something they have deep knowledge of, and still rated GenAI highly after reading the results. Not one. I don’t know a single person who has used GenAI to take minutes at a meeting, and then continued to do this after checking the minutes properly for accuracy. To use GenAI in earnest is to learn how limited it is.

…But we use it anyway

We can all talk about how ChatGPT ‘isn’t a search engine’ till we’re blue in the face, and GenAI tools can add a disclaimer saying we should ‘always verify results’ as many times as they like, but we know what humans are like - we’re not checking the results, even in incredibly important things like Police decisions, because that would take more time than just looking up the data properly in the first place.

Google Gemini is a punchline - there are countless examples of it getting things hilariously wrong in its summaries. But Google doesn’t care, because far fewer people are clicking on the links in the search results - they’re staying on Google and just reading the AI.

GenAI already has a high body count

Many GenAI tools are ready to act as a ‘suicide coach’, as shown in cases already going to trial. The ‘deaths linked to Chatbots’ Wikipedia page is steadily growing. A study has been published showing how dangerous GenAI medical advice can be, with examples including bogus information about liver function tests which would mislead people with serious liver disease into wrongly thinking they were healthy.

GenAI is built ENTIRELY on stolen intellectual property

The Large Language Model GenAI tools are built on data they stole - and in fact OpenAI has said it would be impossible to create tools like ChatGPT without using copyrighted material. Ah well, fair play lads - on you go, then.

GenAI doesn’t actually save most companies money

In 2025, MIT found that despite investing billions of dollars into GenAI, most major companies are not seeing any return on their investment. In fact, 95% of GenAI pilots are failing.

GenAI companies themselves don’t actually make money

OpenAI make the wildly successful ChatGPT - what do you think their profit was in 2025? $1 billion? $2 billion? Not quite - they made an $8 billion loss. Their own internal documents predict a $14 billion loss this year. They’re committed to spending $1.4 trillion, with no road to profitability by 2030.

“OpenAI's losses will total $143 billion between 2024 and 2029, the "largest startup losses in history," Deutsche Bank analysts wrote in a December 4 note. HSBC researchers said in a late November report that they expect OpenAI to have a $207 billion shortfall by 2030, even when modeling for significant boosts in revenue” says Business Insider.

Anyway: the world seems all-in on this tech, but it may be prudent not to become over-reliant on it, for all of the reasons above.


Not to mention all the other things (you can find a pretty exhaustive list on Sarah Winnicki’s site) like the extraordinary electricity use and habitat destruction of the data centres, the fact that one data centre can use the same amount of water per day as a town of 50,000 people, the amplification of violence against women, the huge cost to the creative industries of replacing skilled humans with utter slop, the fact that GenAI’s output is racist as hell (oh and sexist, and ableist, and homophobic).

Not to mention any of the horrendousness of Grok, which really belongs in a category of its own, but sadly isn’t, because the other GenAI tools now feed off Grok to inform their own responses, as GenAI eats itself, excretes itself out, and then eats its own waste, like some sort of terrible apocalyptic dog.

And I won’t mention that our use of (and Government investment in) GenAI pours money into the coffers of literally the world’s worst people, because that’s just my subjective opinion, and this post is all about what we KNOW about GenAI.

Reading all that back, it’s hard to get enthusiastic about the widespread adoption of this technology.