What do we *know* about GenAI?

There is an unending stream of reports, think-pieces, puff-pieces and hit-pieces on GenAI. You’d need a lifetime to get through it all even if they stopped being written tomorrow.

The trouble is that so much of it is speculative. GenAI shills talk uncritically about what it may one day be able to do, and fudge the lines about what is actually achievable. Equally, I read pieces by GenAI sceptics that confidently claim ‘GenAI cannot do X’, when in fact that was true six months ago and the extraordinary pace of technological development means it is not true now. There’s also a lot of ‘what-aboutism’ in AI discourse - sure, there’s a huge environmental cost to using it, but what about email? That uses electricity too. Etc etc.

So if we try and get a handle on where we stand, and want to move beyond the back-and-forth, what do we actually know about GenAI?

GenAI erodes cognitive function

A 2025 MIT study (summarised in Time here) used EEGs to record brain activity across controlled groups. The ChatGPT-using group had the lowest brain engagement and “consistently under-performed at neural, linguistic and behavioral levels.” The study ran for months, and the GenAI users got worse and worse over time.

There are several other studies that document this ‘cognitive offloading’ - when we outsource creativity and criticality, we lose the ability to do it for ourselves. That makes sense: if we got someone else to do exercise for us we’d probably lose fitness, too.

GenAI doesn’t reduce workload for regular employees

Leaders and managers love GenAI. The pantomime-villain bosses love it because it offers the promise of achieving the same results with fewer staff, thus saving money. But even the well-meaning managers in the public sector seem to love it, because it promises their staff efficiency gains and time saved to concentrate on the things that really matter.

But does it, though?

Harvard Business Review published an article this month confirming that AI doesn’t reduce work - in fact it intensifies it. In an eight-month study, researchers found that employees worked faster, took on more tasks and worked longer hours (for the same pay, of course) thanks to GenAI. “Once the excitement of experimenting fades, workers can find that their workload has quietly grown and feel stretched from juggling everything that’s suddenly on their plate. That workload creep can in turn lead to cognitive fatigue, burnout, and weakened decision-making. The productivity surge enjoyed at the beginning can give way to lower quality work, turnover, and other problems.”

GenAI gets things wrong a lot of the time…

The marketing genius of the word ‘hallucination’ to describe GenAI errors lies in suggesting an entity that can think for itself, and sometimes makes things up or hallucinates. In reality GenAI is a massive exercise in pattern recognition, and the process by which it gets things ‘right’ and gets them ‘wrong’ is exactly the same.

Because of this, the BBC found it misrepresents the news a massive 45% of the time.

I don’t know a single person who has asked GenAI about something they have deep knowledge of, and still rated it highly after reading the results. Not one. I don’t know a single person who has used GenAI to take minutes at a meeting, and then continued to do so after properly checking those minutes for accuracy. To use GenAI in earnest is to learn how limited it is.

…But we use it anyway

We can all talk about how ChatGPT ‘isn’t a search engine’ till we’re blue in the face, and GenAI tools can add a disclaimer saying we should ‘always verify results’ as many times as they like, but we know what humans are like - we’re not checking the results, even for incredibly important things like police decisions, because that would take more time than just looking up the data properly in the first place.

Google Gemini is a punchline - there are countless examples of it getting things hilariously wrong in its summaries. But Google doesn’t care, because far fewer people are clicking on the links in the search results - they’re staying on Google and just reading the AI summary.

GenAI already has a high body count

Many GenAI tools are ready to act as a ‘suicide coach’, as shown in cases already going to trial. The ‘deaths linked to chatbots’ Wikipedia page is steadily growing. A published study has shown how dangerous GenAI medical advice can be, with examples including bogus information about liver function tests that could leave people with serious liver disease wrongly believing they were healthy.

GenAI is built *entirely* on stolen intellectual property

The Large Language Model GenAI tools are built on data they stole - and in fact OpenAI has said it would be impossible to create tools like ChatGPT without using copyrighted material. Ah well, fair play lads - on you go, then.

GenAI doesn’t actually save most companies money

In 2025 MIT found that, despite investing billions of dollars in GenAI, most major companies are not seeing any return on their investment. In fact, 95% of GenAI pilots are failing.

GenAI companies themselves don’t actually make money

OpenAI make the wildly successful ChatGPT - what do you think their profit was in 2025? $1 billion? $2 billion? Not quite - they made an $8 billion loss. Their own internal documents predict a $14 billion loss this year. They’re committed to spending $1.4 trillion, with no road to profitability by 2030.

“OpenAI’s losses will total $143 billion between 2024 and 2029, the ‘largest startup losses in history,’ Deutsche Bank analysts wrote in a December 4 note. HSBC researchers said in a late November report that they expect OpenAI to have a $207 billion shortfall by 2030, even when modeling for significant boosts in revenue,” reports Business Insider.

Anyway: the world seems all-in on this tech, but it may be prudent not to become over-reliant on it, for all of the reasons above.


Not to mention all the other things (you can find a pretty exhaustive list on Sarah Winnicki’s site), like the extraordinary electricity use and habitat destruction of the data centres, the fact that one data centre can use as much water per day as a town of 50,000 people, the amplification of violence against women, the huge cost to the creative industries of replacing skilled humans with utter slop, and the fact that GenAI’s output is racist as hell (oh, and sexist, and ableist, and homophobic).

Not to mention any of the horrendousness of Grok, which really belongs in a category of its own, but sadly isn’t, because the other GenAI tools now feed off Grok to inform their own responses, as GenAI eats itself, excretes itself out, and then eats its own waste, like some sort of terrible apocalyptic dog.

And not to mention that our use of (and Government investment in) GenAI pours money into the coffers of literally the world’s worst people - but that’s just my subjective opinion, and this post is all about what we KNOW about GenAI.

Reading all that back, it’s hard to get enthusiastic about the widespread adoption of this technology.