Politicians and tech billionaires want us to believe AI will solve every problem. But automation has always delivered gains to owners, not workers. AI risks concentrating power, destroying purpose and undermining democracy. This video explains why real productivity comes from education and care, not code, and why a humane future means using AI to support people, not replace them.
This is the audio version:
This is the transcript:
AI - artificial intelligence - is being sold as the answer to every problem that we now have in our economy, from productivity to care.
And don't get me wrong, I like and use AI probably more than most people do at present. But technology cannot replace purpose, meaning, or justice. And the future of work is not just about being smarter or about technology; it is about the creation of fair societies. AI will not save us unless we first decide what we want to save.
The myth of automation has been the basis of every single industrial revolution. All of them have promised liberation through technology, and each time the gains have gone to owners and not to workers. AI risks repeating that pattern on a larger, and maybe much faster, scale, given the current take-up of the technology. Without redistribution of the gains that will arise from AI, automation will, in fact, guarantee only one outcome: deepening inequality.
So let's be clear about what AI really does. AI does not think. It predicts patterns from data created by humans. In essence, the vast majority of AI at this point in time simply guesses the next word in a sentence based upon a pattern that it can recognise, and that's all it does.
It cannot create value.
It can only rearrange it.
It amplifies existing biases in the material it's learned from, and particularly those in Wikipedia, if you're using ChatGPT.
It also amplifies the inequalities built into our existing data, which, by and large, is biased towards those with power.
AI does then reflect power, but it doesn't challenge it, and in particular, it can't pose the awkward questions that those in power would rather we didn't raise, because quite simply, that's not within its capacity.
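The next-word-guessing idea described above can be shown in miniature. This is a deliberately crude toy sketch, not how any production model actually works: real systems use neural networks trained on billions of words, but the principle is the same, predicting the next word from patterns in human-written text. All names and the tiny corpus here are illustrative.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" of human-written text (illustrative only).
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def guess_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("the"))  # "cat": it follows "the" twice, "mat" only once
```

The point of the sketch is that nothing here understands anything: the "model" simply replays statistical regularities in the text it was given, biases included.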
The productivity illusion inside AI is what makes it deeply pernicious. Our politicians hope that AI will solve Britain's economic stagnation. You've only got to hear Keir Starmer, Rachel Reeves and other members of the cabinet talking about AI, and you would think it is going to solve every problem that we have. And all of that is based upon the thinking of the Tony Blair Institute, which seems to be remarkably aligned with the AI bosses in California.
But they're wrong. Real productivity does not now come from investment in tech. It comes from investment in education and care, not code, because that is where value is added.
Automation without reform simply cuts costs; it doesn't raise well-being. We cannot automate our way out of bad policy, and we have bad policy at present. We therefore need education, and investment in it, above all else, so that we can ask the right questions about what AI can do for us.
There's a human dimension to all this, of course. One of the claims is that AI might create more leisure. But that also misses a vital point. Of course, we all like our leisure time, but for the vast majority of people in the UK, work provides them with an identity, purpose, and social connection.
Imagine yourself for a moment at a party. What is the first question you often ask somebody, even though you are told you shouldn't? It is "What do you do?" And the answer is, well, whatever it might be. But you can be guaranteed it will almost certainly be a job description. People get their identity from work.
So, replacing people with algorithms erodes identity, purpose, and social connection. And when human contact becomes a cost to be cut, society fragments, and that is one of the real dangers of AI. A humane economy values labour as a contribution, not as an expense. And AI might change that perception.
AI concentrates power. AI concentrates political power, and we can see that already. Our political masters are using it as a tool to advance their own causes, but as dangerous is the fact that it concentrates power in corporations that own our data and the infrastructure.
A recent outage at Amazon in America revealed just how vulnerable we are to that corporate power now. HM Revenue and Customs, some UK banks and other organisations were all brought down by a simple fault in a data centre somewhere in the middle of America. That is the consequence of concentrating power: we have handed over responsibility for our state infrastructure to a corporate entity that may not manage it very well.
The government, in the process, has outsourced judgment to algorithms. And in a sense, it's abandoned democracy because we are no longer accountable for the delivery of services; somebody else is.
And when machines do in turn decide who works, who gets paid, and who is watched, then freedom shrinks because all of that is also possible as a consequence of AI.
Technology without ethics becomes tyranny by code.
Now I've got to be real, AI is here. It's not going away. So we do have to accept that fact. Anybody who wants to pretend we can get rid of it is living in cloud cuckoo land. AI is as much of a change as the internet was, and the internet never went away, either.
But what we must use AI for is to augment, and not replace, human work.
We must use it to be a tool, but not as a master.
We must learn how to tell it what we want it to do and not be told by it what we will do.
And critically, we must understand that AI is not creative. It cannot do anything that hasn't been done before.
And it most certainly can't replace care.
In that case, we have to design policies to share productivity gains wisely, because they're based upon the sum of human knowledge and they don't belong to any one person. The goal is not fewer workers, but more fulfilled ones.
AI won't save us. People will. Technology is a tool, and it's not a substitute for justice or compassion, and it's most certainly not a substitute for judgment. The future of work must be built on that sound judgment, which drives care, cooperation and courage. The question is not what AI can do, but what kind of society we let it help us choose to build, and that is the question we all need to answer.
Taking further action
If you want to write a letter to your MP on the issues raised in this blog post, there is a ChatGPT prompt to assist you in doing so, with full instructions, here.
One word of warning, though: please ensure you have the correct MP. ChatGPT can get it wrong.
Comments
When commenting, please take note of this blog's comment policy, which is available here. Contravening this policy will result in comments being deleted before or after initial publication at the editor's sole discretion and without explanation being required or offered.
Thanks for reading this post.
There are links to this blog's glossary in the above post that explain technical terms used in it. Follow them for more explanations.
You can subscribe to this blog's daily email here.
And if you would like to support this blog you can, here:
I was speaking to a local government politician last night at an event about the local problems, and she said that AI can solve them all. What about the epidemic of loneliness, the lack of intimacy experienced by the young, or the falling birth rate? She said AI can solve them all. For real.
Bizarre
Presumably that local politician thinks that she can set up AI managed test tube factories in her borough or county to increase the birth rate!
Richard – hi!
The economics and politics of care is something you have introduced to me on this blog as with much else. I’m signed up to it OK? I’m convinced of its soundness.
I think now though, you need to do what you have done to all the other ideas you’ve brought to this blog and bring in the wider advocacy of the ‘care phenomenon’, link it up, network it with other work going on about it.
I’m not suggesting that you are doing anything wrong – it’s just that ‘care’ is such a powerful idea that it needs a sort of phalanx behind it – for example, you’ve worked with Steve Keen and Danny Blanchflower on economics. Doing this might also encourage readers to read more about it if signposted.
I know that there is only so much you can do in a day of course, but I cannot post wider links on my PC as the comment will not post on FtF.
The economics and politics of care is an idea whose time has come. We need it. I mean, really need it.
I might mail you
I think that once AI enables an individual to get up each day and follow their calling as a carer for the vulnerable in society and be rewarded handsomely for it then we really will be making progress.
“You’ve only got to hear Keir Starmer, Rachel Reeves and other members of the cabinet talking about AI, and you would think it is going to solve every problem…..”
Which is why UK warships are being built using Filipino welders… LINO focuses on bullshit & fails to deliver what matters – UK citizens with skill sets.
Couple of ex colleagues dinner @ weekend: them: “don’t go to college – A.I. will eliminate many jobs needing a degree – get a craft/skill”. But in the eyes of LINO that’s not interesting – nor does it pull in the money (aka bungs to LINO for gov contracts). So, won’t happen.
The UK, great place for data centres, but turning into a 3rd world country by the minute & soon with Fart-rage and the Deform chimps in power – by the second.
Reports online imply that the AWS outage that brought down so many services was the result of a failure by an automated AI based maintenance system that could only be fixed with human intervention. It is also alleged that this happened a few weeks after 40% of Amazon’s DevOps staff were fired so they could be replaced with AI. If true, the world is indeed mad and beyond saving if politicians and business leaders think AI is the answer.
https://www.theguardian.com/technology/2025/oct/24/amazon-reveals-cause-of-aws-outage
https://medium.com/@telumai/amazons-aws-allegedly-fires-devops-team-for-ai-days-later-the-cloud-collapses-637b72d59c26
Dr Murphy,
We need a shorter workweek.
If we continue to work as much each as we now do, AI will divide us into those who can work and those who can’t; whatever will we do when we have 30-50% unemployment? Even with a universal basic income, if we could get it (or maybe because of it), we will be divided into two classes, and I think that would in time destroy our society.
If we could get to a 24 hour week (at the same pay we have now) or whatever it turns out to be, then everyone could work. Then we would have the leisure you speak of in addition to our work that helps give life meaning.
But how to get there? We would have to learn a new trick: to strike for less work apiece at the same pay instead of more money or benefits.
For instance, start by going to a 36 hour week, then 32, then 28, then 24. It might take a few years.
Of course it would have to start with firms that have enough money to do that. That’s why we can’t just change the laws all at once; it would bankrupt many of them.
Such a strike, becoming more general over time, would draw more money from top to bottom and redistribute it over time. Then smaller firms could follow suit.
AI could be ideal for helping us redistribute both work and money without bankruptcy and suffering such as we would get trying to impose it from the top or blindly go at it ourselves.
If we could get to a 3 day, 24 hour week, or a 5 day, 25 hour week, we would have time for our family and friends again, to help our neighbors, go to church or the park and spend time socializing. I’m sure you can imagine it.
I know striking is discouraged in many places. The change might require a little civil disobedience, a boycott or protest now and then, but we could certainly do it if we just learned how.
That means somebody needs to speak up and teach people. I’ve been trying since the middle of the Great Recession, studying and writing emails and talking and getting nowhere. Perhaps you would have a better idea of how to go about it.
Thank you so much for listening.
Judy Blacwell
Retired Librarian
Small Town Arkansas, USA
Thanks
Noted
And I might be Prof Murphy but I am not Dr Murphy – I skipped that bit, although three universities have invited me to submit a body of existing work.
I wonder if we are seeing AI in its final format?
1. Users are seemingly critical of the quality of much of the software being produced (I worry about the built-in faults in the bits that do get used in anger).
2. The summary below (AI-produced) of part of an article casts some doubt. Sadly, his conclusions are behind a paywall. But this, together with recent questions and doubts about the whole AI financial edifice, suggests stormy times for all. It makes me wonder how much will survive.
Silicon Valley Is Obsessed With the Wrong AI by Alberto Romero (Substack – Algorithmic Bridge)
From: https://www.thealgorithmicbridge.com/p/a-new-type-of-ai-could-knock-off?r=4b7te1&triedRedirect=true
The article, “Silicon Valley Is Obsessed With the Wrong AI,” argues that the industry’s focus on scaling Large Language Models (LLMs) represents an unhealthy expansion of value. This approach, characterized by enormous investments—potentially $1 trillion in datacenters—relies on brittle models that struggle with reasoning and require computationally expensive techniques like Chain of Thought (CoT). CoT forces LLMs to reason “out loud” in the discrete space of words, which critics view as a shallow architectural “patch” used to achieve commercial viability, sacrificing scientific elegance.
An alternative paradigm was introduced by Sapient Intelligence with the Hierarchical Reasoning Model (HRM). HRMs were brain-inspired, tiny (27 million parameters), rejected the pre-training paradigm, and achieved impressive results on reasoning benchmarks like ARC-AGI using only 1,000 examples, surpassing leading CoT models.
However, the key breakthrough was not the bio-inspired architecture, but the HRM’s outer refinement loop (recurrent connectivity), which allows the model to iteratively refine its predictions. This mechanism, recurrence, is typically avoided by modern LLMs because it breaks the parallelism necessary for training on current hardware.
Building on this insight, Alexia Jolicoeur-Martineau developed the Tiny Recursive Model (TRM). TRMs simplify the recursive approach, using only 7 million parameters. They perform reasoning internally in the latent space, avoiding CoT, which requires expensive, high-quality labeled reasoning data. TRMs surpassed top LLMs (like Gemini 2.5 Pro and DeepSeek-R1) on ARC-AGI benchmarks using less than 0.01% of their parameters. This demonstrates that relying on massive foundational models trained by large corporations is a “trap” and suggests that algorithmic efficiency can challenge the scaling status quo.
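The "outer refinement loop" credited with the HRM and TRM results can be sketched in miniature. This is emphatically not the actual HRM or TRM code, just an illustration of what recurrence means here: applying the same small update function repeatedly to refine an answer, rather than making one enormous feed-forward pass. Newton's method stands in as the "tiny model", an assumption made purely for illustration.

```python
def refine(state, step, n_iters):
    """Recurrence: apply the same small update function repeatedly,
    refining the answer a little on each pass."""
    for _ in range(n_iters):
        state = step(state)
    return state

# Illustrative "tiny model": one Newton step towards sqrt(2).
# The same formula (the same 'parameters') is reused on every pass.
newton_sqrt2 = lambda x: 0.5 * (x + 2.0 / x)

print(refine(1.0, newton_sqrt2, 6))  # converges towards 1.41421356...
```

The contrast the article draws is that modern LLMs avoid exactly this pattern, because looping on your own output breaks the parallelism that makes training on current hardware cheap; the claimed insight is that a small model which loops can beat a vast one that does not.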
The final part is behind a paywall
VII. 7 huge implications for the AI industry (if TRM works)
AI is not perfect.
It is massively overvalued, I am sure.
But, is it going away? No, so we have to deal with it.
What is your plan?
I agree, we do have to deal with it. It isn’t going away voluntarily. But the LLM based version seems to me to have ‘issues’. So, I wonder if it, and the Agents it has spawned, will be the final AI animal that we use. To me there are signs that it may not be.
Being retired, I play with it as I try to understand it. I ask many questions of it. One response in particular stands out. For businesses allowing AI agents to become deeply integrated into their workflow, reversion to human operations after 1-2 years may well be just about possible, but expensive. After 5 years the expectation would be that the workflow would have been altered by those agents to the extent that any realistic chance of human operation would have gone. The answer that came back to me was that this will be the new serfdom. Changing supplier would then be impossible as well. The AI companies know this. You can expect to see costs jacked up as a result. It will be the gravy train par excellence. The message given was to keep human involvement and keep humans in control. Use AI as a tool only, just to assist humans.
Then changing supplier can occur. Also, when there is the inevitable AI systems outage (the data centre power fails or the communications fail), humans can take over and business can continue, even if at a lower rate. Customers won’t be totally p***ed off.
If there is an outage, what happens? Supply chains, now extensively using AI (even if only in planning), will fail, and shelves will empty rapidly as people realise that the “just in time” paradigm has broken, even if only temporarily. Then, oops: there’s a political crisis, as people are hungry and expect the government to have a plan – for there to be warehouses controlled by them with stocks of food. Of course, that’s pie in the sky; the government won’t even have thought they might need a plan.
So that would be my plan – never to be implemented ….. so I can be harmlessly wrong!
But businesses need to get their bet right!
Talking of Amazon.
They just comfortably beat earnings expectations — a 13% year-over-year increase in sales to $167.7bn (£125bn).
But…
Amazon has confirmed it plans to cut thousands of jobs, saying it needs to be “organised more leanly” to seize the opportunity provided by artificial intelligence (AI).
The tech giant said on Tuesday it would reduce its global corporate workforce by “approximately 14,000 roles”.
https://www.bbc.co.uk/news/articles/c1m3zm9jnl1o
Re care, this is what they seem to have planned for us in old age.
These robots can clean, exercise — and care for you in old age. Would you trust them to?
https://www.bbc.co.uk/news/articles/c9wdzyyglq5o
X has now unveiled its AI-written alternative to Wikipedia, called Grokipedia.
See, for example: https://grokipedia.com/page/Modern_monetary_theory
Weird look.
Decidedly limited subjects.