We are told that artificial intelligence can replace human judgment. It cannot.
In this video, I explain why AI does not care, why it cannot exercise judgment, and why deploying it at scale embeds neoliberal values into decision-making by design.
Algorithms prioritise efficiency, cost reduction and rule-following. Judgment requires care, context, responsibility and democratic accountability.
This is not a technical debate. It is a political choice about the kind of economy and society we want to live in.
This is the transcript:
AI does not care. What it does do is reinforce neoliberalism, and that's what this video is about.
Let me be clear at the outset. AI cannot, in my opinion, exercise judgment, whatever Big Tech claims. That matters because, if it cannot judge, it cannot care, and when deployed in today's economy it does something worse still: it hard-codes neoliberalism into decision-making as if neoliberal thought reflected sound judgment, which it does not. This is the political danger implicit in AI.
We are told that AI can replicate human judgment; that it can decide more objectively and more efficiently, and even without the bias that we as human beings bring to our decision-making. That claim is now being used to justify removing humans from decisions that shape people's lives, but that is not progress; it is ideology disguised as technology.
Judgment is not the same as optimisation. It involves weighing competing values. It involves context, ambiguity, relationships, and responsibility. Above all, judgment involves care for people and not just outcomes.
What AI does is something quite different. It doesn't judge. It uses algorithms, and that is inevitable. A large language model uses the structure of language itself to work out how the pre-specified objectives it has been given can be achieved. In other words, algorithms rule the roost, and those algorithms are not programmed - particularly in the uses to which AI is going to be put - to question the objectives that are set for them. As a result, AI cannot care about who is harmed by it, and that is precisely why AI is dangerous. It decides without understanding meaning.
Algorithms are designed by people. Let's be clear about that. We are not talking about something completely remote from us humans. The trouble is that the algorithms likely to be used by AI encode assumptions about efficiency, cost minimisation, risk mitigation, productivity - which implies reducing labour costs - and compliance with the algorithm, not with the overarching judgment that a human being brings to their decision-making. These are not neutral values. They are codes that will inevitably reinforce the values of neoliberal economics.
And to make it clear, when AI is deployed at scale, it will prioritise efficiency over well-being.
It will treat people as data points.
It will replace discretion with rule-following.
And it will frame social problems as technical ones.
And this, in my opinion, is neoliberalism automated, and not challenged.
And let's be candid: neoliberalism has always sought to strip care out of decision-making, to replace judgment with rules, and to deny responsibility by invoking "the market" as the arbiter of what is of value. AI just completes this project by allowing decision-makers to say, "It wasn't us: the system decided." The robots will be put in charge by choice, in other words, at the command of those who pick the algorithm.
And AI is already embedded in things like social security eligibility and benefit sanction decisions, healthcare triage, recruitment and performance management assessment, and policing and surveillance. These are exactly the areas where judgment and care are indispensable, and yet the use of AI in them represents a retreat from the politics of care.
When a human being makes a bad decision, we can challenge it, we can appeal it, we can hold someone responsible.
When AI makes a bad decision in the future, responsibility will be diffused, accountability will be denied, and democracy will be weakened. That suits neoliberalism perfectly.
AI does not then merely reflect existing power structures. It stabilises them. It normalises them. It makes neoliberal decision-making appear inevitable, objective and unavoidable. That, I think, is the real political function of AI, and it is deeply dangerous to humankind that this is happening. A political economy of care requires human judgment, ethical responsibility, democratic oversight, and institutions designed for well-being.
AI could assist humans to do that. Let's not be unrealistic. It is a valuable tool, but it can only achieve that goal if humans remain responsible for the decisions because they accept accountability for the outcomes. Those who are currently promoting AI are challenging that hierarchy of power. They are saying, "Pass the decision-making over to the computer and get rid of the human involvement."
My conclusions from this are unequivocal. AI cannot exercise judgment. It does not understand what it means to be human, and it never will, and that is because it cannot judge, and so it cannot care. AI systems embed and reinforce neoliberal values by design. Delegating decisions to algorithms entrenches inequality and removes accountability. A caring economy requires human-led, democratically accountable decision-making, and this is something that AI can't deliver. I think that's a matter of fact.
This is not a technical debate. It is about political choice, and we must choose care. AI can't. Those who program AI can. Those who direct how AI is used can. But the danger is that AI is going to have neoliberalism embedded within it, and those who choose how it is used will say that its decisions dictate the outcomes and are what we must live with. I don't want to live in that world.
Every time Google Maps directs someone down the A359 (the Frome to Yeovil road, which is quite grim, and that's before you talk about the destination) when there is another possible route, it demonstrates that AI knows nothing. It's not a road you want to use if you can avoid it.
Might it be that AI is, at bottom, a human construction using human-created and/or discovered information?
If so, might AI be an instrument of organisation and not decision-making?
Interesting take, and largely true
But the organisation could be used against us
I agree with all you say above about AI. Almost equally important, to my mind, is that AI destroys trust. The internet and other social media can be (are being?) taken over by fakes, bots and dishonest actors, with no way of identifying them. We know already that a lot of scientific and medical papers online can’t be trusted. The only people you can trust are the ones you physically know. And the implications of this are also revolutionary.
Interesting that Facebook (where many community organisations have a presence, used mostly by over-40s) has reams of posts that even a cursory glance will reveal are fake, and that a deeper dive will show originate in Vietnam, India, Turkey etc. Almost all the ‘nostalgia for Britain in the past’ posts are AI-generated and fake (and badly made), but of course the intended impact is solely the emotional one on the over-40s.
Haven’t we already been down this road with FOREX and other trading exchanges, where computers started making the decisions a few years ago?
Could someone with City experience comment?
The Israelis have been using algorithms to decide how many civilians (collateral damage) to kill when they attack one of their Gaza “targets” (a doctor, a paramedic or a journalist in their apartment). They claim to use human oversight, but as the human (an IDF soldier) may not regard the collateral damage as human, merely “Palestinian”, or Lebanese, or Syrian, they seem to err on the side of killing the target rather than sparing the lives of his wife and children and neighbours, so down comes the whole block.
Gaza shows us what algorithms are capable of, in the wrong hands, and how much they “care”.
AI needs a clear set of short, sharp, internationally adopted rules, with serious consequences if they are broken. 5, 7, 8 and 10 of the Ten Commandments might be a good place to start.
Negotiating 101, done by example: if you are telling someone they can’t have a bank loan, do not say that your manager made the decision, as they can demand to see him. Say that the loan committee made the decision: an entity that they cannot demand to see. AI will do that a lot, alas. People will suffer.
Do not confuse automation with AI. You want the robot to bolt these two panels together in the same way every time and not suddenly get inventive about it!
The world champion Go player does not know it is playing Go. How close are we to that? We are running along the bottom of the exponential curve for AI, and it changes so little that it is not measurable. When it hits the upward curve it will be so fast there will not be time to react.
I am terrified.
And you are probably right to be so.
Maybe it could look something like this:
1. AI shall serve humanity, not govern it.
2. Human authority shall always prevail over machine output.
3. AI shall not decide matters of life, liberty, or rights without human judgement.
4. Responsibility for AI use shall rest with humans, never with machines.
5. People shall be told when AI affects decisions about their lives.
6. Every person shall have the right to human review and appeal.
7. AI shall not be used to deceive, manipulate, or impersonate.
8. AI shall not enable unjust surveillance or profiling of populations.
9. Public good shall take precedence over profit or power in AI deployment.
10. AI shall remain a tool and never an authority.
I like them.
But how would we know whether they were enforced?
Start with a non-binding set of principles for the UK, but written in global language, not banging on about British values.
Then seek wider international support, via bodies including the UN. Again, non-binding at this stage.
Then put in place UK legislation to give the principles legal force here. Set an example for the World.
Then seek international support for a global legal framework.
The main danger doesn’t come from AI’s intelligence, but from humans seeking to avoid accountability and do bad things.
Now all we need is a Champion.
It’s a start.
No harm in trying
I’ve sent a letter to the FT including the principles and a brief explanation. Perhaps they’ll publish it, and perhaps I’ll never know as I don’t subscribe!
Thanks
We have already seen Rishi Sunak blame the computer.
https://www.theguardian.com/politics/2022/may/13/rishi-sunak-says-technical-problems-stopped-him-raising-benefits-more
AI algorithms will turn the UK into a massive “computer says no” bear pit.
Another thoughtful and prescient article from one of journalism’s deeply influential and inspiring writers - her critique of AI:
What does matter is that we are beset with the ideology of maximising having while minimising doing. This has long been capitalism’s narrative and is now also technology’s. It is an ideology that steals from us relationships and connections and eventually our selves.
What technology takes from us – and how to take it back
https://www.theguardian.com/news/ng-interactive/2026/jan/29/what-technology-takes-from-us-and-how-to-take-it-back?CMP=Share_iOSApp_Other
Like the tech bros themselves, AI lacks wisdom. Wisdom requires understanding what is involved in the flourishing – and suffering – of people, society and the planet. That includes care. As you say.
Thanks