AI does not care – and it is hard-coding neoliberalism

We are told that artificial intelligence can replace human judgment. It cannot.

In this video, I explain why AI does not care, why it cannot exercise judgment, and why deploying it at scale embeds neoliberal values into decision-making by design.

Algorithms prioritise efficiency, cost reduction and rule-following. Judgment requires care, context, responsibility and democratic accountability.

This is not a technical debate. It is a political choice about the kind of economy and society we want to live in.


This is the transcript:


AI does not care. What it does do is reinforce neoliberalism, and that's what this video is about.

Let me be clear at the outset. AI cannot, in my opinion, exercise judgment, whatever Big Tech claims. That matters because, if it cannot judge, it cannot care, and when deployed in today's economy it does something worse still: it hard-codes neoliberalism into decision-making as if neoliberal thought reflected sound judgment, which it does not. This is the political danger implicit in AI.

We are told that AI can replicate human judgment; that it can decide more objectively and more efficiently, and even without the bias that we as human beings bring to our decision-making. That claim is now being used to justify removing humans from decisions that shape people's lives, but that is not progress; it is ideology disguised as technology.

Judgment is not the same as optimisation. It involves weighing competing values. It involves context, ambiguity, relationships, and responsibility. Above all, judgment involves care for people and not just outcomes.

What AI does is something quite different. It doesn't judge. It uses algorithms, and that is inevitable. A large language model uses the structure of language itself to work out how pre-specified objectives can be achieved. In other words, algorithms rule the roost, and those algorithms are not programmed - particularly in the uses to which AI is going to be put - to question the objectives that are set for them. As a result, AI cannot care who is harmed by its decisions, and that is precisely why AI is dangerous. It decides without understanding meaning.

Algorithms are designed by people. Let's be clear about that. We're not talking about something completely remote from us humans. The trouble is that the algorithms likely to be used by AI encode assumptions about efficiency, cost minimisation, risk mitigation and productivity - which implies reducing labour costs - and about compliance with the algorithm rather than with the overarching judgment that a human being brings to their decision-making. These are not neutral values. They will inevitably reinforce the values of neoliberal economics.
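To make that concrete, here is a deliberately simplified sketch - entirely hypothetical, written in Python, and quoting no real system - of what hard-coding those values as an objective looks like. Every field, weight and threshold below is invented for illustration:

    # A hypothetical scoring algorithm of the kind described above.
    # Nothing here is taken from any real system; the fields and weights
    # are invented to show how values get baked into an objective.

    def score_claim(claim: dict) -> float:
        # The objective is fixed before the system ever runs: minimise
        # cost and risk. The code optimises it; it cannot question it.
        COST_WEIGHT = -1.0  # every pound of projected cost lowers the score
        RISK_WEIGHT = -2.0  # each "risk" flag is penalised twice as heavily
        # Note what is absent: no term for need, hardship or care exists.
        return (COST_WEIGHT * claim["projected_cost"]
                + RISK_WEIGHT * claim["risk_flags"])

    def decide(claims: list[dict], threshold: float) -> list[dict]:
        # People become rows to be ranked; discretion becomes a cut-off.
        return [c for c in sorted(claims, key=score_claim, reverse=True)
                if score_claim(c) >= threshold]

    # Two claimants with identical needs but different cost to the system:
    claims = [
        {"name": "A", "projected_cost": 100.0, "risk_flags": 0},
        {"name": "B", "projected_cost": 300.0, "ris_flags": 1} if False else
        {"name": "B", "projected_cost": 300.0, "risk_flags": 1},
    ]
    print([c["name"] for c in decide(claims, threshold=-250.0)])  # ['A']

The point is not the arithmetic. The point is that care never appears in the objective, so no amount of optimisation of that objective can put it back.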

And to make it clear, when AI is deployed at scale, it will prioritise efficiency over well-being.

It will treat people as data points.

It will replace discretion with rule-following.

And it will frame social problems as technical ones.

And this, in my opinion, is neoliberalism automated, and not challenged.

And let's be candid: neoliberalism has always sought to strip care out of decision-making, to replace judgment with rules, and to deny responsibility by invoking "the market" as the arbiter of what is of value. AI simply completes this project by allowing decision-makers to say, "It wasn't us: the system decided." In other words, the robots will be put in charge by choice, at the command of those who pick the algorithm.

And AI is already embedded in things like social security eligibility decisions, benefits sanctions, healthcare triage, recruitment and performance management, and policing and surveillance. These are exactly the areas where judgment and care are indispensable, and yet these are the areas where the use of AI is driving a retreat from the politics of care.

When a human being makes a bad decision, we can challenge it, we can appeal it, we can hold someone responsible.

When AI makes a bad decision in the future, responsibility will be diffused, accountability will be denied, and democracy will be weakened. That suits neoliberalism perfectly.

AI does not, then, merely reflect existing power structures. It stabilises them. It normalises them. It makes neoliberal decision-making appear objective, inevitable and beyond challenge. That, I think, is the real political function of AI, and it is deeply dangerous to humankind that this is happening.

A political economy of care requires human judgment, ethical responsibility, democratic oversight, and institutions designed for well-being.

AI could assist humans to do that. Let's not be unrealistic: it is a valuable tool. But it can only achieve that goal if humans remain responsible for the decisions and accept accountability for the outcomes. Those who are currently promoting AI are challenging that hierarchy of power. They are saying, "Pass the decision-making over to the computer and get rid of the human involvement."

My conclusions from this are unequivocal. AI cannot exercise judgment. It does not understand what it means to be human, and it never will. Because it cannot judge, it cannot care. AI systems embed and reinforce neoliberal values by design. Delegating decisions to algorithms entrenches inequality and removes accountability. A caring economy requires human-led, democratically accountable decision-making, and that is something AI can't deliver. I think that's a matter of fact.

This is not a technical debate. It is about political choice, and we must choose care. AI can't. Those who program AI can. Those who direct how AI is used can. But the danger is that AI is going to have neoliberalism embedded within it, that those who make that choice will say the system dictates the outcomes, and that its decisions are what we must live with. I don't want to live in that world.

What do you think? There's a poll down below.


Poll

Should AI ever make decisions about people’s lives?



