{"id":89495,"date":"2026-01-29T07:04:12","date_gmt":"2026-01-29T07:04:12","guid":{"rendered":"https:\/\/www.taxresearch.org.uk\/Blog\/?p=89495"},"modified":"2026-01-29T07:04:12","modified_gmt":"2026-01-29T07:04:12","slug":"ai-does-not-care-and-it-is-hard-coding-neoliberalism","status":"publish","type":"post","link":"https:\/\/www.taxresearch.org.uk\/Blog\/2026\/01\/29\/ai-does-not-care-and-it-is-hard-coding-neoliberalism\/","title":{"rendered":"AI does not care \u2013 and it is hard-coding neoliberalism"},"content":{"rendered":"<p class=\"p3\">We are told that artificial intelligence can replace human judgment. It cannot.<\/p>\n<p class=\"p3\">In this video, I explain why AI does not care, why it cannot exercise judgment, and why deploying it at scale embeds neoliberal values into decision-making by design.<\/p>\n<p class=\"p3\">Algorithms prioritise efficiency, cost reduction and rule-following. Judgment requires care, context, responsibility and democratic accountability.<\/p>\n<p class=\"p3\">This is not a technical debate. It is a political choice about the kind of economy and society we want to live in.<\/p>\n<p><iframe loading=\"lazy\" title=\"YouTube video player\" src=\"https:\/\/www.youtube.com\/embed\/KbG4URN12K8?si=nBKEeD-kIjja725U\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<p>This is the audio version:<\/p>\n<p><iframe loading=\"lazy\" style=\"border: none; min-width: min(100%, 430px); height: 150px;\" title=\"AI does not care\" src=\"https:\/\/www.podbean.com\/player-v2\/?i=w7uzf-1a2f26a-pb&amp;from=pb6admin&amp;share=1&amp;download=1&amp;rtl=0&amp;fonts=Arial&amp;skin=f6f6f6&amp;font-color=auto&amp;logo_link=episode_page&amp;btn-skin=c73a3a\" width=\"100%\" height=\"150\" scrolling=\"no\" data-name=\"pb-iframe-player\"><\/iframe><\/p>\n<p>This is the transcript:<\/p>\n<hr \/>\n<p>AI does not care. 
What it does do is reinforce neoliberalism, and that's what this video is about.<\/p>\n<p>Let me be clear at the outset. AI cannot, in my opinion, exercise judgment, whatever Big Tech claims. That matters because, if it cannot judge, it cannot care, and when deployed in today's economy, it does something worse still. It hard-codes neoliberalism into decision-making as if that neoliberal thought reflected sound judgment, and it does not. This is the political danger implicit in AI.<\/p>\n<p>We are told that AI can replicate human judgment; that it can decide more objectively and more efficiently, and even without the bias that we as human beings bring to our decision-making. That claim is now being used to justify removing people from decisions that shape people's lives, but that is not progress; it is ideology disguised as technology.<\/p>\n<p>Judgment is not the same as optimisation. It involves weighing competing values. It involves context, ambiguity, relationships, and responsibility. Above all, judgment involves care for people, and not just outcomes.<\/p>\n<p>What AI does is something quite different. It doesn't judge. It uses algorithms. That is inevitable. It is a large language model that uses the structure of language itself to decide how pre-specified objectives can be achieved by the models it uses. In other words, algorithms rule the roost, and those algorithms are not programmed - particularly in the uses to which AI is going to be put - to question the objectives set for it. As a result, AI cannot care who is harmed by it, and that is precisely why AI is dangerous. It decides without understanding meaning.<\/p>\n<p>Algorithms are designed by people. Let's be clear about that. 
We're not talking about something completely remote from us humans. The trouble is that the algorithms likely to be used by AI encode assumptions about efficiency, cost minimisation, risk mitigation, productivity - which implies reducing labour costs - and compliance with the algorithm, rather than with the overarching judgment that a human being brings to their decision-making. These are not neutral values. These are codes that will inevitably reinforce the values of neoliberal economics.<\/p>\n<p>And to make it clear, when AI is deployed at scale, it will prioritise efficiency over well-being.<\/p>\n<p>It will treat people as data points.<\/p>\n<p>It will replace discretion with rule-following.<\/p>\n<p>And it will frame social problems as technical ones.<\/p>\n<p>This, in my opinion, is neoliberalism automated, not challenged.<\/p>\n<p>And let's be candid: neoliberalism has always sought to strip care out of decision-making, to replace judgment with rules, and to deny responsibility by invoking \"the market\" as the arbiter of what is of value. AI simply completes this project by allowing decision-makers to say, \"It wasn't us: the system decided.\" The robots will be put in charge by choice, in other words, at the command of those who pick the algorithm.<\/p>\n<p>And AI is already embedded in things like social security eligibility, benefits sanctions decision-making, healthcare triage, recruitment and performance management assessment, and policing and surveillance. 
These are exactly the areas where judgment and care are indispensable, and yet the use of AI in these areas represents a retreat from the politics of care.<\/p>\n<p>When a human being makes a bad decision, we can challenge it, we can appeal it, and we can hold someone responsible.<\/p>\n<p>When AI makes a bad decision in the future, responsibility will be diffused, accountability will be denied, and democracy will be weakened. That suits neoliberalism perfectly.<\/p>\n<p>AI does not, then, merely reflect existing power structures. It stabilises them. It normalises them. It makes neoliberal decision-making appear inevitable, objective and unavoidable. That, I think, is the real political function of AI, and it is deeply dangerous to humankind that this is happening. A political economy of care requires human judgment, ethical responsibility, democratic oversight, and institutions designed for well-being.<\/p>\n<p>AI could assist humans to do that. Let's not be unrealistic: it is a valuable tool, but it can only achieve that goal if humans remain responsible for the decisions and accept accountability for the outcomes. Those who are currently promoting AI are challenging that hierarchy of power. They are saying, \"Pass the decision-making over to the computer and get rid of the human involvement.\"<\/p>\n<p>My conclusions from this are unequivocal. AI cannot exercise judgment. It does not understand what it means to be human, and it never will, because it cannot judge, and so it cannot care. AI systems embed and reinforce neoliberal values by design. Delegating decisions to algorithms entrenches inequality and removes accountability. A caring economy requires human-led, democratically accountable decision-making, and that is something AI can't deliver. I think that's a matter of fact.<\/p>\n<p>This is not a technical debate. It's about political choice, and we must choose care. AI can't. Those who program AI can. 
Those who direct how AI is used can. But the danger is that AI will have neoliberalism embedded within it, that those who choose to deploy it will claim its decisions dictate the outcomes we must live with, and I don't want to live in that world.<\/p>\n<p>What do you think? There's a poll down below.<\/p>\n<hr \/>\n<p>Poll<\/p>\n<div id=\"polls-305\" class=\"wp-polls\">\n\t<form id=\"polls_form_305\" class=\"wp-polls-form\" action=\"\/Blog\/index.php\" method=\"post\">\n\t\t<p style=\"display: none;\"><input type=\"hidden\" id=\"poll_305_nonce\" name=\"wp-polls-nonce\" value=\"2a703401f0\" \/><\/p>\n\t\t<p style=\"display: none;\"><input type=\"hidden\" name=\"poll_id\" value=\"305\" \/><\/p>\n\t\t<p style=\"text-align: center;\"><strong>Should AI ever make decisions about people\u2019s lives?<\/strong><\/p><div id=\"polls-305-ans\" class=\"wp-polls-ans\"><ul class=\"wp-polls-ul\">\n\t\t<li><input type=\"radio\" id=\"poll-answer-1360\" name=\"poll_305\" value=\"1360\" \/> <label for=\"poll-answer-1360\">Never<\/label><\/li>\n\t\t<li><input type=\"radio\" id=\"poll-answer-1361\" name=\"poll_305\" value=\"1361\" \/> <label for=\"poll-answer-1361\">Only with human oversight<\/label><\/li>\n\t\t<li><input type=\"radio\" id=\"poll-answer-1362\" name=\"poll_305\" value=\"1362\" \/> <label for=\"poll-answer-1362\">Yes, for efficiency<\/label><\/li>\n\t\t<li><input type=\"radio\" id=\"poll-answer-1363\" name=\"poll_305\" value=\"1363\" \/> <label for=\"poll-answer-1363\">I\u2019m undecided<\/label><\/li>\n\t\t<\/ul><p style=\"text-align: center;\"><input type=\"button\" name=\"vote\" value=\"   Vote   \" class=\"Buttons\" onclick=\"poll_vote(305);\" \/><\/p><p style=\"text-align: center;\"><a href=\"#ViewPollResults\" onclick=\"poll_result(305); return false;\" title=\"View Results Of This Poll\">View Results<\/a><\/p><\/div>\n\t<\/form>\n<\/div>\n<div id=\"polls-305-loading\" class=\"wp-polls-loading\"><img loading=\"lazy\" decoding=\"async\" 
src=\"https:\/\/www.taxresearch.org.uk\/Blog\/wp-content\/plugins\/wp-polls\/images\/loading.gif\" width=\"16\" height=\"16\" alt=\"Loading ...\" title=\"Loading ...\" class=\"wp-polls-image\" \/>&nbsp;Loading ...<\/div>\n\n","protected":false},"excerpt":{"rendered":"<p>We are told that artificial intelligence can replace human judgment. It cannot. In this video, I explain why AI does not care, why it cannot<br \/><a class=\"moretag\" href=\"https:\/\/www.taxresearch.org.uk\/Blog\/2026\/01\/29\/ai-does-not-care-and-it-is-hard-coding-neoliberalism\/\"><em> Read the full article&#8230;<\/em><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[231,204,35,16,224,106],"tags":[],"class_list":["post-89495","post","type-post","status-publish","format-standard","hentry","category-ai","category-economic-justice","category-economics","category-ethics","category-neoliberalism","category-politics"],"_links":{"self":[{"href":"https:\/\/www.taxresearch.org.uk\/Blog\/wp-json\/wp\/v2\/posts\/89495","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.taxresearch.org.uk\/Blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.taxresearch.org.uk\/Blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.taxresearch.org.uk\/Blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.taxresearch.org.uk\/Blog\/wp-json\/wp\/v2\/comments?post=89495"}],"version-history":[{"count":6,"href":"https:\/\/www.taxresearch.org.uk\/Blog\/wp-json\/wp\/v2\/posts\/89495\/revisions"}],"predecessor-version":[{"id":89547,"href":"https:\/\/www.taxresearch.org.uk\/Blog\/wp-json\/wp\/v2\/posts\/89495\/revisions\/89547"}],"wp:attachment":[{"href":"https:\/\/www.taxresearch.org.uk\/Blog\/wp-json\/wp\/v2\/media?parent=89495"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"htt
ps:\/\/www.taxresearch.org.uk\/Blog\/wp-json\/wp\/v2\/categories?post=89495"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.taxresearch.org.uk\/Blog\/wp-json\/wp\/v2\/tags?post=89495"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}