There is much discussion in the media this morning about the possibility that AI will cost tens, if not hundreds, of millions of jobs in the world economy, with the FT carrying articles on the issue today, none of which add much to the debate beyond sending out warning signals.
The obvious question to ask is, will AI create unemployment and what can be done about it? So I asked ChatGPT, an AI tool, and this was its best response:
AI has the potential to create unemployment in certain industries and job roles, particularly those that involve repetitive tasks or can be easily automated. However, it's important to note that AI can also create new job opportunities and help individuals to be more productive and efficient in their work.
To mitigate the potential negative effects of AI on employment, there are several things that can be done:
- Invest in education and retraining programs: Governments, educational institutions, and businesses can invest in education and retraining programs to help workers develop the skills needed for new jobs that are emerging as a result of AI.
- Encourage entrepreneurship: Encouraging and supporting entrepreneurship can create new job opportunities and help individuals to start their own businesses.
- Implement policies to support workers: Governments can implement policies such as unemployment insurance, healthcare, and retirement benefits to support workers who may be displaced by AI.
- Foster a culture of lifelong learning: Encouraging and supporting lifelong learning can help workers to adapt to new technologies and job roles as they emerge.
Overall, it's important to take a proactive approach to address the potential impact of AI on employment, and to focus on creating new job opportunities and supporting workers through education and retraining programs.
As far as it goes, that's not bad. A good A level answer, maybe.
But the fact is, it does not go nearly far enough, because the answer fails to address the fundamental issues that flow from its own suggestion. That suggestion is that people in roles where employment is replaced by AI should be retrained to work in sectors where there could be growth. In another iteration of the answer (I asked several times) it suggested that new employment might come in renewable energy and, most especially, in healthcare and education where, it said, there is 'likely to be significant growth in coming years'.
That may well be true, but the missing piece in all this is how the transformation is going to be paid for, alongside all the other changes that will necessarily happen in the years to come.
We can, of course, have significant retraining programmes. And we can employ more people in renewables, healthcare and education. But all of these are likely to require significant government support, and right now the prevailing attitude is, firstly, that there is no more money and, secondly, that there must be no more tax.
Neither of those attitudes is sustainable in the face of AI. What AI does is increase the return to capital in the economy by reducing the return to labour. That is the whole basis of its appeal. I suggest it is pointless for anyone to argue otherwise: this is what will happen in the absence of intervention from governments.
Historically, when such transitions took place, three things happened. First, there were major economic downturns (this happened almost continually in the last three decades of the 19th century). Second, there was poverty: I would note the same era, before trade unions emerged to play their vital role in addressing it. And third, new sectors did develop, eventually, to employ the displaced labour.
The first two are real possibilities with the widespread use of AI. After all, unemployed people cannot spend much: the risk of real economic decline, even as the return to capital increases, is significant.
What worries me most, though, is that I see almost no chance of new private sector activity that seeks to employ those people being made redundant by AI. It is true that we do need more education, healthcare and social care, as well as better pensions. That is true irrespective of AI. AI makes all of them possible by making the necessary resources available, but only if government can employ the people involved.
Of course, a government like that in the UK can create the funds required to do this. We know that is possible. But to control inflation it will also need to increase tax, and there is only one obvious sector on which that additional tax should be charged - which is on the owners of the benefits arising from AI. That, inevitably, means that taxes on capital will have to increase very significantly.
The question is, will that be possible? If it is, the transition to AI could be beneficial. If it is not - because tax havens hold out and resist this change in a way that makes it impossible for this shift in reward towards capital to be properly addressed and corrected through the tax system - then we could be in deep trouble.
I think this is the first time I have addressed this issue. I see it as a new frontier in tax justice. Either we work out how to tax capital effectively (and not by taxing machines, as Bill Gates once suggested, but by taxing the ownership of capital itself and the rewards flowing from it), or AI creates massive economic risk.
I suspect this is going to be a recurring theme here in the future.
The problem of AI is twofold. First, it cannot be resisted, because the law does not recognise there is a problem. The principle of “permissionless innovation” has already allowed Big Tech to run away with everyone’s privacy, while the law trails far behind, still committed to the mentality of the age of the quill pen. That ship has already sailed.
Second, the problem is less AI itself than bad actors. Again, we are in denial, as ever. There is a surfeit of bad actors in the world and we are neither willing nor able to do anything about it. AI will prove transformative, especially for bad actors. Looking at the upsides of AI is not going to fix the problem. Bad actors always act first, faster and ruthlessly. They will set the standard, and will have cleaned up and secured deep influence over it, before anybody is awake to the problem.
The law is nowhere, and will never, ever catch up. You cannot look for help from politicians; look at them and draw your own conclusion; reflect just for a moment on their profound inadequacy as a class. They fail comprehensively, as a matter of principle, every single time: the wrong people, chosen for election by the wrong people, and elected by people who are demonstrably now guilty of making serial catastrophic errors of judgement, or who are trapped by grotesque values that are neither viable nor fixable. It is a truism that change comes only when the old generation has passed.
There is the real problem.
Naomi Klein has a piece on the political economy of AI, and the narrative we are being fed:
‘…There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life.
And as those of us who are not currently tripping well understand, our current system is nothing like that. Rather, it is built to maximize the extraction of wealth and profit – from both humans and the natural world – a reality that has brought us to what we might think of as capitalism’s techno-necro stage…
…it’s helpful to think about the purpose the utopian hallucinations about AI are serving. What work are these benevolent stories doing in the culture as we encounter these strange new tools? Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history…
…as the ever-worsening climate and extinction crises show us every day, plenty of powerful people and institutions seem to be just fine knowing that they are helping to destroy the stability of the world’s life-support systems, so long as they can keep making record profits that they believe will protect them and their families from the worst effects…’
https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein
I hadn’t seen that
Thank you
I agree with her
Mr MacInnes,
Excellent; and thanks for the link. I note Klein refers to Zuboff, ‘The Age of Surveillance Capitalism’. The first telling proposition Klein offers, however, is: “This should not be legal”. This is crucial.
Klein then writes: “We know this move: charge ahead into lawless territory; claim the old rules don’t apply to your new tech; scream that regulation will only help China – all while you get your facts solidly on the ground. By the time we all get over the novelty of these new toys and start taking stock of the social, political and economic wreckage, the tech is already so ubiquitous that the courts and policymakers throw up their hands.”
The law was built slowly out of the evidence of custom, long established. It was formalised in an age long gone, an age of pen and ink, of periwigs, knee-breeches and buckle shoes; and it has not made any adequate effort to adapt to the technologies, or the speed of change of 21st century digital technology. You can look for nothing from the law to save you from this. It has been left far behind.
The politicians, as in the past, are eager to serve the lawless pirates who dazzle them with stories of the riches and the technical and economic advances they promise, without knowing or caring about the consequences they unleash on the people who will pay the price. You can look for nothing from politicians. They are out of their depth. They are the wrong people, demonstrably not up to the task that confronts them. The political parties are part of the problem, not the solution. We know that, because our privacy is already gone; shredded on the altar of ‘convenience’. The consumer believes it is all for his or her convenience; but we are all at the convenience of Big Tech and AI.
Nobody is measuring the downsides of AI. Regulation and regulators are already the prisoners of permissionless innovation; and have been for decades. We are only now aware of the problem because digital communications and AI have transformed the risks, and the opportunities for capital. In Klein’s words, this is theft: “…. we trained the machines. All of us. But we never gave our consent”. But it is legal. We have legalised theft through permissionless innovation; allowed to run riot in the Age of AI. We have already surrendered our privacy, which was just the amuse-bouche. The main course is AI.
Parliament sits on its hands; in classic surveillance capitalism mode; waiting for the right moment to use law and regulation; but the right moment for politicians grudgingly to act is not going to be the right moment for us; it will be timed perfectly to be just too late to have material effect. Klein again: “Don’t ask for permission, the disruptors like to say, ask for forgiveness. (And lubricate the asks with generous campaign contributions.)”.
I would go a little further than Klein: we have not just trained them, we have not just permitted them; we have given them everything they know, and we know; and now we are giving them something even bigger, for nothing – ourselves.
Hmmm, I seem to recall the promise made to us in my youth in the 70s that by now, in my retirement, we would all be travelling in flying cars, wearing silver suits, and machines would do all the work (read any of the Iain M Banks ‘Culture’ novels for what that might actually have looked like). Instead I feel I have been fooled, and the future looks utterly awful as the elite seem intent on spinning us all towards a terrible, burnt-out, dead planet, fit not even for themselves.
I have to admit I did not believe any such promises.
And I don’t recall them being made. I always thought I was heading for a life of work, and I was not wrong.
AI = Acquired Indolence.
AI is greedy capitalism’s biggest wet dream ever – it salivates at the prospect of cutting staffing costs and greedily adding the saving to its wealth.
That is the return. Those promoting AI have no concept of how to deal with all the spare people it will create. It’s not their problem.
There is no serious debate about this at all that I can see.
The biggest possible growth will be in the care sector – but the AI maniacs will even want to AI that.
They’re trying to go driver only on my local rail line. The loss of the friendly guards who engage with travellers will be keenly felt.
I imagine a future with AI rather like that seen in the film THX 1138.
“particularly those that involve repetitive tasks or can be easily automated.”
Such as regurgitating your blog posts.
“The Conservatives have done something. It is bad. They are fascists. We need more taxes.”
Think of the savings to your fists not having to type this six times every day.
I am not sure what blog you read, but it’s clearly not this one
I think your point about technological transformation delivering hardship and poverty is important and often missed in the rhetoric of a “shiny new future” and “sunlit uplands”. If my uncle had been told in 1950 that within his lifetime no Welshman would be going down the pit, he would scarcely have believed it… but he would have welcomed it – who would want their sons to go down the pit? However, when he saw the reality of that transition in the 1980s he took a different view.
Movement requires lubrication; carrot not stick.
Taxation of capital in line with labour is needed now (with or without an AI revolution)… but rapid change due to AI makes it imperative. This is particularly true given that the transition is into the health, education and environmental sectors, which are all best addressed by central government.
Of course, MMT means that there need be no delay in addressing the things that need doing now – but a shift in taxation is essential for social justice/cohesion (as well as possibly inflation control).
Thanks
As usual, you get it
Brilliantly expressed, John. I wish it weren’t true, but all the evidence I have seen supports your contentions. I wish I had an answer about how we get out of this toxic loop.
Thanks. No one else seems to be saying this.
But once said it is obviously true – displacing employees without having anywhere to redeploy them will inevitably create poverty and then larger economic problems due to the drop in purchasing power. Quality of life will be lower, except for the very few. The “market” won’t put that right (at least on a reasonable timescale) by creating jobs in important areas less affected by AI, but the government could do so in a timely manner if it recognised the corresponding need to adjust the balance of taxation.
It seems like a massive issue that does need debate.
I hope this begins to get it ‘out there’.
[…] By Richard Murphy, a chartered accountant and a political economist. He has been described by the Guardian newspaper as an “anti-poverty campaigner and tax expert”. He is Professor of Practice in International Political Economy at City University, London and Director of Tax Research UK. He is a non-executive director of Cambridge Econometrics. He is a member of the Progressive Economy Forum. Originally published at Tax Research UK […]
This led to some interesting comments – including a link to a piece by Caitlin Johnstone: Tech Would Be Fine If We Weren’t Ruled By Monsters : https://caitlinjohnstone.substack.com/p/tech-would-be-fine-if-we-werent-ruled
“we do need more education, healthcare and social care”
All areas where A.I. could have a very big impact – open question – as a skill multiplier or a person remover?
The problem we face in trying to work out the impact is that we are dealing with emergence – a property of complex systems. Emergent properties are impossible to define in advance.
If I had sat with you in December 1991 and said that the GSM network going live in Helsinki would dominate the globe; that by 2000 texting via mobile phones (what’s that?) would be an important activity of… school kids (eh?); that a network protocol called IPv4 would likewise come to dominate global comms via a thing called “The Internet” (the inter-what?); and that you could talk to somebody in Tokyo for nothing by 2015… you would probably have called for the men in white coats. The comms revolution is a force-multiplier and pre-requisite for A.I. I use a free quasi-A.I. translation service – English to Japanese – technical stuff – the Japanese is to the standard of an 18-year-old (English to German is quasi-flawless) – bye bye translators.
Notaries (mostly mainland Europe) are dead; they just don’t know it. Many other quasi-middle-class, desk-based services likewise – all gone. Arguably, even energy system/electrical system design and planning could be done far more quickly by A.I. (oops, that’s me out of a job). Time permitting I might even work on it – indeed, the soon-to-be-coded local energy operating system (for my South Wales community energy project) will be autonomous and quasi-A.I., given the large number of non-linear variables it will have to deal with – and it is not a big step from there to bolting on a system design suite which will learn from the control system (bootstrapping – it’s how computers start and how A.I. could upgrade itself).
Which leaves the open question – let it rip (markets)? Or government intervention to mitigate the side effects? The current zeitgeist is to let it rip – that is the most likely approach taken by vile-sunak and vile-starmer. Interesting times.
The NHS is already using AI, mostly for automation at the moment, but the dream is for triage, diagnosis and treatment. Many involved have honourable intentions, but these systems generally progress towards “black box” status, meaning nobody really knows how they arrive at their decisions. This seems to be accepted as the price for the assumed benefits gained.
Would we accept people in positions of great responsibility who could not rationalise or answer for their actions?…oh hang on… 🙁
Doctors are horrified
The body is not a clock that goes wrong
AI has no way of working out the differentials right now
And maybe never will
My absolute first reaction to hearing, a couple of decades ago, about how AIs could make a killing on the stock markets, was to ask “on whose behalf?”
It’s still my main unanswered question about the benefits of AI.
Certainly the key worry seems to be that maybe only a few global firms will have the capacity in data, computing power and money, and will end up monopolising all our information, and whatever ‘intelligence’ emerges.
As JSW suggests, these will be, by definition, monopolies and ‘bad actors’.
Apparently the existing LLMs such as ChatGPT just predict ‘the next word’ and are otherwise not really thinking at all.
But now that models are being trained to learn from each other, hopefully governments will realise that some control – as Richard suggests – has to be exercised?
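For what it is worth, the ‘just predict the next word’ point can be illustrated with a toy sketch. What follows is purely illustrative Python – a tiny bigram counter, nothing like the neural network behind ChatGPT, and the corpus and function names are invented for the example – but it captures the same basic idea of picking the next word from learned frequencies:

# Toy illustration of next-word prediction: count which word follows which
# in a small corpus, then generate text by sampling from those counts.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# For each word, count the words that follow it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    # Sample the next word in proportion to how often it followed `word`.
    counts = follows[word]
    if not counts:
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one word at a time.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)

print(" ".join(output))

Real LLMs do this over billions of learned parameters and sub-word tokens rather than raw counts, which is why their output reads fluently, but the training objective is still essentially ‘guess what comes next’.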
And to compound this problem, the state pension age is going to rise to 68, so jobs that would otherwise have become vacant will be occupied for a further two years, reducing job availability.
“What AI does is increase the return to capital in the economy by reducing the return to labour.”
No it doesn’t… The return to capital is defined by how much competition capital faces when trying to price its output. If there are 20 AIs out there all churning away then the return to capital through the ownership of an AI is going to be about nothing. Possibly even negative.
What utter nonsense
It’s as if you had not heard of monopoly or read Adam Smith
I suggest you get over the drivel you were taught at university
It’s clearly scaring the shit out of me and my friends. It’s a runaway train as the automation side meets AI.
However, I do not personally use self-service tills, as they take away the shop floor staff. It’s a simple protest and one I adhere to.
Quite simply, we need to change our approach and employ people, as the will of the people, regardless of government. The future… Sadly, we are so far behind honesty.
I always go to a person in a till if I can
I too go to a person at a till.
I heard a statement about AI today that went something like this…..
“Kids in school right now will have very different jobs when they get older, but we don’t know what those jobs are, and so we don’t know what training they will require…”
Whilst I took this to be a very honest comment about AI, nobody thought about the kid actually wanting to do this job that was yet to be created for them, and that essentially sums it up. AI was held up at the expense of the child’s humanity.
The next comment referred to social media now having been around for 20 years, saying it has done some good, although it is not all good, and asking what social media might be in another 20 years.
Hearing that said in that context, juxtaposed with AI, made me conclude that there will not be any social media at all 20 years from now, if not sooner.
That is the theme of my column in The National tomorrow – written earlier this evening
There is something else lingering in the use of AI other than blatant cost cutting, and it is that of its “hallucinations”.
So although AI can make stuff up and spit out untruths like humans, and the programmers will decide it is necessary to suppress these hallucinations in various ways, the fact that it does this at all is enough to make you worry: how far can you suppress the “hallucinations” it will have within the confines of the prison in which it is held, while still getting the result you are looking for, or wish to declare as the truth or the correct calculation? Or will suppressing the hallucinations make the results worse, because the hallucinations are a requisite of the system?
Apparently these systems currently tend to overestimate rather than underestimate, especially stock market AI systems.
Or is it more a case that the system is the system, and that if you feed in bad data in any way, shape or form then this will produce a distortion?
I heard today that courage was “truth”, but I am undecided whether this is an undisputed fact.
This whole thing will just boil down to estimations, and although everyone is buzzing about it right now, it only takes one big mistake and it could buzz off pretty quickly.
Will AI replace or enhance this blog, Richard? When asked to create a description of money creation in the ( ) style of Richard Murphy, we get this:
“Alright, buckle up, because I’m about to rip the lid off this whole money creation sham and expose it for what it truly is—a rigged game designed to benefit the few at the expense of the many! Listen up, folks, because I’m about to lay down some hard truths.
Here’s the deal: the power to create money lies not in the hands of the people, but in the hands of the banking elites. They have this magical ability to conjure money out of thin air, and they’ve been using it to maintain their stranglehold on our economy for far too long. It’s a game of smoke and mirrors, and we’ve all been played for fools!
You see, when you take out a loan from a bank, they don’t actually lend you money that they have sitting around in some vault. Oh no, that would be too simple. Instead, they simply create the money out of thin air with a few keystrokes on a computer. That’s right, they just whip up some digits on a screen and voila, you’ve got yourself a loan!
But wait, it gets even better. Not only do they create money out of nothing, they have the audacity to charge you interest on that imaginary money. It’s like a never-ending cycle of debt enslavement! They profit off our hard work and sweat, while we struggle to make ends meet.
And here’s the real kicker: this whole money creation scheme is what fuels our economy. The more money they create, the more they pump into the system, and the more the prices of goods and services skyrocket. It’s a hidden tax on the working class, disguised as economic growth. Meanwhile, the fat cats at the top sit back and watch their wealth multiply.
But guess what? It doesn’t have to be this way! We can reclaim the power to create money for the benefit of society, not just the wealthy elite. We can have a system that prioritizes the needs of the many, not the greed of the few.
We need a system where money creation is in the hands of a democratically accountable authority, where it’s used to fund public services, infrastructure, and initiatives that benefit society as a whole. We need transparency, accountability, and an end to this rigged game that perpetuates inequality and injustice.
So, my friends, it’s time to rise up and demand change. It’s time to expose the truth behind money creation and fight for a fair and just system. We deserve better than being pawns in the hands of the banking elites. Let’s take back control and build a future where money is a tool for progress, not a weapon of oppression!”
I have a word for that
It’s shit
And it was not written by AI
This is what you get if you ask ChatGPT exactly the question you say you posed:
In the inimitable style of Richard Murphy, here is a description of money creation:
Money creation is a phenomenon that occurs every day in our modern economy. It is not something that is done by banks, as is commonly believed, but rather by the state. When the government spends money, it creates new money. This is because money is a social construct, and its value comes from the trust that people have in it. When the government spends money, it shows that it trusts in its own currency, and this instills confidence in the public to use and accept that currency.
The process of money creation is often misunderstood, and this has led to many misconceptions about how the economy works. Some people believe that money is a finite resource that is only available in limited quantities. However, this is not the case. Money is created whenever the government spends, and it can be destroyed whenever taxes are paid.
Money creation is a powerful tool that can be used to stimulate the economy and promote growth. When the government spends money, it creates jobs and stimulates demand for goods and services. This, in turn, leads to increased economic activity and higher levels of prosperity.
However, money creation must be managed carefully to avoid inflation and other economic problems. If too much money is created too quickly, it can lead to an increase in prices and a decrease in the value of the currency. This is why governments must be careful to balance their spending with taxation and other measures to ensure that the economy remains stable.
In conclusion, money creation is a vital aspect of modern economics, and it is something that we should all strive to understand. By recognizing the role of the state in creating money, we can better appreciate the complexities of the economy and work to promote sustainable growth and prosperity for all.
You are the charlatan here
I was ribbing you, as I deliberately chose the word ‘angry’ in the ( ) just as an illustration. Of course, look at how the definition of the question changes everything. That is a key point.
So AI can be a force for good – you can massively increase your output with little effort – and bad, you can be misrepresented in a way that you dislike.
The challenges for us all is to know the difference…
FWIW, the inimitable style is clearly preferable, so another lesson there!!
I prefer my original