Artificial intelligence is often presented as a growth driver. I argue the opposite risk is emerging.
AI-driven cost-cutting threatens jobs and demand, with the risk of a GDP decline, while shortages of chips, energy, water, and grid capacity threaten higher prices and inflation across the whole economy. Existing economic policy tools can't resolve either problem.
This video shows why relying on central banks to solve our crises won't work this time, and why ignoring unemployment, inequality, and infrastructure will deepen recessionary pressures rather than prevent them.
This is the audio version:
This is the transcript:
I've talked about artificial intelligence many times on this channel.
I've talked about the fact that I think it is going to cause an economic crash several times.
And every time I look at this issue, I come to the same conclusion, which is that things are worse than I previously thought. And that's not a mistake; that's the result of rational analysis.
Most commentary that I read on AI is technologically enthusiastic, and maybe that is appropriate. I'm not arguing that there are going to be no uses for AI; indeed, I've found some. My point is that the analysis of AI in economic terms remains naive. The scale of the risks now accumulating is so significant that I have to make yet another video on this subject, and I genuinely think it's worth your while to hear this one through, because for the first time, I'm going to pull four strands of thought together in a way that suggests AI is really dangerous to our economic well-being right now. All these themes interact. Together, they point to something serious: AI is becoming a source of serious systemic economic risk.
One of those themes is that financial instability is going to arise from the asset bubble that we know is going on right now, and this is something I've discussed before. There is nothing new in that, but I think that the risk is getting bigger.
The second risk I want to talk about is unemployment risk, and again, this is something I've mentioned before, but it is getting more serious.
The third is a risk to inflation. And I don't think I have discussed this as part of these linked themes before, and so I want to add that into the narrative.
And of course, the fourth is the infrastructure constraints, which I discussed in a video recently, but which now need to be wedded into the common themes that I'm developing here.
So let's look at those four claims and recognise that they do not exist in isolation. They all exist within the political economy of technological change, and when combined they reinforce each other. That's why we're ending up with systemic risk. They're not isolated. They are literally threatening to drag down the whole of our economy.
So, strand one of my concerns is about asset bubbles. The global equity markets remain absurdly high. They are at record levels. We've already seen them go beyond the peaks at the end of 2025, in the first week or so of 2026, and so far, therefore, there has been no global reaction to the overinflated value of AI company shares. But, it remains the case that these valuations assume perfect outcomes for all the investment that is being made in AI, and there is simply very little evidence to suggest that this is going to happen.
The investment that is pouring into data centres, into chips, and into energy-intensive infrastructure is not sustainable. I'm going to be exploring that a little later in this video, but the point is, at some point, this is going to be realised, and when it is, it is going to become apparent that much of the current spending on these issues is likely to be wasted. And when we're talking about waste, we're not talking about the odd dollar or pound, here or there. We are talking potentially about trillions of pounds being wasted, sums so vast that we are discussing the entire income of whole countries going to waste as a consequence of what is happening. This is why I believe the share crash will happen.
But worse than that, what we now know, and it's becoming increasingly apparent, is that there is a very high degree of risk of contagion when that share crash happens. People are borrowing to buy the shares in AI companies. If the value of those shares falls, the collateral provided for the loans made to those buyers will collapse in value, banks will not be able to recover their money, and there will be a market crash, not just an AI share price crash. And this is now just a question of when, not if.
What is more, it is a matter of when the shadow banking system will also collapse, because the shadow banking system has provided half of the money that is likely to have been invested in these shares. Shadow banking is the parallel banking system that exists outside regulation: private equity funds, hedge funds, and so on, who act as lenders without being subject to banking regulation. They, too, are at enormous risk, and if anything at greater risk than the banking system itself, because the lack of regulation leaves them smaller margins for error. That means that massive recessionary pressure is likely to arise because of this crash.
But, even without a crash, my point is that this recessionary pressure is going to exist. So far, all commentators agree that AI is creating increased profit potential for large companies, but not by creating new products. The profit potential comes from cutting costs, and cutting costs primarily means reducing the number of staff employed or the rates paid to them, because the staff retained will be of a lower grade than those who are sacked.
The point is that key people in our society will be made redundant, and that is going to have massive knock-on effects.
It's going to have a knock-on effect with regard to confidence, and that means that people will save rather than spend.
It's going to have a knock-on effect because people quite literally will not be in employment, and rising unemployment is a trend we are already seeing in the UK and maybe in the USA, although there, the data is of lower quality right now because of the government shutdown in late 2025.
What we do know is that overall household demand is falling, and the level of credit to support that demand is rising because incomes are already under threat.
As a consequence, the demand on government spending is going to rise because there will be unemployment, and the multiplier effects within the economy are going to weaken. In other words, there will be less money coming into the economy from people in employment, and therefore less for businesses to receive by way of income and to spend onwards to their own employees and on investment. All of this creates a downward spiral of decline within the economy. And cost-cutting, whether it comes from AI or anything else, always results in recessionary pressure, and that is going to give rise to forecasts of stagnation.
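The downward spiral described above is the familiar multiplier effect, and it can be sketched numerically. This is a minimal illustration, not a forecast: the marginal propensity to consume of 0.8 and the £1bn figure are assumed purely for the example.

```python
# Illustrative Keynesian multiplier: how an initial fall in spending
# cascades through successive rounds of reduced income.
def total_demand_change(initial_change: float, mpc: float, rounds: int = 100) -> float:
    """Sum the knock-on effects of a spending change.

    mpc is the marginal propensity to consume: the share of each
    pound of income that is spent onwards rather than saved.
    """
    total = 0.0
    change = initial_change
    for _ in range(rounds):
        total += change
        change *= mpc  # each round, only mpc of the income is re-spent
    return total

# An illustrative £1bn cut in wages, with an assumed mpc of 0.8,
# shrinks total demand by roughly £5bn, because the closed-form
# multiplier is 1 / (1 - mpc) = 5.
print(round(total_demand_change(-1.0, 0.8), 2))  # -5.0
```

The point the arithmetic makes is that the final loss of demand is several times the initial cut, which is why cost-cutting at scale is recessionary.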
We're already hearing of these. People are talking about it. Job losses are expected in the UK now. I've talked about this already this year, and there are serious merchant banks claiming that this is the inevitable outcome for our economy in 2026. That AI risks deepening recession is really something that we now have to accept as inevitable.
But this has another knock-on effect, and that is that if we do get a recession, government policy should react. The trouble is that government policy on these issues has been outsourced to the Bank of England, and they now have a terrible track record in dealing with these crises.
They missed 2008.
They've been lousy since 2021, when interest rates rose to supposedly counter a threat of inflation that had nothing to do with interest rates at all.
And this time I'm expecting them to miss the signs of increasing pressure on employment. They will instead look at the pressure of increased borrowing and say that what worries them is exuberance in the market, with AI share prices rising, rather than the threat to the real economy reflected in rising unemployment. As a consequence, they're not going to cut interest rates fast enough.
This is where a real policy failure looms. History suggests the Bank of England always reacts too slowly, and I believe they will maintain interest rates far too high for too long, which will deepen any downturn we're going to get.
This is monetary policy failure in action, in other words. Monetary policy is unable to deal with this sort of crisis. It is up to the government to react to a threat to employment, not the Bank of England, but at present, because we're outsourcing that policy to the Bank of England, we are going to make things worse rather than better as a result of deliberate policy choice.
That, therefore, leads me to my fourth concern, which is the other reason why I think the Bank of England will get this wrong: what we know is that AI is going to stoke inflationary pressure in 2026. The Bank of England will see this in direct contrast to the threats arising from lower employment rates, and it will prefer to tackle inflation, even though there is nothing interest rates can do about the fact that a worldwide shortage of AI chips is expected to push up the cost of IT this year by up to 20%. This is a new inflationary pressure imported into the UK, because we make few of these chips; most are imported. The result will be that prices will rise for consumers in a way that the Bank of England will react to by keeping interest rates too high, which will be absolutely adverse for the economy itself.
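The way a sector-specific price shock like this feeds into headline inflation can be sketched with simple weighted arithmetic. The basket weight below is an assumed figure for illustration only, not an official statistic.

```python
# How a sector price shock feeds into headline inflation, weighted by
# that sector's share of the consumer basket. The weight used here is
# assumed for illustration, not an official figure.
def contribution_to_cpi(price_rise: float, basket_weight: float) -> float:
    """Fractional contribution a sector's price rise makes to headline CPI."""
    return price_rise * basket_weight

# A 20% rise in IT costs with an assumed 2.5% basket weight adds
# about 0.5 percentage points to headline inflation.
print(contribution_to_cpi(0.20, 0.025) * 100)  # 0.5
```

Half a percentage point may sound modest, but it is a pure import cost, which is exactly the kind of inflation interest rates cannot touch.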
Coupled to that, there will of course be environmental pressure on both energy and water, and both of those are going to keep prices high as well, wholly inappropriately, because somebody has got to pay for the new grid required to meet the increase in electricity demand, which may well grow by more than 10% per annum in the UK, and that cost will fall on consumers and not on the AI companies.
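It is worth pausing on what 10% per annum growth actually implies, because compounding is easy to underestimate. A quick sketch:

```python
# Compounding illustration: how quickly demand doubles at a constant
# annual growth rate.
def years_to_double(growth_rate: float) -> int:
    """Whole years until demand at least doubles at a constant growth rate."""
    level, years = 1.0, 0
    while level < 2.0:
        level *= 1 + growth_rate
        years += 1
    return years

# At 10% a year, demand more than doubles within 8 years
# (1.1 ** 8 is roughly 2.14).
print(years_to_double(0.10))  # 8
```

In other words, if demand really does grow at that rate, the grid would need to carry roughly twice today's load within a decade, and that build-out is what consumers would be paying for.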
Water usage will also threaten increased water prices, and we're already suffering them.
The result is that we will see inflationary pressure from these sources as well, and the consequence will be twofold. First, we will see increased income inequality in the country as more and more people are forced to spend more of their disposable income on essential items, leaving less and less for anything else; that reduces the multiplier effect within the economy as discretionary expenditure falls, and so leads towards recession. Second, the rising price of technology will widen the digital divide in our society, because lower-income households will be excluded from access to markets and to government services as a result. And there is no sign that the government is planning any practical response to the increase in inequality that will arise as a consequence.
We are therefore having to face a really important question. Why is AI important enough for us to have to go through a recession, an economic crash, an increase in inequality, and the abuse of our planet? What is the justification for that to happen? And I genuinely don't know the answer to that question.
As I've said, I use AI; it's useful, but is it that useful? Is it worth spending this much money on something that we do not even know what it will supply as yet?
Shouldn't we be going slower?
Shouldn't we be working out why we need this thing before we actually throw such money at it?
Shouldn't the subscription models being used for AI be thoroughly tested, because they currently look fragile, before we hang our hat on them and presume the whole economy can be run on the basis of what they are offering?
The fact is, we don't know the answer to any of those questions.
Like all technologies, AI will reshape part of the economy without doubt, but almost certainly far more slowly than the promoters claim.
My suggestion is simple: let's go slowly then. Let's walk quietly in the direction of AI, but let's not crash the economy on the way to doing that, because the costs are arriving now but the benefits aren't, and that's critical. The costs will fall unevenly, and they will fall hardest on those least able to bear them, and that is not accidental. It does, of course, reflect power and its distribution within our society, but this means that we are facing a crisis created by a failure to sequence events inside the political economy. Technology is arriving before the benefits. The costs are arising before the income. The crisis will happen before we see any upside.
This is a standard story of an economic crisis. It's a story of economic risk gone out of control. It's systemic, and it demands urgent attention, because if we don't give it that, the coincidence of these four strands of failure will produce an economic crisis: a stock market bubble, a resulting banking crash, an unemployment crisis, and an affordability crisis, with an environmental crisis on top of that. All of those things will literally lead us to some form of major meltdown. This isn't incidental; this is an absolute certainty, and we need to take action. But where is the talk of this in government planning? Nowhere, and it's that which worries me.
My question is: do our politicians care? Do they care more about AI and the promise of growth than they do about us and the risk to our well-being? It looks as though they do, and that's another reason why our current economic system has to be radically transformed. Our priorities are all wrong. We have to put people first.
What do you think? There's a poll down below.
Poll
Taking further action
If you want to write a letter to your MP on the issues raised in this blog post, there is a ChatGPT prompt to assist you in doing so, with full instructions, here.
One word of warning, though: please ensure you have the correct MP. ChatGPT can get it wrong.
Comments
When commenting, please take note of this blog's comment policy, which is available here. Contravening this policy will result in comments being deleted before or after initial publication at the editor's sole discretion and without explanation being required or offered.
Thanks for reading this post.
There are links to this blog's glossary in the above post that explain technical terms used in it. Follow them for more explanations.
You can subscribe to this blog's daily email here.
I am afraid that I can't vote for all 4 on the poll, so I had to choose one option.
Dusting off my PhD in the bleeding obvious, though: who on earth is differently sane enough to borrow money to buy shares – or to offer to lend it?
Complete Madness
The inflation costs you speak of are being widely reported elsewhere – for example the photographic community who use chips for memory etc.
I agree that really, the introduction of AI is advancing too quickly and this increases risk (too much risk also always puts up the costs, the two go hand in hand). We need to slow down. The markets have not thought it through.
I worry because while AI is quietly (and all too rapidly) replacing jobs, it is also reshaping the physical environment and infrastructure. Amazon’s warehouses, for example, are becoming people-free zones, kept cold and light-free. AI-driven automation and robotics are becoming a sizeable chunk of the workforce. Jobs lost will never come back. Our critical infrastructure, and our lives, are being reorganised to include Tech, and to exclude people. Because AI designs these systems, the pace of change will skyrocket. We won’t be able to adapt fast enough.
Louis Halpern’s book “You are all fired by an algorithm” crunches numbers which are beyond frightening: see https://halpern.co/
We will too rapidly become dependent on smartphones/AI. We won’t see it coming. And because Austerity has hollowed out pre-digital institutions, we won’t be able to go back. It feels like a trap to me.
So I fear a point of no return. If we knew how close to this we are, I think we would be backing off as quickly as possible.
So I agree. I also believe we should be treading very, very slowly. Even pause.
Related: I also fear the rapid decline of critical thinking skills. Even highly skilled and experienced doctors were significantly worse at detection of cancer after using AI detection tools for 6 months: https://www.medscape.com/viewarticle/ai-use-causing-endoscopists-lose-their-skills-2025a1000mcn
We really don’t know how it will impact us at all. And our world leaders’ silence on this issue is frankly terrifying. What’s our goal? Where’s the plan? Where are the contingency plans?
I’d like it to be paused. So we can take stock, predict and plan. Engage brains!
Thanks
ISTM that AI has the potential to adversely affect everything that we should be protecting right now.
My omnibus summary of your post about the potential effects (when not if) is:
-A widespread financial crash when AI bubble bursts threatening banks as well as investors – so affecting all of us.
-Energy & water demand, from 2 national grids and corporate shills that can’t supply it.
-Unemployment, that reduces the spending power of the unemployed or re-employed but underpaid public causing further recession.
-Increased carbon footprint – a BIG increase.
-Specific resource shortages causing inflation in chips needed for OTHER things.
We’ve deliberately created a UK financial and services economy (banks and barristas) -both vulnerable to the above. We can and should be using government’s spending power NOW, to shift towards sustainable production (stuff we need) and supporting lower income people’s spending power (money).
In my little corner, I’ve switched off the AI element of DuckDuckGo, my search engine of choice, so it doesn’t put an AI summary at the top of every set of search results. Maybe it saves an ounce or two of carbon to add to the 12.79 tonnes my (Chinese) solar panels & battery have saved in the last 5 years.
Do either Starmer or Reform UK Ltd. have a plan? Of course not. But we do.
Thanks
Very well put Richard – I think that it really is time to step back from the markets (if you are in them) and wait and see (as hopeless as market timing can be). I’d rather have 3% of something rather than 100% of nothing. And we have the news about the prosecution of Powell to digest as well.
“The investment that is pouring into data centres, into chips, and into energy-intensive infrastructure is not sustainable.”
It is also not realisable. Example: Texas: proposed data centres just in Texas will require 200GW of new power generation. Putting this into perspective, peak electricity demand in the UK in winter is +/- 49GW. Trump seems to be well on track to ban renewables, which leaves fossil (forget about nukes – can’t build em fast enough) = gas turbines. The current CMOS-based paradigm for A.I. is coming to an end – if only for the reasons given in the previous sentence – not enough power. Which leaves quantum computing.
The situation now is not so different to that in 1890 and steam engines. The ones powering ships tended to shake themselves to pieces after a relatively short time. Parsons solved that problem. I have no doubt that the equivalent to Parsons will solve the “how do we maintain a quantum state”. In the late spring you can see the quantum state maintained in every leaf you look at (photons liberating electrons to power the leaf’s processes). Perhaps quantum biology will supply the answers, which in turn begs the question: then what?
Much to think about.
Thanks.
Given you believe, and I agree with you, that there will be an AI-caused economic crash, and that AI contributes very negatively to climate change challenges regarding water and power generation, do you think it is appropriate for you to recommend the use of ChatGPT (which by your own admission above is unreliable)?
I don’t believe you should be promoting the use of AI from any vendor.
To quote/paraphrase Cory Doctorow: “AI can’t do your job, but an AI-snake oil salesman can convince your boss it can.” and those left with jobs will become “human crumple-zones” to be scapegoated when AI messes up.
I don’t agree AI has no uses.
Saying don’t use it is aking to saying in the past don’t use IT. or word processors, or spell chckers or the web, and so on.
The choice is how to use it – and stay in charge.
Maybe there are useful edge cases for AI. But ChatGPT is an LLM; basically, all they do is guess (based on probability) what the next word in a sentence will be. They're guessing machines, and it's on them that the hype and bubble is founded.
Any useful AI around after the AI bubble pops (Ed Zitron has written much on their insane financials) won’t be of the ChatGPT kind. There will be a lot of chips that can be bought for pennies though.
I’ve written code for >25 years, I’ve seen web-hype before (I could see NFTs were a Ponzi Scheme; where are they now?).
I admire your writing, but, respectfully, I feel advocating the use of ChatGPT is contributing to, not ameliorating the problems AI is bringing forth.
Noted.
And we will have to disagree.
Condemning a tool in all situations to me seems very unwise – and ludicrous. It is not going away.
Your discussion, and the poll, leaves aside my big concern with AI (which is not to diminish the points you have covered, of course), and that is the damaging effect it is having on knowledge, thought, and truth. The hallucinations offered at the top of Google searches are a prime example: they commonly include a blatant untruth which is visible even before the drop-down is opened; there is no obvious process for assessing the credibility of the sources used (the “BBC approach” of giving equal balance to serious research and conspiracy theory is apparent); and there is now so much slop auto-generated on the Internet that I fear the AIs are actively feeding off each other’s imaginings.
The result is an assault on truth, original thinking, and reliable criticism which is already matching that of social media algorithms.
Paul
Here’s a little more perspective on what’s happening in the AI field from the Guardian:-
https://www.theguardian.com/business/2026/jan/12/jd-sports-shoppers-buy-ai-platforms-chatgpt-microsoft-copilot-us
AI will “do it all”.
The final sentence of that Guardian article is the most important one.
If AI had any intelligence it would be telling its bosses to set it up in places where the heat generated can be put to good use, and not be thrown away in cooling water. Keeping people warm with district heating might be the best bit of the enterprise !!
🙂
I’ve heard that open ai buying so much memory wasn’t actually to use it but to prevent competitions form using it. It’s deliberately creating shortages and stocking inflation.
A lot of the memory and GPUs being bought don’t have anywhere to go yet.
Given investors generally want above-inflation returns, this seems entirely self-defeating.
Do you have a source?