This has been reported by the FT this morning:
US companies have sold $1.7tn of investment-grade bonds in 2025, a near-record sum stoked by a rush of borrowing to fund AI infrastructure that has spurred concerns over a debt glut.
This year's issuance has come as companies took advantage of relatively low borrowing costs to refinance their debt. But the debt sales, tracked by trade body Sifma through to the end of November, also reflect an AI borrowing boom as Big Tech groups tapped bond markets to fund data centres and the energy systems needed to power them.
They added:
What lies ahead: AI-related borrowing now accounts for around 30 per cent of net investment-grade issuance, according to Goldman Sachs, and is expected to grow in 2026, despite concerns over the level of debt being taken on by the AI “hyperscalers”.
There are four things to note.
First, pension funds require "investment grade" bonds. They're meant to be really safe, just as mortgage-backed securities were deemed to be in 2008. They may not be the equivalent of government bonds, but the risk of default is deemed to be low, and so the interest rate is at the lower end of the scale as well.
Second, because AI-related borrowing now represents around 30% of investment-grade issuance, your pension fund has little way of avoiding bonds issued by AI companies.
Third, no one in their right mind thinks that AI companies are low risk, unless, of course, they now have "too big to fail" status, meaning that governments would always be required to bail them out.
Fourth, your pension is now at risk from AI.
I make clear:
- I use AI.
- It's useful, but no more than that.
- Like just about everyone else, I genuinely struggle to see how it is transformative.
- AI may not be sustainable.
- It could even be deeply destructive for:
- Jobs
- Communities
- Water supplies
- Energy supplies
- The planet
- Long-term skill development
- Human well-being
The jury is out, in other words. But whether that proves true or not, and despite the fact that no one has yet found a genuinely profitable use for AI, your pension is probably linked to its success. That is not what I would choose - and that comment is not financial advice; it is simply not what I would choose.
Are you comfortable with that?
That's a question worth asking in a world where financial sense appears to be increasingly disconnected from reality in ways last seen in the run-up to 2008.
This is worth watching in 2026: the risk of a crash is far from over as yet.
Comments
Might A.I. also be vulnerable to subversion/corruption, either now or in the longer term?
Yes
Especially when it “learns” off itself.
A friend who has been looking into AI for a while feels that much of the scary rhetoric around AI is designed to panic governments into bailing out the insane over-investment and over-borrowing by the hyperscalers - making AI like the atom bomb: ‘We have to have it’. He says the ‘data centres’ planned for Scotland will obliterate all of our renewable energy gains and destroy our water cycle just when we need ecological stability more than ever.
Also, the pressure to embed AI into government software, such as at HMRC, is specifically designed to make the hyperscalers providing the AI with their data centres not only ‘too big to fail’ but so integral to government administration that we have to bail them out no matter what… Paranoid? Sadly, it strikes me as fairly plausible. I am told that, for cooling reasons, once they are up and running, the data centres must stay switched on, eating energy and water no matter what. Water and energy austerity for the rest of us?
Probably
Canals and railways were transformative, but most investors lost money. The internet was transformative, too.
AI might be amazing but still a bad investment.
Massive deficit spending plus enormous credit creation leaves too much money chasing too few assets, so the bubble keeps inflating….. until it doesn’t.
Clive
Not quite true of all railways.
Some of our railway companies were massively successful.
But I agree, many weren’t.
Studying all that as a teenager is what got me interested in all this…
I read hundreds of books on railway history at the time, and still do.
Richard
I bow to your railway expertise. Clearly, some railways were very successful and I suspect a basket of different equities might have been great; 9 might lose everything but the 10th one made up for it and more. Bond holders would have been absolutely hosed as 9 went bust and the 10th only returned capital plus interest.
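The payoff asymmetry described here can be made concrete with a toy calculation. This is a minimal Python sketch using invented numbers (the 15x winner, the 5% coupon, the ten-year term, and the function names are all assumptions for illustration, not historical data):

```python
# Toy model: a basket of 10 ventures, 9 of which fail completely and 1 of
# which succeeds. Equity holders capture the winner's full upside; bond
# holders are capped at capital plus interest, so one surviving bond cannot
# offset nine defaults.

def equity_basket_return(stake, winner_multiple, n_winners=1):
    """Payout when losing stakes return 0 and each winner returns stake * multiple."""
    return n_winners * stake * winner_multiple  # losers contribute nothing


def bond_basket_return(stake, coupon_rate, years, n_winners=1):
    """Best case for bonds: survivors repay principal plus simple interest, defaults pay 0."""
    return n_winners * stake * (1 + coupon_rate * years)


invested = 10 * 100  # 100 staked in each of 10 ventures
equity = equity_basket_return(100, winner_multiple=15)      # one 15x winner -> 1500
bonds = bond_basket_return(100, coupon_rate=0.05, years=10)  # one survivor -> 150

print(f"invested {invested}, equity basket pays {equity}, bond basket pays {bonds}")
```

On these assumed numbers, the equity basket returns 1,500 against 1,000 invested, while the bond basket returns only 150: the bond's upside is capped while its downside is not, which is the commenter's point about bondholders being "hosed".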
I (almost) understand the case for AI equity….. but all these bonds? No.
Agreed, entirely. The bonds are really irrational.
There is an AI bubble that will burst eventually. Like the dotcom boom, there will be some survivors or successors who come to dominate (Amazon, Facebook, Google, etc.) and many who fall by the wayside. Most of the AI companies will not succeed, but AI will be embedded everywhere.
It is going to be interesting to see how education adapts to pervasive AI. We still haven’t quite adapted maths education to deal with the pocket calculator, spending inordinate time on teaching and testing longhand arithmetic when computers do that so much better and quicker than most humans. Perhaps it is knowing how to use the tools that is more important than knowledge and recall of facts, or their expression in a continuous narrative.
Education is, ultimately, not about knowledge; it is about understanding.
AI has already been used in many places profitably – insurance, telephone systems, smart home assistants, medical imaging, document handling, pharmaceuticals, mechanical simulation and more.
People confuse AI with specifically Generative AI and/or the pursuit of Artificial General Intelligence.
There are a whole range of profitable uses of AI already making money. That doesn't mean that a lot, or even most, of the current AI hype companies won't fail - just as most dotcom boom companies failed.
If that period offers any lessons, most of the infrastructure providers (the LLM providers here) may burn billions and only a few will make a profit. Many will add AI to their systems expecting it to be a panacea, which it isn't. Some will have a specific use case where AI is but a means to an end, not the end itself, where the user experience and/or efficiency is the goal, finding a niche with some exclusivity (mainly by aiming to have unique data to beat me-too imitation).
There are going to be plenty of failures, but there will be profitable unicorn companies built along the way, too.
And what will they do?
Would a crash be a corrective to the AI bubble? As in, it’s basically necessary to return stock values to their ‘real’ value? When something like this happens… purely theoretically and assuming we have a political system motivated to do this… would there be a way to use this ‘moment’ to apply anti monopoly or anti trust laws to the gigantic tech companies involved? To make whatever bailout we will presumably apply conditional on breaking them up into multiple individual companies resident in discrete nation states and taxable as such according to local tax laws?
In purely economic terms, yes, a crash can act as a corrective, but it is a very blunt and socially costly one.
Asset bubbles, including the current AI-related surge, are driven by expectations, market power and excess liquidity rather than by underlying productive value. When those expectations unwind, prices fall back towards something closer to what the assets can actually earn. In that narrow sense, crashes do “reset” valuations. But they do so by destroying paper wealth indiscriminately, freezing credit, cutting investment and often pushing costs onto workers, pension funds and the public sector rather than onto those who inflated the bubble in the first place.
That is why crashes are not necessary in any natural-law sense. They are the consequence of policy failure: allowing speculative concentration and monopoly power to grow unchecked.
Your second question is the more important one. A crisis moment does create political leverage – if there is the will to use it. History shows this clearly. Antitrust in the US was strengthened after financial panics. Banking regulation followed crashes. The post-war settlement followed catastrophe.
So yes, in theory, a state could say:
– bailouts are conditional, not unconditional
– market power must be dismantled
– platforms must be structurally separated
– data, compute and infrastructure must be regulated as utilities
– firms operating across borders must be taxed where activity occurs
Breaking up tech monopolies, forcing functional separation (platform vs service vs infrastructure), and re-establishing national tax residence are all legally and economically feasible. What is lacking is not tools, but political courage.
The real danger is that we repeat the post-2008 pattern: socialise losses, protect incumbents, entrench monopoly power, and then tell the public “there was no alternative”. A crash does not guarantee reform. It merely opens a window. Whether it becomes a reset or a reinforcement of oligarchy depends entirely on who sets the conditions.
Thanks for those details. The major objection I anticipate hearing is that any action to ‘force’ the tech giants to restructure will contravene WTO and trade treaty regulations and obligations. Do we know a way we could do this without contravening any current treaties? For example if the bailout involved insolvent tech companies floating shares that individual governments co-ordinate together to buy, such that the value of those shares entails a more than 50% ownership with appropriate voting rights? So that governments could vote the changes through at shareholder level without having to go through lengthy democratic processes to change any currently existing international rules?
I think this is speculation beyond what is useful right now. Sorry.
I always have to laugh at how much the ‘corrections’ are shared socially – as if we are all in on it somehow – even the poor on our housing estates and temporary accommodation.
When you think about how much money is taken out of these bubbles and who retains those gains after the wreckage – you clearly see the key role of fascism and identity politics in obfuscating who these bubbles are for and what they actually do – a mass transfer of wealth from the bottom to the top.
The whole thing is reprehensible.
Looking back at the Railways
Only one ever repaid its investors, and that was the Wantage Tramway.
But what the railways did do, like canals, roads, electricity, phones, etc., was allow businesses to make profits.
Mines were able to sell their coal, seaside resorts expanded or were built from scratch, large industries were set up to supply them, and so on.
I suggest that, if nothing else, AI might help business profitability even if it isn't profitable itself.
I am not convinced only one ever repaid.
Many did. The LT&SR was sold at a considerable gain (more than 100% return on subscribed price) in 1912, for example.
Why do you say so only for the Wantage?