Project Syndicate published an article by Simon Johnson and Piero Novelli about a month ago in which they discussed AI.
This is a recurring theme of mine right now, because it is becoming increasingly apparent how destructive this technology is going to be, at least in the short term, and maybe way beyond that.
Among their various arguments, one stood out to me. It was this:
How exactly will this technology be used? Conversations with senior executives of large-cap corporations across traditional sectors – companies commonly presumed to provide high demand for AI solutions – confirm that while all expect to achieve significant savings and efficiencies from AI, almost none can highlight with confidence additional sources of revenue (such as new lines of business).
This is a staggering suggestion. What it says, in effect, is that there is nothing positive to be gained from the use of AI. No one knows whether it will add value by creating new worth. All they can say is that it might cut costs.
Even then, the phraseology is careful: the actual benefit will be in "efficiencies", which means increased productivity. The IT costs of these companies might rise. Their fuel bills might soar. So might any costs relating to their use of water. But none of that will matter, because they will be more efficient, a term which here means only that they will cut their labour costs. They will, in other words, shed people. That is the only thing that can be said with confidence about AI.
The authors admit it. They note:
If the people who are displaced by AI can quickly find new, productive, and (ideally) high-paying jobs, then we are on our way to an acceleration of productivity growth – with beneficial effects for living standards and public finances.
But, as they then conceded, the historical precedents for this are not strong. They might as well have said — although they did not — that the chance of this happening in the case of AI is low, because if no one has any clear idea how the $7 trillion of investment is going to create new products or revenue, and therefore genuine growth potential, and given that the scale of investment in AI will be so large that it crowds out investment elsewhere, then the likelihood that people displaced by AI will find new jobs, let alone quickly, looks to be remarkably low.
Despite that, the authors conclude:
[N]o country, company, or citizen anywhere will benefit from sitting on the sidelines. It might feel safer to do nothing now and wait for better versions of the technology to emerge, but that is no way to build skills for the future and create more good jobs.
The conclusion clearly does not flow from the arguments implicit in the article its authors have just written. Simon Johnson might be a Nobel laureate in economics, but there is no joining up of the dots here. Join those dots, and what we see is:
- A tech stock market bubble that will, inevitably, burst.
- Massive planned investment, much of which will never make a return because no one knows how it will create revenue.
- Significant growth in unemployment, leading to a recession.
- Simultaneous inflation, as the cost of chips is forecast to increase by 20% in 2026, with massive spillover effects for all consumers.
- Follow-on inflation in the cost of electricity and water as AI seeks to consume all that is available, and much more besides, leading to crises in the supply of both, and price hikes that will leave the lives of many in peril, not least because of physical supply shortages.
And all this for the only identifiable goal of concentrating wealth further, whilst destroying human capital, societal capital, and environmental capital.
The question that should have been asked is, why is this risk worth taking? This did not seem to occur to the authors in question. Their answer was:
The path of technology can be shaped, and the path of the AI revolution is being shaped now. From canals and railways through to the internet age, a hard but simple lesson stands out: If you, your company, or your country sits it out and waits for the dust to settle, you may not get what you want and need from the technology.
What they did not say was, stop this madness now, when it has no proven worth.
When economists can stand back, think, and look at the big picture and say just that, they might add value.
When they stand on the sidelines, presuming that markets are uncontrollable when that is not the case, they add nothing while watching the destruction.
AI on the scale now envisaged is heading to create a recession on a scale hard to imagine, whilst simultaneously destroying much of real value. Why is it so hard for so many to spot that when it is the only obvious conclusion to draw from the evidence now available?
Taking further action
If you want to write a letter to your MP on the issues raised in this blog post, there is a ChatGPT prompt to assist you in doing so, with full instructions, here.
One word of warning, though: please ensure you have the correct MP. ChatGPT can get it wrong.
Comments
I have used AI for stock forecasts – or, more correctly, for what needs to change in a given operating environment for a stock to do well. That said, background knowledge was needed to keep the AI on the straight and narrow. I can see it driving analysts out of business. Probably already happening, but with the caveat: rubbish in, rubbish out.
In my own area of interest – local power systems – the forecasting and machine-learning powers of AI (with respect to weather) are very promising and will allow the correct "temporal positioning" of resources.
I could see AI driving notaries out of business, along with other mostly admin-related work. That said, one will always need a human to deal with exceptions. The problem being that most companies will try to get away with none. Overall, I don't see a happy future: humans get over-excited about a given tech and go bonkers – e.g. the railway mania of the 1830s and 1840s in the UK.
Is the past any guide? Did scribes find new and more productive careers after the invention of the printing press? How about agricultural workers after enclosure and the mechanisation of agriculture? Hand loom operators after the mechanisation of textiles, or carters and wheelwrights and ostlers after the invention of the steam locomotive or the automobile? Or did most of them end up slaving for poverty wages in factories and dying in urban slums?
Perhaps there are new careers we cannot imagine before they exist, like social media influencers and AI prompt engineers and search engine optimisers. Can we avoid wealth being sucked up by an increasingly small percentage of the elite, with everyone else left in penury? Who is actually going to buy the products of these “more efficient” companies if no one has a job?
I am sure there will be new jobs. Of course there will be. When I graduated I could not have imagined doing what I do now.
But the transition is slow, and costly.
And the problem with AI is existential. What is it for? Railways were for travel. Electricity for power. Looms for clothing. What is the need AI is meant to answer? I don't know.
AI is, undoubtedly, a bubble. Partly that is due to its inappropriate name. There is nothing intelligent about the LLMs (large language models) that are often called AI. I say that despite the seeming intelligence demonstrated by an interaction with one. Essentially, an LLM generates a sequence of words, one at a time, based on a sequence of input words. Once it has generated one word, it adds this to its input sequence and repeats the process, one word at a time. In this sense it is like a hugely more sophisticated version of repeated word completion.
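The loop described above can be sketched in a few lines of Python. This is only a toy: where a real LLM uses a neural network to score every possible next token, a made-up lookup table stands in here, with an invented vocabulary, purely to show the shape of the "generate a word, feed it back in, repeat" process.

```python
# A toy sketch of the autoregressive loop described above.
# A real LLM predicts the next token with a neural network over a huge
# vocabulary; here a hypothetical lookup table stands in for that
# model, purely for illustration.

def generate(prompt, next_word, max_tokens=10):
    """Repeatedly predict the next word and append it to the input:
    the repeat-one-word-at-a-time loop described in the text."""
    words = list(prompt)
    for _ in range(max_tokens):
        nxt = next_word.get(words[-1])  # "prediction" via lookup
        if nxt is None:                 # no continuation known: stop
            break
        words.append(nxt)               # the output becomes part of the input
    return " ".join(words)

# An invented toy "model": maps each word to its most likely successor.
TOY_MODEL = {"the": "cat", "cat": "sat", "sat": "down"}

print(generate(["the"], TOY_MODEL))  # -> "the cat sat down"
```

The key point the sketch makes is the feedback: each generated word is appended to the input before the next prediction, which is why an LLM has no plan beyond the next token.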
The value of AI is that it uses a huge training set to generate its output. In this way it essentially "remembers" vast amounts of the internet on which it is trained. So it is great at natural-language searches of the internet. I find this a significant time saver. And it is good at summarising text, something noted in this blog. It is helpful in programming: a recent UK government study found it increased programming productivity (reduced the time taken) by 10%. Personally, I suspect this is an underestimate.
Programming is probably a key application for AI. That is because there is an enormous amount of code on the internet. That makes it useful, but it is also an Achilles heel: the code on which it is trained is of variable quality, some of it not very good. An AI trained on that code cannot be assumed to generate high-quality code. It is therefore useful as an assistant, a guide, for an experienced programmer. But I certainly wouldn't trust it to generate code on its own.
AI is here to stay. It is just an algorithm. It is not really intelligent. It will be useful. I think it does have proven value, though less than is claimed, and coupled with significant downsides. It will cause disruption and job losses, particularly for the young. I expect these losses will prove to be a mistake: they will prevent young people gaining experience, and will later be reversed. But people will be hurt in the meantime. It is vastly over-hyped. It does need regulation (as the pornographic output from Grok demonstrates). It is crowding out more useful investment. And I doubt it will be as transformative as is claimed.
Excellent summary, Tim. Machine learning will never be intelligent.
Garbage in – Garbage out.
Try listening to the AI-generated Nero Wolfe stories provided on YouTube. The limitations of the "machine learning" are worrying. There are no "new" plots. The circle of names gets ever smaller. The AI cannot maintain a plot and repeats itself over and over (three endings, one after the other). It needs human critical oversight to stop it generating garbage.
Only one thing seems certain, which is that AI will accelerate job losses overall, and that the disruptions to their respective populations will have to be addressed by governments in 'macro' terms. In addition to consideration of 'human rights'-type questions around UBI/job guarantees, the core question they need to ask is: 'What human skills are likely to be needed to augment AI?'
In certain sectors, disruption will move more slowly, e.g. construction. While fixed manufacturing will swiftly be 'populated' by AI-trained and -maintained robots, the site-based/mobile nature of construction and house renovation means that the cost of robotics is not justified. Factor in the need for major housebuilding programmes, and one medium-term focus of training/employment policy seems evident. The health and social care sector is another obvious field to which skills/training budgets should be directed.
In my sector, education, it is likely that the pastoral/social element of a teacher's job will be foregrounded. AIs will shape/individualise a student's learning, while educators' jobs (and hence the focus of their training) will involve the prompting of reflection; social interaction and development; group problem-solving and mediation, etc.
In one of the fields I am interested in, international communication, AI is already ending the jobs of interpreters and translators. It can produce, for example, simultaneous translation and subtitles far more quickly than people can. But it makes mistakes, either by 'mishearing' the original speech or by mis-selecting the appropriate translation. However, human interpreters also make mistakes, and it may be no worse.
For written copy, I think it is worse than human provision – it often seems to mistake the 'tone' of a piece, producing too formal or too informal a translation, and it can be dreadful when it misunderstands technical or professional terms.
One effect of this that worries me hugely is the likely reduction in children learning foreign languages when, to my mind, we should teach foreign languages as soon as a child meets an educator. A child's innate ability to learn language is extended into later life if it is exercised as frequently as possible. Direct communication with others must be a Good Thing.
‘What is the need AI meant to answer? I don’t know’.
Correct 10 out of 10. Also Mike Parr’s GIGO example (Garbage In/Garbage Out).
In terms of customer service environments, John Seddon argues that only human beings can deal with the infinite variety in the lives of other human beings, not AI customer relationship management systems alone.
My view has always been that AI is all about reducing the costs of an operation, insisted upon by owners of companies who have no idea what their businesses actually do beyond the bottom-line return (and maybe the actuaries of pension schemes maintaining the value of their funds). The lack of domain knowledge by business owners will help AI. For some organisations, AI investment will be coated with 'invest to save' garlands.
AI also potentially changes labour relations and gives companies immense economic power to chuck humans on the scrapheap for someone else to deal with. The hope is that an age-old problem will be solved – who to pay, and how many – and governments could be held to ransom. I think that what will happen initially is that AI will be used as a panacea, which means it will be applied inappropriately in the first instance, in an attempt by capital to claim more of the economic output.
In short, AI was aimed at answering a question clothed in greed. The developers knew what they were doing and its consequences – as has been admitted before by major industry players, when they knew their systems were naughtily collecting behavioural data on users: 'they did it anyway' (watch 'The Social Dilemma' on Netflix, and 'Screened Out' on Amazon).
Why did the developers do it? Their own greed? They fell in love with what they were doing? They were confronted with immense power and decided to err on the side of the bad, many of them under immense pressure from funders for ROI? They were idealistic? Short of money? Geeky and not really socialised? It could be all these things and much more. And some are already having regrets at what they have created…
Thanks
Apologies for the delay
Hey, no problem. ‘Hope you had a nice day though.
It was good – apart from Trump gatecrashing all discussion.
With people losing jobs on the scale that the internet did to travel agents and banks, I’m wondering if there is anything the government could do to make it easier for small businesses to be created and to take people on. Perhaps a devolved minimum wage, lower employment taxes and an easier planning system than now. There will be other features of previous societies which could be looked at when earlier technologies released labour into the market, but I doubt this government is capable of seeing that.
I sent my MP an enquiry about this using ChatGPT as recommended, and he replied using AI. The cheek of it.
None of those things will help small business. They create no real impediments now.
Much of the idea of AI – and the term itself – is overblown and not new. Big businesses have had waves of ‘efficiencies’ and layoffs anyway such as the closure of many bank branches owing to the use of pretty mundane digital technologies.
The supposed new AI is ‘generative’ – producing new material from the mass of human-generated material and the increase in computing power. This seems to have scope as we see in the creative/knowledge area but we’ve used supercomputers for years to crunch scientific data and no doubt things will just get faster and smarter in areas such as drug discovery.
So your conclusion that this is a bubble, chasing evolution not revolution, seems accurate to me, although it may just correct rather than crash, because the world is changing anyway owing to other technologies (hopefully your care-economics concerns) and cultural changes, and AI is not necessarily the be-all and end-all.
Try asking AI a question about Italian grammar. It sort of gets some of it right, but invents stuff rather than saying it hasn't got a clue. Then, when you call it out, it says something along the lines of "you're right, I'll know better next time".
You ask what is it for. To confuse people, maybe.
Absolutely, Richard, we have to stop this madness now.
Blogs such as Gil Duran's "The Nerd Reich" chart the descent of the Tech billionaires into a ketamine-deranged fantasy world of sci-fi fascism in which they cast themselves as the saviours of humanity.
Adam Becker's book, "More Everything Forever: AI Overlords, Space Empires and Silicon Valley's Crusade to Control Humanity", brings all the strands of their ketamine-fuelled project together in one volume.
The purpose of AI is to create 'Artificial General Intelligence', which will require orders of magnitude more energy and cooling. But the hope of the Tech broliopoly is that it will allow them to cheat death by merging their own consciousness with machines. For them this represents the highest of all altruistic endeavours: the end of the world, and thus of humanity, is inevitable unless a future is created for it within cyberspace. But doing that will not only require using up all the resources of the planet, but also a continuing process of burning its way through the solar system, the galaxy, and so on.
This has spawned the creation of a philosophical justification, "Effective Altruism", which they have used their enormous wealth to spread through the philosophy departments of the world's most prestigious universities: see, for example, "The Centre for Effective Altruism" at Oxford University.
Sorry if this sounds "bat-shit" crazy, but they are. The only spin-offs generative AI offers are the destruction of jobs; improvements in medical technologies that can be used for good or the most dreadful evil; and the opportunity to make money from giving misogynists the tools to bring their fantasies to life without actually torturing or killing anyone.
They have to be stopped.
Noted. Thanks.