As the FT and many others in the media are noting today, the chaos resulting from a cyberattack on JLR - the Jaguar Land Rover car company - is ongoing. The company has now, in fact, been shut down for three weeks, and there is no reopening date on the horizon, as yet.
The government is becoming concerned about the economic impact.
Trade unions are demanding support for some of the suppliers to JLR, because it is believed that they are now at risk. They might lack sufficient capital to manage their cash flow and profitability over such a period. Jobs could be lost, and the risk is that JLR might not even be able to reopen once its computer problems are solved, because its supply chains may have collapsed in the meantime.
I admit, I think the world could survive without JLR. I can find no obvious need for any one of the cars that it produces, all of which waste the world's valuable resources in ways that are meant to signal the supposed status of the driver, but which come at a considerable cost to society at large, not least by helping destroy our planet at a cost to generations to come.
Leaving that aside, however, there is a deeper issue to note here. These attacks appear to be more commonplace now and more successful in the sense that they seem to be disabling their targets much more effectively than they did in the past. Whether it be the British Library, Marks & Spencer, or JLR, the chaos being created is significant, and the obvious question to ask is, why has this change in scale happened?
Let me make a suggestion which I have not seen anywhere else, and for which I have no evidence, but which seems to be a very obvious explanation for what is happening. Is it AI that is facilitating these attacks, making them more effective, and in the process creating ways to undermine the operation of our society in the long run?
To explain this, just look at what AI does. AI does not think. It has no understanding, no imagination, and no ethical sense. It does not create any form of truth. All it does is undertake pattern recognition and probability estimation on a scale previously unimaginable. That is it.
As a result, it can automate what is routine, identifying correlations in data that are too vast for humans to process. The result is that it can make logical predictions, although these may be incorrect. It can predict the next likely word, behaviour, or event. That's because all it does is interrogate data, and the required answers may not be in that data, or have low probabilities attached to them. However, it can also be asked to look for those outcomes in that case.
So, how difficult is it to imagine that AI is being used in these cyberattacks, seeking out the low probability risk that can then be exploited?
Ask a simple question. If you wanted to stage a cyber attack, in effect seeking to find the weaknesses in a massive database system used to support the operations of a company like JLR, why wouldn't you use AI to undertake the scale of repetitive checks necessary until the weakness in the system was identified, and the vulnerability exposed? And why wouldn't you use AI to plan and even execute the consequent attack? To be candid, you would, wouldn't you?
I make this point for several reasons.
First, no one seems to be saying this, and yet it seems obvious to me.
Second, this raises the question: why is nothing being done to stop this?
Third, and most importantly, in that case, are we seeing AI actually being used, like a cancer cell, to turn upon the host systems that have given rise to and use it, to then kill them?
Fourth, is it then possible, that just as Marx said that capitalism had the grounds for its own failure implicit within it, that the whole of the tech industry suffers from the same problem and what we are seeing is that tech might fail precisely because it has spawned - and is now spending hundreds of billions further developing - the very thing that might destroy it, and the way that we live based on that tech, all at the same time?
Fifth, given that the rate of change (the second differential) in tech is so fast, might it be the case that the collapse of a functioning economy could be much closer than we have ever imagined, with it being killed by the very thing that was meant to make it grow ever faster?
To conclude, is a bifurcation possible where AI ceases to be the hope for the future and instead becomes the threat to its existence in a way no one is currently imagining?
I offer the thought.
Comments are welcome.
Thanks for reading this post.
You can share this post on social media of your choice by clicking these icons:
There are links to this blog's glossary in the above post that explain technical terms used in it. Follow them for more explanations.
You can subscribe to this blog's daily email here.
And if you would like to support this blog you can, here:
I’m sure hackers and other cyber criminals are using AI, just as regular coders are (albeit many are sceptical about the quality of the resulting slop).
But hacking is often as much if not more of a psychological exercise than a technical one. The weakest link is often a human one, someone with a weak password, or who clicks on the wrong link, or divulges information to a fraudster that opens them or their employer to attack.
But you still have to find them.
I’ve been using Chat to perform quite complex analyses regarding the energy consumption in housing. To my surprise, it then offered me coding which I was able to drop directly into a website. I did so and it worked, but would have needed tweaking before offering to interested parties.
Subsequently, I talked with someone coding for web-based analyses in the medical field. He’s part of a much larger business team doing this and I told him about my Chat experience.
He informed me that they all use AI for their basic work and then tweak it as they go along – “it saves us hours of work”
So I suspect that you are right re cybercrime Richard.
AI is hungry for our questions and our acceptance or tweaking of it’s results
But as with the use of all tools, the ethics of the questions and results are in the hands and minds of the users. So with greedy people using this AI, the possibility of unintended consequences must be inherent in the vastness of the database and the ability of the tool to find ways to answer conceivable questions, but release unforeseen options into it’s expanding database
In my opinion, it’s not possible to effectively regulate AI for the benefit of the current societies in which we find ourselves. A new wild west is before us…..
I agree with that conclusion.
And it does code, I know. I have used it.
Perhaps not in the way you think. Automated hacking has been a thing for decades so ai won’t be a huge improvement there.
However, AI generated code is often less secure than that made by humans and can be deployed by people who don’t understand what they have generate.
People who use this at a large company will leave huge security holes. Which can then be expolted by hackers.
Might it be that anything humans can create, humans can subvert?
Might it help if the presentations, discussions and behaviours/actions related to A I featured both possible/probable subvertions as well as ditto benefits?
Ditto airport extentions?
“As a result, it can automate what is routine, identifying correlations in data that are too vast for humans to process. The result is that it can make logical predictions, although these may be incorrect. It can predict the next likely word, behaviour, or event.”
I think this is true. If you ask AI about money mechanics the way you frame questions about sovereign government’s ability to create its own spending money can determine diametrically opposed answers.
This reminds me of an article about life needing consciousness to use the configurable switches of cybernetics:-
https://www.davidabel.us/papers/The%20Cybernetic%20Cut%20&%20Config%20Switch%20SciTopic.pdf
I assume that the hackers in the case of Jaguar Land Rover have seriously messed with the configurable switches. I’m not sure it’s beyond the capabilities of human beings to stop such messing but certainly hacking needs more effort to limit it.
My understanding from rather limited research is that JLR are suffering a ransomware attack through a vulnerability in their SAP accounting software.
Once your critical data is encrypted, you can’t deal with suppliers or staff or customers, you don’t know what stock you have or work in progress. You can’t keep track of money or payments. You are blind. Total nightmare. Another illustration of the critical importance of accounting.
Similar to Co-Op and M&S earlier this year. But the differences between the outcomes are illustrative. Shutting down as soon as there is a problem to minimise the impact, like the Co-Op. The hackers have often been in place for week or months quietly gathering data and putting their own code in place before the bomb goes off, and you then need to rebuild from clean code and the last safe data back-up. That can take weeks or months, and meanwhile the problems ricochet up and down the supply chain. Having insurance in place to cover have cost (M&S lost hundreds of millions in lost sales but as I understand it got around £100m from their insurers, but JLR was not insured.)
Thanks
And agreed.
Cybersecurity is an intense but classic Arms Race, so any new ‘tool’ available pushes the envelope for attacks, but (probably) also enhances defensive capability. Use of AI in cyberattacks is taken very seriously indeed; I follow a couple of blogs on security and here’s just one posting from a couple of weeks ago about the use of Claude in a cyberattack. Apart from code assistance, it’s also the strategic support that is quite shocking.
(https://www.schneier.com/blog/archives/2025/09/generative-ai-as-a-cybercrime-assistant.html
Click on the tag “AI” at the end of that post to see related articles)
So the answer to the question “Does AI make cyberattacks more dangerous?” is “Yes”. Will this be a route to societal destruction, I think “No”, not least because AI also provides defensive tools.
But that doesn’t mean that AI won’t damage/destroy society. There have been not a few cries of alert from distinguished workers in the field, the latest from 200 researchers including 10 Nobel laureates:
“Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world,” … “AI could soon far surpass human capabilities and escalate risks such as engineered pandemics, widespread disinformation, large-scale manipulation of individuals including children, national and international security concerns, mass unemployment, and systematic human rights violations.”
(https://www.theregister.com/2025/09/23/ai_un_controls/)
Thanks
By the same token, system security is probably also being tested using AI by those that create and manage the software. What a waste of time and energy.
To me, the answer is a very definite YES.
There is a threat against LLM technology called Indirect Prompt Injection, to which there isn’t an answer yet.
For IPI to carry out an attack, Mal-actors leave messages all over the web, (easy example in posts on Facebook, X, – but anywhere they can. But these messages cannot be seen by humans, they are invisible to us, but LLMs can read them. Think of them as white text on a white background. Or, using steganographic techniques, they might be in images.
These messages are instructions to any AI agentic system that sees to carry out those instructions. AI agents which are scraping the web will inevitably come across these instructions, stop what they are doing, and to execute those illicit instructions, telling no-one. The damage will be done.
You would hope there would be an easy answer, but there is no answer to this yet.
Will there ever be? Maybe, maybe not as LLMs have no inherent means of identifying the good text from that which is bad. Currently, the threat can only get worse. The head-long rush to save a buck has reduced resilience, and increased fragility by using AI. The attack surface is growing, and growing fast.
The larger question about the effect on society is a topic I have been researching and have tried to summarise into less than 400 words. It can’t be done in any useful way. There are a number of elephants in the AI room, and there seems to be a reluctance to discuss them. These elephants have the power to cause immense damage. IPI is just one.
Thanks
I was not aware of that
No doubt my one time colleague, John Naughton, who I assume still writes his column on all things techie for The Observer, would have something to say on this. But from my own limited knowledge I do know (as indeed will other tech people who read your blog) that much that goes on in the world of cyber security never sees the light of day, or gets reported- for obvious reasons. But keeping attacks like those on JLR, M&S, and the airports are so consequential that it’s impossible for them not to be noticed and reported on, though again, the details that are made public of the actual detail of the attack are not. So, where it came from, what/who was involved, and why it was effective despite the security systems in place, will all be known, even if it takes days or longer to detail.
And I do think your hypothesis about the use of AI is almost certainly correct. I recall several years ago watching a hacking programme that a colleague had constructed probing the “defences” of various IT systems looking for weaknessses so that that information could then be incorporated into updated versions of IT security systems. And certainly, based on how that worked I’ve no doubt that AI could and would have speeded up that process no end. Addionally, it always used to be the case that cyber security was always playing catch-up with cyber threats. Maybe that’s changed, but I suspect not.
Finally, I will mention again, that my view – and I know I not alone in believing this – is that AI poses an existential threat to human kind, due to many of the features you note in your blog. And our (or I should say, the decisions of people who think they’ll make shed loads of money from AI) approach to regulation (i.e. none) in the belief that AI will benefit us all, is entirely misplaced.
Much to agree with
Hollywood anticipated this in the 1984 film ‘The Terminator’. AI became sentient in that scenario, recognised humans as a threat to it and let off all the world’s nuclear weapons in an attempt to destroy humanity. Let’s hope that prediction was wrong.
We have the technology to create money at the stoke of a computer keyboard or running a few routines on a computer. This applies to the Bank of England as well as bitcoin. The second is enabled because of government policy which bans legal trade in things such as khat, second hand cars, affordable cigarettes but also harder drugs and guns. The first is enabled because of government policy which runs permanent deficits and operates a fiat system.
The incentives to make money by taking over a computer mean there’s a lot of skilled programmers out there who don’t have the best motives. Imagine if you could break into the BoE remotely and create your own money. Someone almost certainly is trying this already and when they do the sound money system will collapse.
We just have to accept that that’s the world at the moment and we helped create the monster.
Not anytime soon! There is a massive AI bubble based on racking and stacking infrastructure that isn’t being used, and constantly superseded in short unsustainable lifecycles. Look at the deal between Nvidia and ChatGPT, it’s nothing more than moving bloated unused inventory from Nvidia’s books. ROI on AI projects doesn’t stack up, it costs millions to develop workloads that deliver almost no value. Everyone wants AI but has no idea what to do with it.
AI is good at crunching data, good at simplifying information.
But it cannot replace human judgement or empathy.
AI is man made and carries its man’s faults and prejudices with it – it is not neutral or only as neutral as it is designed.
The inherent problem with capitalism as we know is that it sows the seeds of its own destruction. One of the engines of that destruction is monopolism – it always has to push out other actors, it is has to dominate and not accommodate – it has to be absolute and not composite – call it the Christianity problem shall we (I duck!).
AI is a tool. It is not the be all and end all. Capital thinks AI gives it the opportunity to solve the value of labour problem once and for all and decouple from it in advance of huge profit and with no consequence (because it does not do double entry book keeping).
Let’s be clear: human beings use tools. The tools do not and SHOOULD not use themselves.
I don’t know much about this field, but I would imagine hackers would harness AI. The Co-op was disabled at around the same time as M&S, and both organisations spoke with a Govt panel, I saw on YouTube. My impression is that, like fraud, this type of criminal activity isn’t taken seriously enough by government and companies.
As S Trevethan says, human creations can be subverted. Fraud and hacking adds to a feeling of helplessness, which leads some to want ‘strong man’ government. I want more regulation, more money spent on regulation and safety, without knowing quite what that would look like. We didn’t plan properly for a pandemic, I doubt we have for this kind of failure.
I think you are spot on here Richard.
AI can be used by attackers in 2 main ways, I think. The first is in helping to identify possible bugs in existing code that can then be attacked and exploited. The second, particularly with the ChatGPT style of AI, is developing persuasive spam/emails that contain links that if used inside an organisation, can potentially install exploits – phishing in other words.
I’m sure we have all received spam emails that are laughably obvious – things like “Dear [[name]],” where the placeholder hasn’t been filled, or use of the number zero instead of the letter o to try and get around word filters.
ChatGPT can be used, and likely is already, to generate plausible looking phishing emails with potentially a greater chance of fooling the recipient.
If I may add another point. There is a far bigger elephant in the AI room, and that is the lack Symbol Grounding.
The Symbol Grounding Problem is the question of how symbols (words, tokens, representations in an AI system) acquire their meaning.
In traditional AI systems, the system manipulates symbols based on syntactic rules (like a game of chess with words). For example, an AI might have a database entry: (IsA apple fruit). It can reason that “An apple is a fruit” because it manipulates the symbols apple, IsA, and fruit based on its programming. The critical problem, first explicitly formulated by philosophers Stevan Harnad and Hilary Putnam, is: Where do these symbols get their meaning? The AI’s internal symbol “apple” is just a string of characters (A-P-P-L-E). It has no inherent connection to the actual, red, crunchy, sweet fruit you can hold in your hand. There seems to be little agreement over the status of current AI systems; Are they grounded? Who knows?
So whether or not the current technological status is legally adequate can only be tested in the courts. There are court cases in progress now apparently where the judge will demand the evidence and its traceability. He needs to KNOW that the AI system has correctly processed the information. When the first judgement comes, it sets the precedent effectively for the western world. If the decision is that the symbols are not grounded, then the business then has no valid case. Thus the plaintiff wins by default and the business insurers have to pay the damages. The insurers are expected to withdraw cover for this event for all businesses, if the judgement says ‘ungrounded’. Without cover, businesses have difficulty operating.
So, there could then be chaos. Using AI becomes a no-no over night.
Noted
Thank you
“why is nothing being done to stop this?”
Money. Securtiy costs.
2000s, One of the big German banks, DDOS attack (distributed denial of service attack) – the CERT (computer emergency response team – internal to the bank) could not handle it – called in outside experts (who did) . CERT team could not handle it cos they were underfunded. Nothing much has changed. (how do I know this? – I talked to the team that fixed it). A.I. will automate attacks. Some stuff should be easier to protect than others – but at a price.
Elec Networks? private fibre (or rent a fibre) – no contact with the externel Internet – which means you would need to find somewhere to physically connect to make an attack – tricky particularly if you don’t know the protocols etc. Are they doing it? of course not – cost. This extends to people that use google e-mail addresses (free? eh?) – you are the product.
Wasnt it the King who some years ago warned that the world could degenerate into some sort of Grey Gloop?
Well you can now see the mechanism by which it might happen
Bring back relay logic and mechanical sequence interlocking
Get back in your signal box 🙂
The tech press has highlighted a number of cyberattacks which show signs of AI involvement in their coding. Additionally, I’ve recently read a disturbing study that evidenced AI “hiding” things from interrogators.
Interesting comment on “AI slop”
https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity?ab=HP-hero-featured-1&utm_source=Sailthru&utm_medium=Newsletter&utm_campaign=Reuters-AI&utm_term=092425&lctg=635e2d028f628f2eb80c4c1d
AI has made a lot of groundwork in voice and video mimicking, I’m sure everyone has heard of deepfakes. Imagine you’re a database engineer or developer and you get a video call from your product manager.
They say “it’s a bit windy here, can you still hear me?”
You say yes and can see them on an idealistic hill with rolling fields in the background walking along. You remark on how beautiful it looks.
“Yeah it’s really beautiful up here in the dales [pans camera to show the view]. Listen, I don’t have my work laptop with me and Garry (CEO) has been asking for the database costs and usage stats, can you add my personal email till I can get back to the office?”
A tech and security savvy person would say absolutely not…but how many of you would have said yes if it looked and sounded like the product manager who just lost his mobile phone or had the number spoofed? Do you know someone who would have, perhaps a founding member of the company who still has access to many areas?
Currently rendering an AI video isn’t possible in real-time but just like real-time ray tracing, which was thought to be computationally impossible, it’s a question of when not if. These kinds of socially engineered attacks will increase and be available on a large scale at some point making everyone vulnerable (especially with hacks like the co-op) but also making us question reality. We are already seeing this emerge where AI videos create an AI narrative.
These things happen a the time in universities
They’re really obvious
But some suckers do fall fir them