Is AI going to kill society as we know it?


As the FT and many others in the media are noting today, the chaos resulting from a cyberattack on JLR - the Jaguar Land Rover car company - is ongoing. The company has now been shut down for three weeks, and there is, as yet, no reopening date in sight.

The government is becoming concerned about the economic impact.

Trade unions are demanding support for some of JLR's suppliers, who are believed to be at risk: they may lack the working capital to survive such a prolonged interruption to their cash flow. Jobs could be lost, and the risk is that JLR might not be able to reopen even once its computer problems are solved, because its supply chains may have collapsed in the meantime.

I admit, I think the world could survive without JLR. I can find no obvious need for any of the cars it produces, all of which waste the world's valuable resources in ways meant to signal the supposed status of the driver, at considerable cost to society at large, not least by helping to destroy our planet at the expense of generations to come.

Leaving that aside, however, there is a deeper issue to note here. These attacks appear to be more commonplace now and more successful in the sense that they seem to be disabling their targets much more effectively than they did in the past. Whether it be the British Library, Marks & Spencer, or JLR, the chaos being created is significant, and the obvious question to ask is, why has this change in scale happened?

Let me make a suggestion which I have not seen anywhere else, and for which I have no evidence, but which seems to be a very obvious explanation for what is happening. Is it AI that is facilitating these attacks, making them more effective, and in the process creating ways to undermine the operation of our society in the long run?

To explain this, just look at what AI does. AI does not think. It has no understanding, no imagination, and no ethical sense. It does not create any form of truth. All it does is undertake pattern recognition and probability estimation on a scale previously unimaginable. That is it.

As a result, it can automate what is routine, identifying correlations in datasets too vast for humans to process. It can therefore make plausible predictions - of the next likely word, behaviour, or event - although these may be incorrect, because all it does is interrogate data, and the required answer may not be in that data, or may have only a low probability attached to it. Crucially, though, it can also be directed to seek out precisely those low-probability outcomes.
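The point about prediction-from-patterns can be illustrated with a deliberately tiny sketch (my own toy example, nothing to do with any real AI system or with JLR): a bigram model that "predicts" the next word purely by counting which word followed which in its training text. It understands nothing; it only reports frequencies found in the data it was given.

```python
from collections import Counter, defaultdict

# Toy "next word" predictor: count which word follows which in a
# sample text, then report the most frequent followers with their
# estimated probabilities. Pure pattern recognition, no understanding.
text = ("the company was shut down and the company could not reopen "
        "because the supply chain had collapsed and the company failed")

words = text.split()
followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    counts = followers[word]
    total = sum(counts.values())
    # Each candidate next word, with its probability estimated
    # solely from the frequencies observed in the training text.
    return [(w, c / total) for w, c in counts.most_common()]

print(predict_next("the"))
```

Here "the" is followed by "company" three times and "supply" once, so the model assigns them probabilities of 0.75 and 0.25. If the true answer never appeared in the data, no amount of counting would produce it - but a user can still ask the model to surface the low-probability candidates, which is the point made above.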

So, how difficult is it to imagine that AI is being used in these cyberattacks, seeking out the low probability risk that can then be exploited?

Ask a simple question. If you wanted to stage a cyberattack - in effect, to find the weaknesses in the massive database systems that support the operations of a company like JLR - why wouldn't you use AI to undertake the scale of repetitive checks needed until a weakness was identified and a vulnerability exposed? And why wouldn't you use AI to plan, and even execute, the consequent attack? To be candid, you would, wouldn't you?

I make this point for several reasons.

First, no one seems to be saying this, and yet it seems obvious to me.

Second, this raises the question: why is nothing being done to stop this?

Third, and most importantly, if that is so, are we seeing AI being used, like a cancer cell, to turn on the very host systems that created and now depend on it, and then to kill them?

Fourth, is it then possible that, just as Marx said capitalism had the grounds for its own failure implicit within it, the whole of the tech industry suffers from the same problem? What we may be seeing is tech failing precisely because it has spawned - and is now spending hundreds of billions further developing - the very thing that might destroy both it and the way of life we have built upon it.

Fifth, given that the pace of change in tech is itself accelerating (a positive second differential, as it were), might the collapse of a functioning economy be much closer than we have ever imagined, with the economy killed by the very thing that was meant to make it grow ever faster?

To conclude, is a bifurcation possible where AI ceases to be the hope for the future and instead becomes the threat to its existence in a way no one is currently imagining?

I offer the thought.

Comments are welcome.


Richard Murphy
