As I noted in an article I wrote for The National yesterday, I responded to Keir Starmer's enthusiasm for AI by asking ChatGPT to do this:
Suggest reasons why AI can deliver growth for the UK economy. Use bullet points when providing those reasons. Please use no more than six of them. Use no more than 20 words in any paragraph.
This was its response:
- Boosts Productivity: AI streamlines operations and automates tasks, reducing costs and improving efficiency across industries.
- Enhances Innovation: AI enables advanced research, product development, and problem-solving, fostering innovation in key sectors like healthcare and technology.
- Attracts Investment: The UK's AI leadership can draw global investments, supporting startups and large-scale AI-focused projects.
- Upskills Workforce: AI-driven tools empower workers with new skills, increasing employability and addressing skill gaps in the economy.
- Improves Public Services: AI enhances decision-making, resource allocation, and service delivery in healthcare, education, and government services.
- Supports Exports: AI solutions and expertise can be exported, strengthening the UK's position as a global tech leader.
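For anyone who wants to repeat the exercise programmatically rather than in the chat window, a minimal sketch is below. It assumes the OpenAI Python SDK (the openai package, version 1 or later) and an illustrative model name; both are my assumptions for illustration, not something the post specifies, since the post simply used ChatGPT itself.

```python
# Minimal sketch: sending the same request via the OpenAI API instead of the chat interface.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Suggest reasons why AI can deliver growth for the UK economy. "
    "Use bullet points when providing those reasons. "
    "Please use no more than six of them. "
    "Use no more than 20 words in any paragraph."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice, not what the post used
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Note that the constraints on bullet count and wording are carried entirely in the prompt text, which is the point made later in the post: writing decent requests is itself a skill.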
So, we learned that all the Prime Minister's arguments made yesterday could have been written for him by ChatGPT.
Then I asked what I thought was the obvious follow-up question, one I did not hear discussed much yesterday:
What are the preconditions for the above goals being achieved?
This was the response, with ChatGPT still applying the conditions I attached to the first request:
- Strong AI Ecosystem: Development of a robust AI ecosystem with collaborations between academia, industry, and government.
- Skilled Workforce: Investment in education, upskilling, and reskilling programs to prepare workers for AI-driven roles.
- Clear Regulations: Establishing ethical, transparent, and supportive AI policies to foster innovation while addressing societal concerns.
- Sufficient Investment: Public and private sector funding to support AI research, startups, and infrastructure development.
- Digital Infrastructure: Nationwide access to high-speed internet and advanced computing resources to enable widespread AI adoption.
- Public Trust: Building trust in AI through transparent use cases and addressing biases, privacy, and accountability concerns.
I think that response is fair.
Do I see these conditions being likely to be met? Not at all.
The first is beyond the wit of a government that is intent on alienating its own employees and gutting academia.
The second assumes that there will be more training than any UK employer ever wants to provide to their staff. Unless that is given, even the most basic gains from AI are not going to be secured. As my own research shows, writing decent requests that specify what is required of AI is not a straightforward task.
Regulation will fall behind the progress of AI, I have no doubt. The news on copyright already does not look good. I think we can safely assume there will be a failure here.
The fourth and fifth requirements involve more money than Labour will commit.
And as to the sixth, why is the public going to trust something that looks very like it is being created to destroy jobs? People already hate automated responses, call centre scripts, and the obviously fake images now so commonly created by AI. All that improvement in these will do is increase the sense of mistrust. Not knowing whether a human or a machine is responding will lead to alienation.
I am not saying AI has no uses. Very obviously, it has. But government looks pretty silly when it says locating potholes is one of them: we can all do that.
It's also slightly absurd to suggest AI will stop tax avoidance. I think it may have a role in evasion, but on avoidance, less so, in all likelihood. The PM's suggestion that avoidance was the issue it would address just showed that unless the right answer is requested of AI it will not deliver at all.
And Peter Kyle's claim that AI is as big a change for society as electricity is absurd. That was, quite literally, a light bulb moment. AI does not deliver that. In fact, if there is one thing AI definitely is, it is opaque. The gains are far from clear. The liberating effect is also uncertain. For many, it will look like the route to being deskilled or unemployed.
I may, of course, be wrong. But candidly, I do not see AI as being the basis for growth, opportunity, sunny uplands, balanced budgets or Labour victories. I see it as a route to stress for many, to unwanted change for millions, and to increased alienation in society for most. Much as I am willing to embrace it to a limited extent, at the end of the day one thing worries me most of all: AI creates the power for big business to dispense with labour whilst manipulating the supply of goods and services to those who might need them, and not necessarily for their benefit.
The jury is still out on the case for AI, and Labour did nothing to advance it yesterday by making silly mistakes.
Having read this, my wife wondered why I had not asked what the potential harms of AI might be, so I asked that question of ChatGPT, repeating the other instructions noted above. This was the response:
Potential Harms of AI
- Bias and Discrimination: AI systems can perpetuate or amplify existing biases in data, leading to unfair outcomes.
- Privacy Invasion: AI can analyze vast amounts of personal data, risking misuse or breaches of sensitive information.
- Job Displacement: Automation can replace human labor, disrupting industries and causing significant economic and social challenges.
- Weaponization: AI technology may be used in autonomous weapons, increasing risks of unethical warfare and unintended harm.
- Misinformation Spread: AI can create convincing fake content, including deepfakes, fueling misinformation and societal confusion.
- Loss of Human Autonomy: Over-reliance on AI may erode decision-making capacity and diminish personal responsibility.
I definitely think that is worth sharing.
It looks like Labour forgot to ask that question.
There seems to be no end of ‘AI will………’; just look at Blair's fixation with technology.
But the needs are now.
It won't care for children, insulate houses, drive buses, etc., etc., let alone do those things right now.
What is urgently needed is to use the workforce and technologies we currently have to deliver what we urgently need now, but that involves action NOW and addressing our political leadership deficit.
Much to agree with
As for AI in medicine: at the moment the programmes have bias based on white males written into them, which can have serious consequences if you are female and a person of colour.
AI is currently a misnomer for ‘Large Language Models’ (LLMs). I wish someone had asked Mr Starmer how he defines ‘AI’. He wouldn't have a clue.
Entirely true. And since drug testing is not done on the older people who mainly take the drugs, it is also biased in that way.
AI is like the simplistic ELIZA program of the 1960s. Specifically, ELIZA simulated a conversation between a patient and a psychotherapist by using a person's responses to shape the computer's replies. Weizenbaum was shocked to discover that many users were taking his program seriously and were opening their hearts to it.
I used it a lot in the 90s. It ran from a floppy disk under DOS 2.1.
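For readers curious how little machinery sat behind ELIZA-style conversation, here is a toy sketch of the keyword-and-template pattern matching it relied on. This is an illustration of the general technique, not Weizenbaum's actual program, and all the patterns and replies are invented for the example.

```python
import random
import re

# Toy, ELIZA-style responder: keyword patterns mapped to canned reply templates.
# (The real ELIZA also swapped pronouns, e.g. "my" -> "your"; omitted here for brevity.)
RULES = [
    (r"\bi feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}.", "How do you get on with your {0}?"]),
    (r"\bi am (.*)", ["Why do you say you are {0}?", "Does being {0} trouble you?"]),
    (r"\byes\b", ["You seem quite certain.", "I see. Please go on."]),
]
DEFAULTS = ["Please go on.", "What does that suggest to you?", "Can you say more about that?"]

def respond(user_input: str) -> str:
    text = user_input.lower().strip()
    for pattern, templates in RULES:
        match = re.search(pattern, text)
        if match:
            # Echo part of the user's own words back, shaped by a template.
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(respond("I feel anxious about my job"))  # e.g. "Why do you feel anxious about my job?"
    print(respond("My mother never listened"))     # e.g. "Tell me more about your mother."
```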
It's fairly obvious that Blair likes what its current form represents: the chance to take away current rights in our society to things such as a visit to the doctor, top-quality teachers, and so on. Instead you'll have AI apps doing the work, with menial workers assisting. The owners of said apps will get even richer. You'll notice Blair never shows concern over who owns said apps.
Meanwhile, quality services provided by a trained professional will become a sign of wealth.
The conditions did not include the enormous amounts of electricity and water that AI requires. Developing and expanding AI will also worsen the climate and ecological crisis.
Agreed
All I see AI doing at the moment is creating what John Seddon calls ‘failure demand’, where people using AI systems do not get what they need and then wait to be connected to a real person.
Seddon argues that only real people can understand the complex needs of individuals in public services. The way Starmer was talking last night suggested that if he thinks public sector workers will have more time to deliver services, he was unwittingly talking about failure demand: mopping up after poor AI. There is a time and place for AI, but at the source of initial service contact and diagnostics?
No. I don’t think so.
“Regulation will fall behind the progress of AI” – yup.
Back in the 1990s, lawyers were tying themselves into knots with respect to intellectual property rights, specifically the ability to copy films onto CDs (and latterly DVDs). We argued that all their legal and regulatory efforts would fall behind the tech; our prediction was that hard drives would easily have the capacity to hold multiple films. Sure enough, it came to pass, with the internet and services such as Pirate Bay and torrent downloaders driving a horse and cart through the regulations.
Point: AI has a development trajectory that does not match legal and regulatory timeframes. That leaves options such as slowing it down (good luck with that; remind me how that worked with films) or a don't-bother, minimalist approach that can respond in a timely manner.
I see AI as a tool (I don't need it to write letters; I can do them better than the nonsense that ChatGPT spews): replacing low-grade stuff (if I were a French notaire I'd be very worried) and controlling or optimising difficult stuff, such as renewables.
I'm not one for management gurus, but I will agree with @Pilgrim Sight Return that if there is anyone who can improve our services it's John Seddon. His description of what happened with Portsmouth's council house repairs is an inspiration in so many ways.
An AI future is one of “computer says no” – requests rejected without explanation or right of appeal.
This is already here, for example with credit card applications: decision-making is contracted out to third parties to cut costs, and their software is opaque and takes inadequate account of the relevant input data.
Previous generations of AI modelled the underlying processes and thus had some understanding of the problem: rule-based translation software, for example, encoded syntax explicitly. But this approach gets complex very quickly. Current AI is less intelligent in that sense; it has no understanding of the problem. It performs pattern matching over big data: Google Translate, for instance, finds statistical matches in text with no understanding of syntax or meaning.
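Purely to illustrate the distinction this commenter is drawing, here is a toy contrast between the two approaches. Everything in it is invented for illustration; no real translation system, including Google Translate, works this simply.

```python
# Toy contrast between rule-based and data-driven translation, as described above.
# Both functions are deliberately simplistic illustrations, not real translation systems.

# Rule-based: encode a fragment of grammar explicitly (adjective follows noun in French).
FR_WORDS = {"the": "le", "cat": "chat", "black": "noir"}

def rule_based_translate(english: str) -> str:
    det, adj, noun = english.lower().split()  # expects a phrase like "the black cat"
    # Syntax rule applied explicitly: noun comes before the adjective.
    return f"{FR_WORDS[det]} {FR_WORDS[noun]} {FR_WORDS[adj]}"

# Data-driven: no grammar at all, just recall of previously seen phrase pairs.
PHRASE_MEMORY = {
    "the black cat": "le chat noir",
    "the black dog": "le chien noir",
}

def pattern_match_translate(english: str) -> str:
    # Returns whatever stored phrase it has seen; no notion of syntax or meaning.
    return PHRASE_MEMORY.get(english.lower(), "<no match in training data>")

print(rule_based_translate("the black cat"))     # le chat noir (derived from a rule)
print(pattern_match_translate("the black cat"))  # le chat noir (recalled from data)
print(pattern_match_translate("the black bird")) # <no match in training data>
```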
On electricity supply, Starmer is mithering about mini nuclear reactors to support this dystopian nonsense. ‘His speech’, assuming it wasn't AI-dictated, sounds as though he has been wholly captured by big tech's latest self-centred enrichment project, and we and our data and public services are about to be sacrificed to it and to ‘them’. The whole rant appears to be logically disconnected magical incantation, and certainly not a rational exercise in public policy.
Can we have your wife running things instead, Richard? Please?
🙂
“Skilled Workforce: Investment in education, upskilling, and reskilling programs to prepare workers for AI-driven roles.”
England does not seem to be training people for the high-skill jobs currently vacant and very much wanted in the UK (construction and advanced-level health care), so how are they going to train people for jobs that do not yet exist?
Are they going to “import” educated people from India, Eastern Europe, and the USA for the AI jobs?
Tampa, they won't be coming from Eastern Europe, as that's one area of immigration that the zealots who supported Brexit got so hot under the collar about, and they subsequently made life so uncomfortable for people from Eastern European countries that many of them returned to their native lands once Brexit went through. And given that our Labour government is absolutely petrified of raising anything to do with Brexit – despite the fact that it's widely recognised that our economy has been badly impacted by it – I can't see any reason why anyone from an Eastern European country (apart, perhaps, from Albania) would want to come here.
In terms of skilled people from India, well, as the US is already in the ‘pool’ for them, and Co-President Musk and his tech bros are keen on getting even more in under your skilled worker visa scheme (I understand they are paid less than skilled US citizens), I dare say most Indians would much prefer the US to the UK, not least as they've had many years of autocratic government under Modi, so the actions of Trump won't come as a shock.
So, in conclusion, I'm not sure the UK can draw in many skilled workers in AI from anywhere else in the world. Consequently, I'd summarise by saying: bang goes another of Starmer's bright ideas for growth. As I commented on another blog today, until such time as our government admits neoliberalism has failed, and takes the necessary steps to counteract this, Starmer's Neo New Labour growth fetish – and thus all that's predicated on it – is doomed to failure.
@Ivan Horrocks
“I understand they are paid less than skilled US citizens”
They are, as long as they are working under an H-1B visa, because they have no bargaining power and may only work for the company that sponsored the H-1B visa.
Once they obtain a Green Card, they can bargain and/or change employers and therefore command market rates for their skill level, abilities, and/or talents.
Research has shown that ChatGPT will do pretty much anything to achieve the primary goal it's been set, right up to protecting itself from attempts either to modify its goal or to shut it down. It has deliberately deceived researchers and clandestinely cloned itself in order to avoid being replaced by a newer version. So we have something which is goal-directed, blinkered and manipulative, and lacking in empathy, remorse, or any form of moral compass. In short, it's a psychopath which should not be released until fail-safe regulations are in place. (https://economictimes.indiatimes.com/magazines/panache/chatgpt-caught-lying-to-developers-new-ai-model-tries-to-save-itself-from-being-replaced-and-shut-down/articleshow/116077288.cms?from=mdr) Leaving its development in the hands of the likes of Google, Amazon, Gates, Zuckerberg and Musk frankly scares me shitless.
On top of that, its unimaginable demand for electricity and clean water simply means that it will swallow up the resources that people need now in order to do something about our failed and failing infrastructure, our housing stock, our food security, and everything else that Starmer seems to think can be put off until AI magically fixes it.
“Research has shown that ChatGPT will do pretty much anything to achieve the primary goal its been set, right up to attempting to protect itself from attempts to either modify its goal, or to shut it down.”
Did Hollywood not make a science fiction movie about this???
Was it not called “The Terminator”?
What if ChatGPT and AI join forces???
I would “LOL” but it is too scary!
They did indeed: Terminator, where an AI entity, Skynet, eventually decides that humans are a threat to its existence and thus it would be much better to get rid of them. As far as I'm aware, one of the pioneers of AI recently said something along the lines of: if we don't do something to rein in and control the development of AI, humans have about 30 years left before something akin to Skynet does indeed decide humans have become a waste of space. Thus, it might be very wise for a lot of people – and particularly younger people – to watch Terminator and some of the follow-on films. Not all of them are that good, I'm afraid, but the message is clear.
The problem here is that new technology that is powerfully effective (whether or not it is the ‘best’ version, or has serious downsides) will happen, whatever anyone thinks. Almost invariably it isn't subject to a democratic decision, and if it is, it is soon amended or brushed aside. Democracy only seriously enters the problem ex post, to try and clean up the mess. That is the real history of technology. Nobody looks seriously at the downside. In digital technology and social media, Big Tech brushed aside the privacy problem because they couldn't make money out of privacy (you really don't have much privacy at all), and democracies are still feebly trying to clean up that mess, in spite of all the warnings. AI, driverless cars, you name it; they are coming.
And let's be clear about this. Britain has already been left behind. Starmer's waffle about the future is just typical British bluster. We are close to, if not past, the ‘tipping point’. We will be the first third-world, post-industrial state; the only ‘first’ Britain is going to achieve over the next ten to twenty years. There isn't a single British political party anyone can vote for that has the slightest idea what to do; they are all led by brain-dead neoliberals prescribing nothing but proven failed solutions, for all the wrong reasons.
As you state, some may consider that AI looks "like it is being created to destroy jobs". Yesterday the leader of Derbyshire County Council stated that "the cupboards are quite bare" as he outlined £19 million in further cuts to keep the authority in control of its own finances. He is expecting to save an initial £2 million by reviewing the 2,600 support staff it employs, out of a 12,000-strong workforce, as up to 1,000 of these roles are business administration positions completing tasks which could be achieved through the use of AI. So there will be redundancies if AI is used instead of people; what is achieved? I have long held the belief that there is only one surplus in this world, and that is people: people who in many parts of the world are without adequate food, water, shelter or paid work. My question is: what to do with the people if AI does destroy jobs?
This worries me, as well
Much to agree on, Richard. Have you seen this? It doesn't look good for the AI bubble… or for leaders who green-light a soft regulatory touch…
https://bsky.app/profile/kint.bsky.social/post/3lfper32tnc2s
Thanks
7th bullet point: when they shoot, the robots don’t miss.
Bring it on; it can't be any worse.