Summary
I believe that while AI has potential, it can't replace human judgment and skills in many professions, including teaching, medicine, and accounting.
AI might automate certain tasks, but it lacks the ability to interpret nonverbal cues and understand complex, real-world problems.
Professionals need experience and training to provide human solutions, and AI's limitations make it unsuitable as a replacement for deep human interaction and expertise.
The Guardian's Gaby Hinsliff said in a column published yesterday:
The idea of using technology as a kind of magic bullet enabling the state to do more with less has become increasingly central to Labour's plans for reviving British public services on what Rachel Reeves suggests will be a painfully tight budget. In a series of back-to-school interventions this week, Keir Starmer promised to “move forward with harnessing the full potential of AI”, while the science secretary, Peter Kyle, argued that automating some routine tasks, such as marking, could free up valuable time for teachers to teach.
She is right: this is a Labour obsession. The drive appears to come from the Tony Blair Institute, its eponymous leader having had a long history of misreading the capacity of tech, little of which he ever seems to understand.
The specific issue she referred to was the use of AI for teaching purposes. AI enthusiasts think that it provides the opportunity to create a tailor-made programme for each child. As Gaby Hinsliff points out, the idea is failing, so far.
I am, of course, aware of the fact that most innovations have to fail before they can succeed: that is, by and large, the way these things work. It would be unwise, as a consequence, to say that because AI has not cracked this problem yet it never will. But, even as someone who is actively incorporating AI into my own workflow, I see major problems with much of what Labour and others are doing.
The immediate reaction of the labour market to AI appears to be to downgrade the quality of the recruits now being sought, as employers think that AI will reduce future demand for those with skills. And yes, you did read that right: the assumption being made is that specialist skills will be replaced by AI in a great many areas. Graduates are being hit hard by this attitude right now.
In accountancy, for example, this is because it is assumed that much less tax expertise will be required as AI will be able to answer complex questions. Similarly, it is assumed that AI will take over the production of complex accounts, like the consolidated accounts of groups of companies.
Those making such assumptions are incredibly naive. Even if AI could undertake some parts of these processes, there would be massive problems created as a consequence, the biggest of which by far is that no one will then have the skills left to know whether what AI has done is right.
The way you become good at tax is by reading a lot about it; by writing a lot about it (usually to advise a client); and by having to correct your work when someone superior to you says you have not got it right. There is a profoundly iterative process in human learning.
Employers seem to think at present that they can do away with much of this. They do so because those deciding it is possible to eliminate the training posts have been through them and, as a result, have acquired the skills to understand their subject. They do, in other words, know what the AI is supposed to be doing. But when those fewer people who will now be recruited reach a point of similar authority, they will not know what the AI is doing. They will just have to assume it is right because they will lack the skills to know whether that is true, or not.
The logic of AI proponents is, in that case, the same as that used by the likes of Wes Streeting when they advocate the use of physician associates: only partly trained clinicians now working in the NHS, and even undertaking operations, without having anything like the depth of knowledge required for the tasks asked of them. They are trained to answer the questions they are given. The problem is that the wrong question might have been asked, and then they both flounder and cause harm.
The same is true of AI. It answers the question it is given. The problem lies in the question that was not asked - and very rarely does a client ask the right question when it comes to tax. The real professional skill comes from, firstly, working out what they really want; secondly, working out whether what they want is even wise; and thirdly, reframing the question into one that might address their needs.
The difficulty is that this is an issue all about human interaction, yet it also requires that the whole technical aspect of the issues in question (which usually involve multiple taxes, plus some accounting and very often some law) be understood so that the appropriate reframing can take place. All of that requires considerable judgement.
Do I think AI is remotely near undertaking that task as yet? No, I don't.
Am I convinced that AI can ever undertake that task? I also doubt that, just as I doubt its ability to address many medical and other professional issues.
Why is that? It is because answering such questions requires an ability to read the client - including all their nonverbal and other signals. The technical stuff is a small part of the job, but without knowing the technical element, the professional - in any field, and I include all skilled occupations in that category - has no chance of framing their question properly, or of knowing whether the answer they provide is right or not.
In other words, if the young professional is denied the chance to make all the mistakes in the book, as would happen if AI replaced them, then the chance they will ever really know enough to solve real-world problems posed by real-world people is very low indeed, not least because almost no one who seeks help from any professional person wants a technical answer to any question.
They want the lights to work.
They want the pain to go away.
They want to pay the right amount of tax without risk of error.
They want to get divorced with minimum stress.
The professional's job is not to tell them how to do these things. It is to deliver human solutions to human problems. And they can't do that if they do not understand the human in front of them and the technical problem. Use AI to do the tech part, and what is left is a warm, empty, and meaningless smile that provides comfort to no one.
I am not saying we should not use AI. I know we will. But anyone thinking it can replace large parts of human interaction is sorely mistaken: I do not believe it can, precisely because humans ask utterly illogical questions that require a human to work out what they even mean.
And that's also why I think Gaby Hinsliff is right to say that AI can only have a limited role in the classroom when she concludes:
It's true that AI, handled right, has enormous capacity for good. But as Starmer himself keeps saying, there are no easy answers in politics – not even, it turns out, if you ask ChatGPT.
One approach (which is already being used in some cases) appears to be setting up one AI system to watch and monitor a second.
Are we going to meekly accept that AI systems in for example a healthcare setting will decide which humans should be treated and which won’t: who will live and who will die?
It seems to me that systems that exploit the advantages of AI and the advantages of humans together are likely to be more powerful than those that rely purely on one or the other. But there are profound implications for the sorts of skills that the humans will need to do that job.
Absolutely nailed it. As a former Big Four man I know AI will be a useful tool in the kit, just as PCs and spreadsheets became (and still are), and typewriters before that. Professional services are about human judgement. AI is the new, new thing. Everyone likes to talk about it because they think it makes them look smart, but few understand it.
AI is already working at a high level consolidating and cross-referencing medical and other scientific research around the world, and that is a great thing. I also think it will be better at running public services such as the rail network, which was always hampered by the difficulty of running an increasingly complex national network in the 1950s and 60s, when the solid technical services we take for granted today did not exist. So, saying that nationalised services did not work in the sixties and therefore cannot work today is, I think, a completely futile point of view.
HOWEVER, having said that, I do agree that iterative learning and critical thinking are the foundation of intelligent evolution in both human and technological terms. In Isaac Asimov's Foundation Trilogy (if I recall it from reading it in the late 60s), the techno/religio centre of the expanded intergalactic human race had degenerated through the lack of human input and the overreach of technological governance, so the so-called centre of civilisation crumbled, allowing criminal gangs and outriders on the periphery to run an anarchic, chaotic existence far from any kind of civilised society.
I won’t go on about that right now, but there seemed to me to be the thread of your thinking about AI and the inherent danger of relying on technology to the point that it lames mental, philosophical and evolutionary human activity at the heart of our so called global community.
But to refer back to Asimov for a second, the whole thing was disrupted by the emergence of The Mule, a chaotic mutant that changed the whole concept of existence, a powerful force which overturned the equally chaotic status quo.
I’m thinking MMT here. Capitalist, neoliberal chaos cannot continue, because it’s burning us and the planet out. The adoption of MMT could create a whole set of different, human-based concepts of running businesses and services for the good of the majority of people, not the elites: a new form of egalitarianism, based on human wellbeing, not fiscal or territorial hoarding and piracy.
Thanks
Appreciated
It is a long time since I read Asimov
Euan Blair has more than a passing interest in AI, so no wonder his father is impressed. Multiverse, the first so-called AI edtech unicorn, was founded by Blair Jr in 2016 and has attracted investment from both sides of the Atlantic. It matches school-leavers with job opportunities; it would be interesting to see how the algorithm works.
It has never made money, as far as I know
More to the point, has it ever truly successfully matched a person to a job in a job-seekers market?
I’ve been using AI to do research for my business. It’s very good at collating information I don’t have access to, and if you ask it to show its workings you can verify somewhat. But it will make stuff up rather than say nothing or ‘I don’t know’.
Fundamentally AI can deal with simple problems involving lots of data. But the results depend very much on the quality of the data.
It may also one day be able to deal with complicated problems. Problems that can be unravelled and broken down into simple problems.
I believe what it will never be able to deal with are complex problems, arising out of complex systems.
Humans are themselves such complex systems – therefore best understood and supported by other humans.
The tragedy to me is that as always, instead of using technology to automate drudgery, and free humans to be humans, we are yet again insisting that tech takes over doing the difficult, interesting stuff (badly), while humans are left to deal with the drudgery.
See Phil Jones’ “Work without the Worker” [https://www.versobooks.com/en-gb/products/2518-work-without-the-worker] for a heartbreaking description of where this is headed.
Much to agree with.
Thank you and well said, Richard.
One should highlight the, literally, hundreds of millions given by Silicon Valley to their Blair fronts. Euan Blair and his old friend, but new investment partner, Sunak are diversifying into AI.
Sajid Javid is also the recipient of Silicon Valley money and an investor in such ventures. Javid leads JP Morgan’s efforts in healthcare. Blair is a client of and adviser to JP Morgan.
Nature published this article about a month ago.
https://www.nature.com/articles/s41586-024-07566-y
It is entitled “AI models collapse when trained on recursively generated data”
The title says it all really and it is why I am very concerned about AI.
I have read it, and as a layman, I find it very worrying. More further down on my concerns.
But first, for the non-technical: there is another link at the very bottom of that paper citing the above article. It shows, in pictures that anyone can understand, what happens when an AI uses AI-produced data in its database.
It is by Elizabeth Gibney and is another Nature article
At https://doi.org/10.1038/d41586-024-02420-7 you can see the results, but the paper is behind a paywall.
Please look at these pictures: they show very clearly what is at the heart of my concern.
So, my concern is that the data on the web has been used as the source for these databases, and now, as time passes, more and more of the data on the web, which is still being trawled, is itself produced by AI. Is that AI-generated data being accurately identified and weeded out before it is used in a training database? I have no idea.
But the implications of these two papers are obvious. As time passes, unless they are fed human-generated data, some/many AI systems will be producing rubbish.
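The collapse that the Nature paper describes can be illustrated with a toy sketch (my own simplification for the non-technical, not the paper's actual setup): fit a very simple statistical "model" to some data, then train the next generation only on samples drawn from that fit, with no fresh human data ever added. Generation by generation, the tails of the distribution are lost and the spread collapses.

```python
import random
import statistics

def train_next_generation(data, n):
    """Fit a one-parameter Gaussian 'model' to the data, then produce the
    next generation's training set purely from that model's own samples."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)
n = 50
# Generation 0: genuine "human" data, drawn from a standard normal
data = [random.gauss(0.0, 1.0) for _ in range(n)]
initial_spread = statistics.pstdev(data)

# Each generation trains only on the previous generation's output
for generation in range(2000):
    data = train_next_generation(data, n)

final_spread = statistics.pstdev(data)
print(initial_spread, final_spread)  # the spread typically shrinks towards zero
```

This is only an illustration under toy assumptions (a one-parameter model, no fresh data entering the loop); real model collapse in large language models is the same feedback effect at vastly greater scale.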
Do those who are actually deciding that an AI based system should be used have the real technical knowledge and background to take that decision? I very much doubt it. I suggest that often the drive to use AI will come from those who just want to save money by getting the human out of the loop, or using cheaper people who do not have the knowledge and skills (as Richard has pointed out).
Thanks
Tony Blair’s son Euan has an AI company – Multiverse – currently matching applicants to jobs.
I would not, of course, consider there to be any linkage to Blair’s pushing of AI to Starmer and co!
There is a narrative emerging here about AI then:
Politicians who have lied about tax, but used it as a means to compete with each other, are going to solve the problem by reducing costs further for public services by replacing people with AI (because they all think the same anyway).
Allied to this is the idea that all the money that could be created has been created, and no more is forthcoming from the government (except, that is, until the next banking crash, when the rich will need to have their income streams protected).
So, this is the IT faction’s golden opportunity. The same people who have brought us the surveillance economy and contributed to unrestricted online fascism, pornography and paedophilia – not to mention problems with national security – the same people (many of whom have come to regret what they created, including some of the investors) are going to help us out with AI?
This is exactly what happened before we had social media – the upsides were all we heard about – but we’re living with the downsides, such as increases in depression and suicides in young girls (just one of them). Would you trust the same people who gave us that? I don’t.
This all reminds me of the financial derivatives that brought down the economy in 2008. Derivatives (according to Satyajit Das) were supposed to diversify risk and add security in certain deals, but when they were left unregulated and made available over the counter, they became THE economy, so to speak, opening up areas they should never have been used in and exponentially adding to risk. And then look at what happened.
If you want to see the future, watch George Lucas’s sci-fi film THX 1138. Here is a clip if you want to look. There is another scene where Robert Duvall’s character is in the AI booth being sick while a calming robotic voice gives him platitudes – it is both funny and quite horrific, and would give the AI advocacy community nightmares if they saw it.
(If I try to upload this post with an attachment it does not take it, so feel free to look at clips online on YouTube or watch the film – it is very good and one of Lucas’s best – a true sci-fi movie, and not Star Wars at all.)
Thanx
I’ve been shown ‘working’ AI systems that look at vast amounts of data and produce some kind of prioritised listing or suggestions for improvement. I’ve come to the conclusion that in reality the vast majority of AI out there is basically (at a very simplified level) a spreadsheet with clever programming to link things, and if it’s not in the ‘spreadsheet’ then it is missed or misinterpreted. The level of nuance in human endeavours is very unlikely to be replicated by this method.
The key question I always ask is: what is the error rate? The response is generally 15-20%, with an unjustified statement that this might improve over time. Bear in mind that this is mostly relatively linear, structured data with clear (to a human) implications; it is just the speed of the AI that is the attraction.
The result is that skilled individuals need to review the entire dataset and output to identify the errors, and to have those skilled individuals they must first have been trained to that level.
Would we accept that medical diagnoses are going to be wrong for an unknown percentage of patients, or that tailored education is failing a proportion of children? I doubt it.
I see immense benefits in so-called AI, but at best it can be a tool to assist the human in the overall decision-making process, and it will always need to be challenged by knowledgeable humans.
Most of it is language processing: an organised output from a giant web search.
And I know that simplifies things – but sometimes that helps
Is the answer we always want what Google would suggest?
I am reminded of the post at https://heatherburns.tech/2024/04/29/cheers-ross/ where Heather Burns quotes the recently departed Ross Anderson:
“The idea that complex social problems are amenable to cheap technical solutions is the siren song of the software salesman and has lured many a gullible government department on to the rocks.”
We in Ireland have seen this over and over again.
AI is the new siren song of the software salesman, and we should know the tune by now.
Thanks
AI is expensive.
I am sceptical about AI anyway, but implementing AI on a shoestring is ringing alarm bells.
How many teachers, nurses, social workers and other front line public sector staff do we have to sacrifice to pay for this AI?
AI cannot carry out complex surgeries or stitch people up afterwards. AI cannot design sustainable infrastructure. AI cannot wipe bums or cook school meals.
AI is a useful tool, but we have to accept its shortcomings and its cost, and mitigate its bias.
There is no substitute for properly investing in public services.
AI in limited form in the classroom (eg real-time translation for kids with English as a second language) is fair enough, but to suggest it should replace teachers, nurses or social workers is insane. It’s a tool to help – like a calculator or word processor – that’s all. What the private sector does with it is, of course, up to them.
[…] By Richard Murphy, part-time Professor of Accounting Practice at Sheffield University Management School, director of the Corporate Accountability Network, member of Finance for the Future LLP, and director of Tax Research LLP. Originally published at Fund the Future […]