I just listened to Tony Blair being interviewed on Radio 4.
I can recall the days when Tony Blair, who had apparently never turned on a PC, believed that IT was the solution for every problem in government. Now it seems that he thinks that AI is the answer to every question.
Amongst the problems that he seems to think it might solve is the junior doctors' dispute in the NHS, because he spoke about AI when asked what the solution was. The implication of his comments was that he does not see the reason for much of what junior doctors do. It would seem that he thinks the decisions they make can be made by AI, and that lower-qualified staff can then deliver the services as a consequence. This solves the pay dispute by removing the pay grade. He was not as blunt as this, but that, I think, is where he was going, by inference from what he said and the context of his other comments.
There have been many occasions when Tony Blair has got things terribly wrong. That is because there have always been very obvious limits to his understanding. This is now the case with regard to junior doctors.
Firstly, junior doctors are not junior. Many of them will have been in the job for well over a decade. They are the backbone of the entire hospital system.
Secondly, AI works on the basis of algorithms, and they need data. In contrast, the whole point about medicine is that it takes seriously incomplete information, the vast majority of it being communicated nonverbally, and interprets that based on the intuitive experience of the practitioner. The weightings provided are those that at the moment when the decision has to be taken (3am at the bedside of an unconscious elderly person with multiple co-morbidities) seem best to the doctor using a combination of all the heuristics that years of experience has provided to them, many of the inputs for which they will never have time to record. It is, in other words, an exercise in the management of risk in the face of extraordinary uncertainty, including very often not knowing what the patient actually has wrong with them.
Good luck in finding any AI system that can process that in a few seconds, including all input time, right now or at any moment in the foreseeable future. It is not going to happen.
So, Blair is wrong, again. I am not saying AI has no uses. It is contextually autocorrecting my typing as I write this, often, but by no means always, getting things right. That's useful.
But when a politician possessed of little wit and even less knowledge, let alone understanding, thinks AI can undertake tasks like those a junior doctor is asked to undertake, two things will follow.
The first would be massive medical errors.
The second would be senior doctors without the experience that comes from years of appraising the human condition. The loss to society would be immeasurable.
But, no doubt, somebody funding the Tony Blair Institute would have profited considerably in the meantime, and Tony Blair would define that as a success.
Please pardon my cynicism, but incomprehension in the face of reality on this scale is very hard to accept when hinted at as if it is a truth by the likes of Tony Blair.
Thanks for reading this post.
AI or machine learning etc. may well deliver incredibly positive change in many areas. If it does then FANTASTIC.
However, relying on any unproven technology to dig yourself out of a hole is ridiculous. Any plan must start from where we are and use the tools and resources available today… anything else is just wishful thinking.
With every new tech innovation, people with little knowledge and less understanding imagine that we have a gateway to utopia. I remember when desktop publishing first became a thing, people were predicting the demise of all professional graphic artists. Instead we had a few years of amateurs producing dreadful material, and then things went back to normal as people realised design is not as easy as they thought.
Of course DTP morphed into tools that professionals use to up their game. I imagine AI will find similar roles. Yes, it needs regulation to avoid misuse, so government needs to take it seriously, but to imagine a future where ‘everything is AI’ is naive beyond belief.
Much to agree with
The law of unforeseen consequences is very much applicable to technology developments of all types.
Incremental developments not so, but disruptive technologies are rarely, if ever, evaluated for potential adverse impacts.
The notion that AI will be uniformly and widely beneficial is a triumph of hope over experience because it is simply not being evaluated at all.
Even worse is the utopian idea that AI will solve problems without any downsides.
Technology never does, and nothing is foolproof, because fools are so ingenious.
Much to agree with
Currently designing an A.I.-based control system for some renewables. We have one stochastic variable – weather – and the aim is to develop a system that can learn/self-improve over time. It is proving difficult (but we will get there) – with one variable. B.Liar is an imbecile; he has less contact with the real world than our hamsters.
🙂
Blair’s 3rd way again… Corporate liberalism with a human face.
Blairism mostly involves tinkering.
Purely technocratic: “we’ll do what works”.
Relies on tech solutions.
It might come over as pragmatic and problem solving but it is very far from that.
It involves very superficial and lazy thinking and abrogating responsibility to technology for complex human and social issues.
Blairism requires fully accepting the neoliberal status quo, with all its failings.
Tech power is mostly held by large corporations.
There are then no genuine value judgements involved. Problems solved.
Of course the corporate agenda is entirely founded on profit, and consumerism.
It is purely transactional and limited by the lack of a value system.
The market will provide, at a price, + margins and capital growth.
It is not going to “solve” any non-market issues, like climate change, whatsoever.
It is populated by the Fujitsus of the business world.
I regard Blairism as a pretty vacuous approach. Tried and tested, but ultimately failed to bring any system change, or value change.
When I was in medical school, we were told that 100+ hours a week were needed to get the experience to pass the membership exams for the various specialities.
AI consultants is a scary thought.
If your knowledge of AI and machine learning amounts to a spell checker you’ll write nonsense like the above.
Currently the AIs we are building and trialling:
Design new drugs
Monitor and can administer ICU patients
Perform medical diagnosis
Even in their current fairly early state of development they are already better than humans. Which shouldn’t really be a surprise as they have far more data to work with, don’t make subjective decisions and don’t make unforced errors.
You make the assumption all Doctors are highly competent. Turns out a large number aren’t, and even the ones who are get tired or have other things they want to go and do.
But by all means keep going down a route which hasn’t ever worked. What was it Einstein said about doing the same thing and expecting different results?
Oh dear, what a stupid comment. I use the word stupid as the kindest one available.
Of course AI can be used to build drugs. What you are saying is that it can iteratively test a hypothesis. Of course it can.
As for diagnosis: no, it tests someone else’s hypothesis. It is not randomly set loose. So your claim is false.
And as for admin, it can sort a database. Whoopee! We have done that well in different ways for years. This is just the latest.
But you wholly miss the point that – as in the case I described – the data set is so incomplete that AI would be utterly useless.
And if you do not know that, my description is wholly appropriate.
And precisely because I can tell the differences between the situations I am a million miles ahead of you on this.
Think you are just proving how little you understand AI. My guess is you might have used retail grade AI like ChatGPT a few times and now that makes you an expert.
Instead, you sound like a fool.
The AIs we are working on design drugs at a molecular level. Not just testing some hypothesis. The ability for humans to test for molecular shape, function and interaction is limited in comparison to the AI.
Our diagnostic and surgical AIs learn by watching and doing. Just as human Doctors do. They are still in trial phase but they have unlimited data and can store it perfectly, as well as adding their own “experience” to it. Human Doctors simply cannot do that with the same capacity.
You talk about databases and data sets. Which again shows how little you understand – your own data set is wholly incomplete, but you think you are an expert. You seem to think the AI is just basic software that sorts databases.
True AI is so far ahead of that. It actually interprets data and infers from it.
To give you an example you might understand, Tesla’s self drive AI isn’t a series of rules programmed into it to tell it what to do in each situation. It learns by watching videos of people driving or by driving itself. Just as a human would do. But the AI can do that at a far greater capacity than any individual can and can recall that data perfectly.
But I suppose we should all accept the word of someone with zero experience in the field of AI who is suddenly a self declared expert in the field.
But Tesla has not produced a safe car.
Politely, you’re dangerously stupid.
I am not for a moment suggesting AI has no value.
I am saying you have no comprehension of value and risk, like Blair.
And that is gross stupidity.
And for the record, I think we will always do wisdom better than any machine. But you wouldn’t recognise it.
Next time declare your interests in full, too.
What is very telling about the Tesla ‘self-driving’ technology is that it was initially developed on cars which had a forward-facing radar and ultrasonic sensors. However, in recent years, these have been dropped so the newest models of Tesla actually have a lower specification than the oldest ones. Musk/Tesla have claimed that these additional sensors aren’t necessary and actually make the self-driving technology more complicated to implement, but everybody else in the field disagrees. Most are using LIDAR in association with other technology but Musk has specifically poo-pooed the use of LIDAR.
I’m sure it is an entire coincidence that what Musk/Tesla says will be most effective for ‘self-driving’ also happens to be the cheapest option available…
This is what many people such as Musk see AI as providing. Not something better, but something cheaper. If Tesla were to develop ‘self-driving’ technology using a combination of the best available sensors as well as advancing AI techniques, it would be worthwhile, but they don’t and most in industry will be looking at the bottom line in a similar manner.
I was just wondering what Jeff’s medical qualification and experience is? It would be helpful to know. Thanks in advance.
He is not saying, of course. So I deleted him, since I had told him he had to do so to comment again.
Thank you Richard. I have no medical experience other than that of a patient (I have a heart condition) and will always rather be treated by a human being. However I have experience of the electronics and IT world (having worked at managerial and director level for some years). The faith often shown by those in the sales sector of that industry is considerable but from experience I shared the maxim, as far as software was concerned, of “garbage in/garbage out” until all is thoroughly tested and proven. I think many sub-postmasters might share my scepticism.
Indeed…
It is depressing in the extreme that Blair brings the issue back to saving(re-routing) money over celebrating, supporting and furthering human expertise.
In 2014 the third most common cause of death amongst insured people in the US, after heart attack and cancer, was preventable medical error. This is because for-profit hospital protocols had to be adhered to by the medics. For example, if a recovering patient went into decline, here in the UK a doctor could get a potassium level reading and, if low, rectify it, saving the patient’s life. Not so in that hospital. How can patients know in whose interest the AI has been programmed?
The 42 English Integrated Care Boards (ICBs) and their partners, the local boards, now include actuarial decisions when allocating services to their ‘places’, based on big data, to compute necessary but probably diminishing services in their area. The contracts are of a risk-share nature, and the fewer treatments they provide, the more money they keep. For-profit companies are now gatekeepers for e.g. diagnostic tests, while also holding contracts to ‘deliver’ those diagnostics, so you can see where future provision may be heading.
It was reported around 2015 that Ribera Salud (a model for the Sustainability and Transformation Plans introduced by Simon Stevens) was mired in controversy. As I understand it, Spain took it back into public hands. But someone could decide that such an experienced company would be ideal to run one of the ICBs. Remember Hinchingbrooke.
I was listening to Today on my phone this morning as I was getting the kids ready for school. Upon hearing that Blair was coming on (and after hearing one of those trails about how he would be making a speech saying this, this and that later in the day), I took the decision to stop listening to the programme. I quite like my new phone and didn’t fancy its chances after being thrown at the door.
Most of the politicians who go around chanting “AI!” as their mantra don’t have the first fecking clue what they are talking about. The current tech as it stands is useful for certain purposes involving very complex sets of data, but it isn’t a panacea. I’d advocate the use of AI in the creation of expert systems, just as long as there is a trained human being ultimately making the decisions at the end of the process. (In my view), there is nothing wrong with a junior doctor being guided and assisted by ‘AI’ in association with their own training and experience when making a diagnosis, but there certainly is a lot wrong with an individual with a lot less training doing the same. For example, I’d imagine that the AI dudes (and the likes of Blair) are thinking about how much cheaper it would be to train a lot of Physician Associates who are then basically told what to do by an AI model instead of investing in actual doctors. Quite where this would leave the progression from junior doctor to expert consultant, I don’t know, but I’ve read reports that development/training opportunities for junior doctors are already badly lacking due to the ongoing squeeze on NHS budgets. How long before the law of unforeseen consequences kicks in?
If you’ve used much consumer software over the past year or two, it’s amusing just how many of them are integrating ‘AI’ to assist when searching and in the process of tasks. Not one of these new ‘AI’ facilities would be of any use at all to me in my job for the things I do at present, much of which is communicating with customers. Several of my other tasks at work are tedious and repetitive but I don’t see ‘AI’ being of much use in any of them for many years yet, either.
I suspect that a lot of AI advocates don’t have much of a grasp about what the technology actually does and the general populace don’t realise that the biggest impact AI is having on them at the moment is the creation and promulgation of disinformation and outright bullshit on social media.
Companies like Ezra appear to be using machine learning really quite well.
https://ezra.com/
“Ezra scans for possible cancer in the human body in up to 13 organs.”
“We can identify over 500 common and rare conditions, including cancers.”
“The company’s ultimate goal is for Ezra to offer a 15-minute, full-body MRI scan for $500; it aims to achieve this over the next two to three years. “We want to make booking your screening as easy as booking an Uber.”
The problem I can see is data. Oh, you had this small blip in your thyroid 9 years ago – sorry not insured with your current policy!
There have always been batteries of tests that can be done
BUPA has done this for years
This just extends this
The problems:
– Cost
– False positives (lots of them, and the more you test the more you get)
– The data on which most tests are based are derived, in the main, from young men. Very little on older people or women, so the rate of failed tests is higher still
– Vast follow up cost in the NHS
– Stress
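The false-positive point above is just Bayes' theorem at work. A minimal sketch follows; the prevalence, sensitivity and specificity figures are illustrative assumptions for a rare condition, not data about any real screening programme:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a positive test result reflects real disease."""
    true_pos = prevalence * sensitivity            # genuinely ill, correctly flagged
    false_pos = (1 - prevalence) * (1 - specificity)  # healthy, wrongly flagged
    return true_pos / (true_pos + false_pos)

# A condition with 1% prevalence and a seemingly good test
# (90% sensitivity, 95% specificity):
ppv = positive_predictive_value(0.01, 0.90, 0.95)
print(f"PPV: {ppv:.1%}")  # prints PPV: 15.4% – most positives are false
```

Because the condition is rare, the healthy majority generates far more false positives than the ill minority generates true ones, so roughly five out of six positive results are wrong, and every one of them needs costly follow-up.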
$500 for the MRI
A fortune to follow up, mostly for precisely no reason, and all on the public purse
Shall we get real here? If you want to crash the NHS this is the way to do it.
And if people really want to be fit there are four ways to do it that will almost always work:
1. Walk more
2. Eat less ultra-processed food
3. Cut out alcohol, preferably altogether
4. Take a nap in the day (the best way to reduce heart disease, by far)
Why is no one talking about that?
So, shall we talk about the real solutions, not the nonsense?
So, assuming that the scan comes up with a positive, what happens? The patient will be referred to a doctor who may be able to confirm that this is a false positive, or may need to run further tests, potentially invasive and dangerous ones, to establish whether there is a need for further medical intervention. I agree that AI can be useful for interpreting scans and x-rays, but you need a doctor to decide whether a scan or x-ray is needed and what should happen after the results.
For me the big problem if AI were to be extensively used in place of ‘junior’ doctors is that those ‘junior’ doctors are learning while they work. No junior doctors, no specialist or consultant doctors.
The idea that AI can read scans is unproven at present
There is massive judgement required in such processes.
And if we think AI can do that, who will be left who ever sees enough scans to exercise that judgement? The loss of HI (human intelligence) could be staggering.
How I applaud the implicit criticism in your following clause “But when a politician possessed of little wit and even less knowledge, let alone understanding”
Blair? Or the political class generally? Probably the former (though I leave you to decide with reference to “our Tony”); certainly the latter.
On a par with “at the push of a button” with reference to IT, which is only true after the exercise of immense amounts of skilful programming and coding to make IT nearly idiot-proof.
AI is the same – it requires immense amounts of hidden preparation and groundwork before it can perform properly.
And CERTAINLY shouldn’t be used for medical diagnosis, except as an adjunct – i.e. let the medical professional run the data through AI, as an aid to diagnosis, as it MIGHT come up with a connection the professional had overlooked, or not known of, because the research was too new.
But NOT AI guiding one of these under-trained Physician Associates – that would be the tail wagging the dog (and so producing a different result!!).
Thanks
I do feel people completely overlook the biggest drawback with the sudden proliferation of AI as a solution to all the worlds ills, and that is the vast amounts of energy and water that is required to run these gargantuan data centres. It is completely unsustainable in an era where we should be moving towards degrowth as a matter of extreme urgency.
Not only is AI a solution looking for a problem, it is a massive problem in its own right.
Agreed
There’s a hefty human cost too.
https://www.theguardian.com/technology/article/2024/jul/06/mercy-anita-african-workers-ai-artificial-intelligence-exploitation-feeding-machine
Wouldn’t reducing the practical experience that junior doctors currently gain by working with patients and replacing part of this diagnostic work with AI lead to less experienced senior doctors? You can’t beat learning by doing. It’s a bit like the loss of map reading skills as we rely on technology to get us from A to B. If the technology fails, we’re lost.
More AI means less HI – human intelligence – seems to be a good summary of that