In a comment made during an interview about his new autobiography, Bill Gates is reported to have said:
Artificial intelligence will likely replace doctors, teachers and more professionals within the decade.
The report added:
While sharing his vision for the future of artificial intelligence (AI) on The Tonight Show Starring Jimmy Fallon last month, the Microsoft co-founder — who is one of the world's most famous businessmen and philanthropists — said that soon, humans won't be needed “for most things.”
It is an interesting idea that humans won't be needed for ‘most things’, because, for a start, it presumes that we only exist as economic entities. That may be what Gates views most of the rest of the human population as, but maybe we don't share that view. It takes a particularly warped perspective on life to presume we only exist as producers.
Second, AI is indisputably going to disrupt many processes. It is already disrupting my own, and I find it useful. But I question Gates' observation, because it presumes that answers are required within a paradigm. AI is programmed to find answers within such frameworks, and will always have problems working outside them. It can, for example, most likely work out a course of treatment for type 2 diabetes within the existing framework of thinking on that issue. But those treating this disease do not usually point out that it is, as a matter of fact, entirely reversible in most cases, because preventing it would deny big pharma and the medical profession the massive gravy train of income that this form of diabetes currently provides to them. Given that, don't expect AI to disrupt the status quo any time soon. It will not be programmed to do that.
Third, not every job is algorithmic in a fashion that AI can handle. Many are. But a great many are not. Maybe Gates is not aware of that.
This whole issue needs deeper consideration than these few observations provide, and maybe Gates really has given it the necessary thought. His comment does not, however, suggest that he has. Unpacking it, what he is saying is that those who now use human logic to undertake what is essentially algorithmic activity - and a great deal within finance and medicine might, for example, fall within these spheres - are at considerable risk of having their roles replaced by AI, unless, that is, they can up their games.
They could do that.
They could interpret the algorithm.
They could question it.
They could reassure those who interact with it.
They could develop a new algorithm.
They might even say that the wrong algorithm is being used.
And they might then do something useful, like suggest people fundamentally change their diets, which is the answer to type 2 diabetes in a great many cases, as it might also be with Alzheimer's disease, which many now think to be type 3 diabetes. They could, in other words, break the algorithm, which is how change always happens in life.
So, are humans going to have nothing to do? I seriously doubt it. Only the complacent or compliant, and those who do not want to engage, will be left with nothing to do. Those who actually see it as their role to think and interpret will have ample to do, as ever. Professionals will, in other words, have to profess when AI can manage the algorithms. There may be no harm in that, but many might find that a disruptive shock to the system. That, though, might be what many professions and professional people require.
The problem with using LLMs (“AI”) is that they can only be trained on massive datasets, and those data are produced by real people. Once people are dropped from the loop, the LLM can only train on its own output, which will result in increasing amounts of hallucination and other errors.
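A toy sketch (my own illustration, not anything from the comment) makes this feedback loop concrete: treat each "model" as nothing more than a normal distribution fitted to samples drawn from its predecessor, with a small fraction of the most extreme samples dropped each generation to mimic a generative model under-sampling rare events. The distribution's spread then collapses within a handful of generations.

```python
import random
import statistics

def collapse_demo(n_gens=10, n_samples=1000, trim=0.05, seed=1):
    """Toy sketch of 'model collapse': each generation fits a normal
    distribution to samples drawn from the previous generation, but
    (like a generative model that under-represents rare events) it
    drops the most extreme `trim` fraction at each tail before refitting.
    Returns the fitted standard deviation after each generation."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the real data
    sigmas = [sigma]
    for _ in range(n_gens):
        data = sorted(rng.gauss(mu, sigma) for _ in range(n_samples))
        k = int(n_samples * trim)
        kept = data[k:n_samples - k]   # the tails are lost each round
        mu = statistics.fmean(kept)
        sigma = statistics.stdev(kept)
        sigmas.append(sigma)
    return sigmas
```

With these defaults the fitted standard deviation falls from 1.0 towards near zero over ten generations; the loss of rare "tail" data at each step is the mechanism usually blamed for this kind of degradation.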
Agreed
One of my translator friends is now a part-time postie, thanks to AI.
That was always going to be a job at high risk.
It was a well-respected profession until Silicon Valley decided otherwise.
That is what technology has always done, from Spinning Jennies onwards.
As you yourself would say, “much to agree with”!
The idea that teachers might be replaced by AI is laughable. No doubt a lot of instruction might be conducted by AI, but teaching is far more than that. Teachers respond to pupils, care for them, connect with their minds, love them. AI does not love! Algorithms will work when the problem is clear and answers have been worked out before. They may well make connections within the field. But what happens when the learner wants to say “But you don’t understand!” and maybe the learner’s problem is either their lack of language (to describe their difficulty, to respond in a fashion the algorithm understands) or something in their life that seems irrelevant to the process of learning but is blocking their engagement with the task – their father’s angry with them, or their sister’s terminally ill, or their boyfriend’s dropped them, or there’s a desperate problem with housing or income?
Much to agree with
Gates is out of touch with reality, and even humanity
It’s what billions do to you. You don’t trust anyone but a machine when you’re that wealthy because you cannot be sure anyone isn’t out to shaft you – except a machine – and even it might be programmed to do so.
I’m one of the few teachers I know who refuses to use AI for anything. We get paid for lesson prep/marking (not enough time but hey, it’s a start).
But students have discovered chatGPT and still seem to think their teachers were born yesterday!
The real power is AI + human. AI’s output depends on how well the problem is defined/framed. Complex or nuanced questions often yield very different answers depending on how they’re asked.
This is where human expertise is essential: deep domain knowledge is needed to formulate the right questions, interpret AI outputs critically, and apply them meaningfully in context.
Replacing mundane, repetitive jobs with AI seems fair enough, but in many professions, humans are needed to provide the human touch. A teacher offers more than just information—they inspire, encourage, and understand. Doctors and nurses provide a lot more than diagnoses and treatments; they express empathy, compassion, care, and hope. These human qualities—connection, presence, and emotional intelligence—cannot be replicated by AI (yet).
I remember being ‘on watch’ in a ship’s engine room – there was a following sea which could roll the propellers out of the water, and I was just there in case the Chief needed a hand.
Now it wasn’t modern tech – two Newbury ‘Sirron’ O types, a 2-stroke diesel dating from 1941 – but the Chief was going round listening to the fuel pumps using a screwdriver like a stethoscope to check all was well.
Can AI do that sort of thing – deal with the ‘human’ touch that can identify that something ‘isn’t right’ before it stops being right? The toddler that isn’t ill but isn’t 100% either? The corner of a field of crops that needs a look?
It certainly can’t provide the reassurance a doctor or nurse with a good ‘bedside manner’ can give a patient – which I have seen used to great effect on several occasions.
So yes, AI has its uses, but it has severe limitations.
So far, AI works within the system. Many jobs do. The problem is, can it replace judgement? I doubt it.
1. I find MIT’s Rodney Brooks’s “Predictions Scorecard” a healthy read on robots, AI, self-driving cars, etc.
2. I had the same thought: it should become much more difficult to justify what counts as professional, and all the tangibles and intangibles that are usually attached to that status. Eventually it will require a whole lot of rationalising and/or humility, and boy, am I sceptical with regard to humility! After all, we have a solid amount of copium, hopium, make-believe and wishful thinking already today in many “professional” high-paying jobs.
The logic of Gates’ view is that AI might conclude humans are not needed. Or that most are redundant.
Time for Asimov’s three laws of Robotics.
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I am saying this tongue in cheek, but there is an issue here which requires a lot of thought.
Agreed
@Ian Stevenson
Don’t ignore Asimov’s Zeroth Law:
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
I had forgotten that one.
There is Stevenson’s law: “However a robot is programmed, it can go wrong and, in time, will go wrong.”
Whether they do better than humans is debatable.
The ultra-intelligent robot with the (fading) positronic brain, Daneel, contemplates and does some pretty horrendous things in furtherance of that law (and its “humanity” amendments), as described in “Foundation and Earth” (1986), although to his credit, he does seem upset about it.
His replacement and successor – the biological humanoid but alien Solarian telepath with a powerful transducer brain, the hermaphrodite Fallom – ends the book ambiguously, as either humanity’s Saviour or possibly its Nemesis.
Great book, great writer.
And yet, in a role where it could be useful, supposed “pro-AI” LINO are removing it from NHS England. It really does seem blatantly about dismantling the public health service.
https://www.theguardian.com/politics/2025/mar/31/ridiculous-cuts-to-ai-cancer-tech-funding-in-england-could-cost-lives-experts-warn
While concerns about AI displacing human jobs and reshaping industries are understandable, the reality is far more nuanced—and frankly, far more amusing. Artificial intelligence may excel at specific, well-defined tasks, but it still struggles with the messy, unpredictable realities of human work and interaction.
Consider customer service: an AI chatbot can instantly generate responses, but when confronted with an angry customer whose delivery went missing, it might respond with, “I detect frustration in your message. Here’s an article on mindfulness from 2012.” Meanwhile, a human employee knows the real solution is a discount code and a well-placed “I’m so sorry.”
AI’s “creativity” is equally entertaining. It can produce a photorealistic image of a cat wearing a spacesuit, but ask it to design a functional office chair, and you might end up with something that resembles abstract art more than furniture. Humans, on the other hand, understand that aesthetics must coexist with basic ergonomics—no one wants a chair that looks like a modern art installation but feels like a medieval torture device.
Then there’s problem-solving. AI can optimise a supply chain in seconds, but when faced with a real-world dilemma—like a coworker who insists on reheating fish in the office microwave—it has no recourse. Humans, however, have mastered the delicate art of passive-aggressive Post-it notes.
And let’s not forget emotional intelligence. AI can analyse sentiment in text, but it can’t navigate the subtle politics of a team meeting where two colleagues are clearly feuding but pretending everything is fine. Humans? We’ve turned awkward silence into an art form.
The truth is, AI is a powerful tool, but it lacks the adaptability, intuition, and (occasionally irrational) judgement that make humans indispensable. So while AI might handle data analysis, it won’t be leading motivational seminars anytime soon—unless we’re all suddenly inspired by a monotone voice reciting statistically optimised pep talks.
In short, the future of work isn’t about humans versus machines—it’s about humans and machines, with each playing to their strengths. And ours just happen to include sarcasm, creative procrastination, and an unmatched ability to pretend we know how the office printer works. Advantage: humanity.
We have, since real progress began , always worked with machines.
Clinton’s ‘Vision’ eh?
Is this the same bloke whose vision involved him as President of the Free World (ha ha ha ha ) having oral sex performed on him by a vulnerable young woman?
Shall we just move on?
The database used by an AI machine can cause severe problems in its output if the source data becomes polluted with data produced by AI itself. The designers are well aware of this, but I don’t see it as a given that such pollution can, over time, be avoided.
Then AI becomes useless and dangerous.
Gates better pick his AI carefully. Just tried my standard test with three:
Give me the smallest non-equal positive integers a,b,c,d such that a^2 + b^2 = c^2 + d^2
1) Microsoft Copilot (25=61 !!):
a = 3, b = 4, c = 5, d = 6.
Verification:
a^2 + b^2 = 3^2 + 4^2 = 9 + 16 = 25,
c^2 + d^2 = 5^2 + 6^2 = 25 + 36 = 61.
2) ChatGPT (4+9=17 !!):
• a = 1
• b = 4
• c = 2
• d = 3
We can check this by calculating:
1^2 + 4^2 = 1 + 16 = 17
2^2 + 3^2 = 4 + 9 = 17
Thus, 1^2 + 4^2 = 2^2 + 3^2, satisfying the equation.
3) DeepSeek (correct):
Based on the exploration, here’s a valid set of non-equal integers:
(1, 8, 4, 7)
This set satisfies 1^2 + 8^2 = 4^2 + 7^2 since 1 + 64 = 16 + 49, which both equal 65, and all four integers are distinct.
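DeepSeek's answer is easy to check mechanically. A brute-force sketch (my own verification code, not anything the AIs produced) searches upward for the smallest total expressible as a^2 + b^2 = c^2 + d^2 with all four positive integers distinct:

```python
from itertools import combinations

def smallest_two_square_pair():
    # Search upward for the smallest n with a^2 + b^2 = c^2 + d^2 = n
    # where a, b, c, d are distinct positive integers.
    n = 1
    while True:
        root = int(n ** 0.5)
        # all unordered ways to write n as a sum of two distinct squares
        reps = [(a, b) for a in range(1, root + 1)
                for b in range(a + 1, root + 1) if a * a + b * b == n]
        # need two representations whose four members are all distinct
        for (a, b), (c, d) in combinations(reps, 2):
            if len({a, b, c, d}) == 4:
                return n, (a, b), (c, d)
        n += 1
```

It returns 65 with the pairs (1, 8) and (4, 7), i.e. 1 + 64 = 16 + 49 = 65, confirming DeepSeek's result.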
That is almost amusing
Gates better pick his AI carefully. Just tried my standard test with three:
Give me the smallest non-equal positive integers a,b,c,d such that a^2 + b^2 = c^2 + d^2
1) Microsoft Copilot made an arithmetic error (25=61 !!)
2) ChatGPT made an arithmetic error (4+9=17 !!) – it hasn’t improved in two years.
3) DeepSeek was correct.
Interesting
Proves Stevenson’s Law ( see above )
I saw an interesting item on Sky News early on Monday morning, about diagnosing COPD with the help of AI. It’s a short watch – just over a minute.
https://news.sky.com/video/ai-to-help-diagnose-chronic-obstructive-pulmonary-disease-13337876
This is the kind of interesting stuff that AI could be really useful for! But it still requires human input too.
I agree with @el Deco (thank you for that article) that it was a really stupid decision to remove AI from cancer treatments, when obviously it was shortening waiting times and helping sort out the detail of the treatments involved. So improving outcomes, saving more lives, and also saving money – though not for the hospitals that had implemented it already. Grrr. But what else do we expect from LINO other than idiocy?
I actually had my first real brush with AI the other day when it installed itself on my Android phone. Grrr! I told it I didn’t want it on my phone, and asked it how to uninstall it. It told me how to. Then thanked me for our pleasant though short conversation! Which at least left me smiling!
There are things that AI can’t do.
But we are making sure that those things are got rid of. (Empathy, forgiveness, redemption, hope, agape love)
There are also people AI cannot cope with. We are making sure those people are eliminated too.
Maybe eventually the Agents Smith will have the Matrix all to themselves. For a while.
Gates’s comments are likely based on the particular and specific usage of the term “professional” in the IT industry, where it is applied to salaried employees such as programmers, system analysts, software engineers, etc. For the most part these people, highly skilled though they may be, are not members of any formal professional body, and are not subject to professional codes of conduct, and so on. IT “professionals” are thus not comparable to lawyers, accountants, doctors, or anyone who might traditionally be recognised as a professional, and they very possibly are at risk of having their jobs taken over by AI.
I disagree
Many so-called professionals – e.g. lawyers, pill-prescribing medics following algorithms, and accountants who just make the figures balance – can all be replaced by AI.
It’s the true professionals who will survive.
Absolutely. Lawyers who handle routine, albeit sometimes quite complex stuff such as conveyancing are about to be out of a job.
Or will face big pay cuts.
Interesting discussion.
A few days (weeks?) ago you published a ChatGPT response to some query, probably the likely outcome of one of Trump’s madcap policies. It impressed me because it had clearly based its reply on sources providing considered analysis rather than those which were dogmatic propaganda, and it generated a well written and informed summary of the situation. It was something that would have been marked highly as a piece of student work… except that I got to the end and there were no citations. It would have gone straight back to the student for corrections.
I think that is what most people currently think of as “AI”, a tool for generating a coherent piece of text from a set of inputs. It seems to be rather good at that, and judging by the examples you have included here it does rather better than a Google search where the best sources are often much lower in the ranking than those paid for or tweaked by the provider website to get a high ranking. But as Matthew says above, AI text is only as good as the data it has trawled and if it takes people away from the original websites that data will simply die since in most cases those websites also depend on the number of visits for viability.
Ultimately though I see the main use discussed, the generation of coherent text, as relatively trivial. It is the next stage on from word-processing which back in the 80s generated similar dystopic predictions. AI can be used to generate text from defined data inputs, reducing the work of the person who drafts the document as well as the typist who in former days made it readable.
There are of course major things that AI makes possible. Its biggest success so far is AlphaFold, the program which solves the impossibly difficult problem of calculating protein structures. It has been known for over 50 years (certainly since I was a student) that the order of amino acids in a protein must define its 3-dimensional structure, but no one knew how, or even how to set about finding out, given the complexity of a real-life protein with typically 400 amino acids. AlphaFold used the heavy-duty computing ability of AI to do that, and it quite deservedly won its leaders (Demis Hassabis and John Jumper) the Nobel Prize. (In typical fashion, the lead which Britain at that point had in AI was immediately thrown away by selling out to Google.)
Currently there have been news stories about AI taking the role of radiologists in analysing X-rays and scans. Actually, all the AI programs have been trained to do is mimic human radiologists in distinguishing normal from cancerous tissue, and image analysis is the sort of thing AI can do fairly easily. Those programs have the advantage over humans that they can’t make mistakes by being distracted, but because they don’t have meaningful clinical knowledge of the cancers, they will struggle with ambiguous cases. They might improve the work rate of radiologists by highlighting those ambiguities, but ultimately it is real radiologists who set the standard.
However, that does highlight what I think is the real issue of AI: the assignment of legal responsibility. “Computer says no” should not be a legitimate defence to liability for causing harm. If a patient is harmed because the radiologist asked AI to interpret the scan, any liability should be the same as if that radiologist had made the interpretation themselves; they should at the least have checked. Similarly – as an example of something which seems to be currently happening – if an Israeli soldier kills an aid worker (or any non-terrorist) simply because AI says they fit some terrorist criterion, then that soldier, or the command line above, should not be able to use the mystery of AI to avoid liability for prosecution for war crimes; they are responsible for choosing to use that program.
Many thanks
Really useful observations
“It presumes that we only exist as economic entities”. We like to think we are more, but in reality most of us have no choice but to be economic entities. Most of us sell our souls to employers because without income we have nothing. Even those who run their own businesses are economic entities and slaves to the economy. We like to imagine we are more, that we are individuals, that we are unique and irreplaceable, that our bosses, our clients and our customers value us deeply, but the cold hard truth is that if AI can do your job as well as you for a fraction of the cost, no one will pay you any more, no matter what other attributes you might possess. Without income most of us are finished. Gates might come across as cold and detached, but he is a realist. The writing is on the wall. The future looks bleak for most.
I have to disagree.
I have worked more than average during my life. But never, once, have I solely been an economic entity.
I have always been much else as well.
But you are also only looking at one side of the equation. We spend as well. Without our spending the rich will not be so. They need us to consume. Replacing us destroys their wealth. They cannot afford for that to happen.
The spending habits of DeepSeek?
What does ChatGPT want for Christmas?
Chips, cooling kit & electricity would seem to wrap it up. Not much “growth” there for Rachel.
If AI closes down the human side of the economy it will run out of paying customers very quickly. As the government will run out of taxpayers.
@Sam463 – I think the bleakness of your post is coming not from Bill Gates but from yourself. It sounds as if you believe him. The future is challenging, but we are designed for that; we thrive on it, whereas software tends to seize up and produce the Blue Screen of Death.
You ARE special, you DO have value, your post woke me up and made me think. Together we can make a far better future for ourselves and others than our current overlords.
I was at a workshop the other week at a local community group, and we looked at their guiding principles, which cheered me up a lot.
Here they are…
Be good ancestors
Celebrate the wisdom of place
Grow by making together
Harness the power of small
Imagine and demonstrate
Work in the open
Even better, they put them into practice, and I’ve met the people whose lives they have changed.
There is hope. WE are the hope.
Gates by name Gatekeeper by nature.