We already have a massive problem with data overload in our society, with most people having little training in how to use the information they are given. AI creates new risks. It also requires that we understand that human decision-making remains key to the process of generating human well-being.
This is the transcript of the audio version:
I have many things to worry about, and AI is one of them.
Artificial intelligence is, when it comes down to it, all about the management of giant databases. It takes in vast quantities of data, processes and sorts it, asks certain questions of that data, and spits out all sorts of answers.
Now, we've been used to doing that for a long time. For example, accounting is all about the operation of databases. Now, once upon a time, those databases were manual. They were called the general ledger of a business. And they were literally written up with quills, followed by ink pens, followed by biros, followed by whatever.
And then they were computerised. And now those general ledgers of companies can be enormous. Because, of course, we have enormous companies all around the world. But that doesn't change the fact that accounting data is simply a database recording, in a certain way, the transactions and estimates used to manage a business, so that information can be pulled out to answer the specific questions that managers, shareholders, stakeholders, and other users of data might ask of the entity producing that information.
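To make that point concrete, here is a minimal sketch, with invented transaction data, of a ledger treated as a database that can be asked a question:

```python
from collections import defaultdict

# A hypothetical general ledger: each transaction is simply a record.
ledger = [
    {"date": "2024-01-05", "account": "Sales",    "amount": -1200},
    {"date": "2024-01-09", "account": "Rent",     "amount":   800},
    {"date": "2024-02-02", "account": "Sales",    "amount":  -950},
    {"date": "2024-02-14", "account": "Supplies", "amount":   150},
]

# One question a user of the data might ask: what is the balance on each account?
balances = defaultdict(int)
for entry in ledger:
    balances[entry["account"]] += entry["amount"]

print(dict(balances))  # {'Sales': -2150, 'Rent': 800, 'Supplies': 150}
```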
But, and I make this point very strongly, very few people seem to understand the answers that accounts provide to them. I include senior management in that in many cases, because frankly, they're not trained in how to interpret accounting data, or how to produce it, or, therefore, in how to understand the weaknesses inherent within it.
I include most shareholders in that because all they're interested in is, apparently, the profit. But profit is a totally malleable number in any set of accounts, capable of manipulation to produce whatever result management wishes, an outcome that may not be altogether true and fair, even if the auditors try to correct the view along the way.
And we do know that far too many companies have failed in recent years without the auditors giving any warning. And that's because those auditors themselves don't seem to understand the output from these databases.
And we most certainly know that politicians do not understand the information they get from their databases because, after all, the government also has a general ledger. It maintains accounting systems, and the information put out from them by the Office for National Statistics is, frankly, on most occasions, complete rubbish, and the politicians misinterpret it.
Why does this worry me? Well, it is because we have had decades and centuries and lifetimes of opportunity to understand accounting information, which is so core to our well-being. After all, if we think that economics is the art of best-allocating resources for the well-being of society as a whole, then accounting information is pretty fundamental in the way in which we make decisions about that allocation process. I would have thought that's obvious.
And yet companies, politicians, and others who are tasked with those decision-making processes very rarely seem to understand the data they are given. Nor do they understand the weaknesses in it, nor the inherent logic used to produce it, nor the flaws in that logic, many of which are all too obvious if only they took the time to work them out.
Now, if we haven't done that with accounting data, and I don't believe we have, which is why ignorance of it is so widespread, how on earth are we going to manage the information that comes out of AI, which is in many cases much more complex and much broader in its reach?
AI is going to be used to decide upon, apparently, healthcare and medication.
It is already being used to decide about credit rating.
It is most certainly being used to vet job applicants, wholly inappropriately, I suspect, in most cases, and with a bias towards those deemed 'normal' as against the rest of the world.
And it is being used for many other tasks, which I think frankly require the human touch.
Of course, I'm saying there are benefits to AI, just as I would say there are benefits to accounting data. Accounting data does produce figures which, for example, let us tax companies, and that's good news. AI can also manage information in some fields in ways that we could never do without it.
I am not pretending that AI is not going to contribute, for example, to medical research, because clearly it could, and that will be true in other areas as well. But it does require that the person who eventually sees the information and has to make a decision about it understands how that information is produced, what it really means, what the flaws within it are, and what the gaps in the understanding of the system might be. Only then can they exercise their judgment to choose between the options actually available to them, one of which is to ignore the data that the database has produced and to use their intuition instead.
That is what worries me. I know that people have blind faith in economic and accounting data, which is wholly unsuitable for use and produces wrong consequences. I'm worried that AI could do the same. If that's true, we're heading for difficulties.
We have to teach people about how to properly understand data, whether it is accounting information or anything produced by AI if we are to have a good future. It's not good enough to say, “The computer came up with this answer.” The computer didn't come up with an answer. The computer used an algorithm that somebody had created to predestine that answer in too many cases.
We must understand the limitations of what the computer does, or, as a human race, we're in trouble. So, of course, AI worries me, because I can see how little we understand data already. If we have more and more data, we could end up flooded by it, like the apprentice in The Sorcerer's Apprentice, overwhelmed by ever more brooms carrying ever more water: more and more information that we don't know how to use.
This is a crisis for our time and one we could solve, but it requires us to understand that human decision-making remains key to the process of generating human well-being.
My parents both worked for what was then The Midland Bank.
My father took some photos at work (early '50s?); one is a self-portrait: he's on the phone, looking at a huge ledger. I also heard my mother's friends reminiscing about typing out bank statements.
I went to Hillbrush at Mere recently for a coffee while my car was in the garage, and they had some early banking details for the company, formed in 1924; their account was operated via a passbook.
But we were able to fight the greatest war in human history using, for the most part, nothing more complex than an adding machine.
So yes, it has all changed, a lot. The challenge, though, as you say, is to make sensible use of the data we now have and the tools to manipulate it.
Interesting comment re winning the war with an adding machine. I wonder how long it would have taken AI to crack the Enigma code. But then AI would probably have created an even more complex code. That’s progress.
“… we were able to fight the greatest war in human history using, for the most part, nothing more complex than an adding machine.”
A slight oversimplification, I’m afraid, Mr Boxall. Marian Rejewski and the Polish cryptographers (they never receive due credit in popular, narcissistic British history) introduced the mathematical principles applied by Alan Turing at Bletchley Park to crack the Enigma machine codes, with his early version ‘computers’, known as ‘the Bombes’.
“Poland, at risk of invasion from Germany, had formed a bureau called the Biuro Szyfrów after World War I. In response to Germany’s use of Enigma, the bureau trained 20 mathematicians in cryptography and unleashed them on the Germans’ encrypted messages. These mathematicians included Marian Rejewski, who identified patterns in the working of the Enigma machine and was able to reduce the day key combinations from, potentially, 10 quadrillion to only 105,456 (17,576 scrambler settings times 6 possible scrambler arrangements). Replica Enigma machines were available to Rejewski and others at the cipher bureau, and they spent a year cataloging all 105,456 scrambler configurations. This catalog simplified decryption of the scrambler settings, and Rejewski then decoded the easier plugboard settings. By 1932, the Poles had cracked the Enigma code”.
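For anyone who wants to check the arithmetic in that quotation, a quick back-of-envelope calculation (a sketch only, assuming the standard three-rotor, six-cable Enigma configuration the quotation describes) reproduces both figures:

```python
from math import factorial

# Scrambler (rotor) settings: 3 rotors, each with 26 positions
scrambler_settings = 26 ** 3          # 17,576
rotor_orders = factorial(3)           # 6 ways to order the 3 rotors

# Plugboard: 6 cables pairing up 12 of the 26 letters
cables = 6
plugboard_settings = factorial(26) // (
    factorial(26 - 2 * cables) * factorial(cables) * 2 ** cables
)

print(scrambler_settings * rotor_orders)   # 105,456 (Rejewski's reduced space)
print(plugboard_settings)                  # 100,391,791,500
print(f"{scrambler_settings * rotor_orders * plugboard_settings:.2e}")
# ~1.06e+16, i.e. roughly the "10 quadrillion" day key combinations quoted above
```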
Enigma continued to adapt and develop, but Rejewski’s work was used by Turing at Bletchley, with his Enigma-cracking ‘Bombes’:
“……….. ………. Developing and using the Bombes, Turing and others working at Bletchley Park pioneered machine-driven cryptanalysis and the industrialization of the code-breaking process. These accomplishments had a lasting impact on modern cryptography and computer science. The Bombe machines played a key role in deciphering the Enigma codes and were ultimately crucial to the Allies’ victory over Nazi Germany in World War II.” (Britannica).
The 21st-century world we live in has more connections with WWII than the adding machine. In addition, modern Operational Research was developed in the US to meet the challenges of WWII, to say nothing of the development of nuclear science.
1932 or 1942, John?
Yes, the Polish contribution is often overlooked…. As is Tommy Flowers, a Post Office engineer who built Colossus, perhaps the first modern computer.
1932, according to Britannica. Enigma and cracking it were long-drawn-out processes (the flaw in Enigma was, perhaps, the German long-term confidence in its endless robustness); the Polish mathematicians provided the groundwork for Turing and Bletchley Park to build on. (I have always thought it a little improbable to believe Bletchley could have cracked Enigma if they had only begun tackling the problem at the outbreak of war, given the known ‘lethargy’, or rank irresponsibility, of the Conservative Government and security services in addressing the Fascist threat pre-1939.)
Poland feared invasion by Germany from the end of WWI. I think it was Willy Brandt who memorably said that the historical problem of Poland (originally a greater Poland–Lithuanian federation, 1569) was that “Poland was on wheels”; the Polish deep long-term concern was understandable, and the concern survives today. Land borders have always made Europe difficult for the British to understand; but it really shouldn’t – we are ‘long in the tooth’ and have meddled enough in Europe, and still meddle even when we reject them, through a basic failure of common sense.
Thanks, John
One reason the Polish cryptographers are overlooked in Britain is that they were, I believe (although not from extensive research), never made part of the Bletchley team. They survived the war and continued working (in Poland, then France, and eventually in Britain), but never, it seems, on the key work at Bletchley, in spite of their work being gifted to the British and the seemingly obvious benefits of pooling intellectual resources; and that seems inexplicable. The inner history of that conundrum would be worth unravelling.
There is an error here:
“if we think that economics is the art of best-allocating resources for the well-being of society (agree)……………..then accounting information is pretty fundamental ..to the decisions about that allocation process.”
Richard – you are the exception – but 99% of economists do not understand accounts and are thus functionally incapable of making meaningful decisions about the allocation process.
As for A.I., where processes are well defined – not too many variables – it will work OK (I’m thinking of my own area). Health, medication, … credit ratings – all involving humans and vast variability – it will end in tears. But as we know, “the people in charge” are for the most part ignorant/stupid/overconfident/lacking common sense (take your pick). So, like you, I am confident that it will end in disaster.
Much to agree with
It is the ‘channel shift’ ethos that bothers me: needy people funnelled towards systems by senior managers, systems that do what John Seddon has pointed out and create ‘failure demand’ – people who have used automated systems (and this includes AI) and then reverted to needing to talk to a ‘real person’ about their issues, because the automated version simply cannot deal with the complexity of people’s lives.
Sitting next to housing allocations and rent arrears officers and overhearing their conversations with tenants brings this issue sharply into relief, I can tell you.
Marketing has left us with this bullshit view of people – that they can be characterised into regimented, convenient cohorts – when the truth is that we are all different. In the future, having your differences noted and the needs arising from them met will only be for the very rich.
Much to agree with
Economists are not resource managers or environmentalists.
Nor do they understand systems management.
Economic dogma, like austerity and monetarism, has been reduced to amoral technocracy, omitting its underpinning political assumptions.
Whilst such a huge range of externalities is ignored, mostly those with adverse impacts like climate change, I would say economists are the very worst people to be in control of directing resource allocation.
If we think that economics is the art of best-allocating resources for the well-being of society as a whole …. we are either crazy or economists.
Seriously, many seem to think that the price of something is a good guide to its value to society, so economic information, stored away in all those databases of government and private organisations, is therefore hiding the key to best allocation.
I respectfully beg to differ and agree with tony. It is really sad that the price of something is seen as a reliable proxy for its value, and that policy and allocation decisions can therefore be made on that basis.
And then you can allocate based on what you can afford. So if you think you can’t find money in the budget for winter fuel, you don’t put it in. The value of the budget steers policy.
Marx said that people make a fetish of money. I think economists make a fetish of tax too: if something is “bad”, tax it; or if someone is rich, don’t tax them.
This is indeed one of the takeaways for me from the Steve Keen course in Rebel economics. And why I embarked on developing valuation of Real Capital.
I wonder if we make a category error in treating the inputs and outputs as “data” in the same way.
Just as accounting data (the sums of ledgers, profit and loss, the totals on a balance sheet) differs from the primary data (invoices, bank statements, etc.) – for example, because some judgement and selectivity are applied in processing the latter to create the former – so the outputs from AI systems (which often involve machine learning, and so are not algorithmic in the sense of being preprogrammed to create a predefined answer from specific inputs) differ in nature from the masses of data fed in as inputs.
Too often they are black boxes without any ability to interrogate their reasoning to check for gaps and faults and biases. At the end of the day that may be no worse than people, who are inherently faulty, but unlike humans, machines don’t understand morality or ethics. Humans do it instinctively because it is built into the way we think, but we need to build it into the machines.
The machines just do what they are told. And if they are not told to do the “right thing” then, like the brooms and buckets of the Sorcerer’s Apprentice, they might easily do the wrong thing without any qualms.
I’m interested in when AI tackles the question of the optimal mix and size of government spending. Will it agree with the mainstream economists on what governments should or should not do, or will it go against them?
Unrelated, but how does a progressive person offset their occasionally necessary airplane journeys? The commercial sellers of offsets are basically a scam.
AI reflects what it is fed
So, it will always support the status quo
It reverts to the mean
How is the amount of money created in the UK determined between licensed banks and government? The current answer appears to be greed on the part of private sector bankers. How does AI address the problem of greed, a moral issue?
I agree. Last year (I think) I asked ChatGPT about EU electricity markets and reform. It came out with trash – circular reasoning, etc. I then started to correct it and, like the brainless numpty that it is, it started to agree with me.
Garbage in (mainstream economic “thinking”) – garbage out.
Yes, as you say, many people have enough problems looking at data output at the moment. Areas I have worked in often require an effort to actually find out more about the data – what it means and what it does not mean. A cursory glance with no knowledge of the data is, to say the least, unwise!
From the BBC this morning
https://www.bbc.co.uk/news/articles/c1dp40kpvzdo
AI gone very wrong sadly
John,
whilst tragic, I think this perfectly illustrates Richard’s point.
The slip road is, in big white letters, marked “NO ENTRY”. The drivers, given conflicting information from the computer (SatNav) and their own eyes, chose to follow the computer’s instructions.
AI is a tool. Human beings need to be involved in the decision-making. More importantly, human beings need to understand that, given conflicting information, the computer is not always right. It needs to be interpreted.
As noted in another comment on here, AI is only as good as its training data. If the training data is biased or wrong, then the AI will learn that bias or perceive the “wrong” data to be correct.
Famously (or infamously!), facial recognition software had a problem recognising non-white faces. Also, creditworthiness software had a bias against non-white applicants because the training data was biased.
Artificial intelligence (AI) is only as good as the models used to train it.
Neoliberalism, neoclassical economics, and healthcare will all claim that AI is impartial and has no bias when the opposite is true.
Be very worried.
“The computer says….” – a phrase I have heard several times over the past decade.
I am not sure a 65-year-old set in his ways with no understanding of technology (and with an audience which no doubt has a similar mindset) is best placed to offer considered opinion on the topic. You can offer opinion, of course, but you do not have the tools to make that opinion insightful or particularly relevant.
Might you try putting that through ChatGPT so I can work out what you are saying?
Well, just having seen the energy consumption levels involved is enough to convince me that AI is mostly unaffordable at a time when reducing energy consumption is an existential issue. A 5-year-old would be able to work that out.
However, any informed person who understands how doctrinal assumptions become embedded in data input selection, and who has an appreciation of how technocracy works, will be able to tell you that all data handling carries the risk of imposing value judgements even before inputting. That imposition is what makes AI dangerous.
I used to be in IT and started that career in 1982.
I saw at first hand, from the beginning, that whatever came out of the computer was believed to be correct. In one case, it was a spreadsheet produced by the finance director.
Unfortunately, it didn’t add up correctly. The printout was nevertheless sent to the parent company as correct.
This has now spread to both television and the Internet, with people believing whatever they are told because they have seen it on their phones.
I have seen that too
The cost of that assumption is often quite high
Austerity was one consequence
In respect of “always believe the computer” syndrome, the Post Office comes to mind …
One of my main concerns about AI, though, is the potentially circular use of data. AIs are trained on the wealth of existing data, and the output is becoming a part of the available data. What happens when AI starts learning from data that has been AI generated – when the outputs also become the inputs? This is by definition an unstable feedback loop. I am not confident the practitioners have fully considered the implications.
Very good point to raise
We get reversion to the neoliberal mean
AI might be very neurotypical in its thinking
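For what it’s worth, that reversion to the mean can be shown with a toy simulation (a sketch only, using a simple Gaussian “model” that is refitted, generation after generation, to samples drawn from its own predecessor): the spread of what the model produces tends to collapse over time.

```python
import random
import statistics

# Toy illustration of the feedback loop: each "generation" of a model
# (here just a fitted Gaussian) is trained only on output sampled from
# the previous generation. Estimation noise compounds, and the spread
# of the data tends to shrink towards a single mean value.
random.seed(1)

mean, stdev = 0.0, 1.0  # generation 0: the "real" data distribution
for generation in range(1, 201):
    samples = [random.gauss(mean, stdev) for _ in range(10)]
    mean = statistics.fmean(samples)    # refit to the model's own output
    stdev = statistics.stdev(samples)
    if generation % 50 == 0:
        print(f"generation {generation:3d}: stdev = {stdev:.4f}")
```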
I share many of your concerns. But do I also detect a kind of Luddism? The current issue of Nat Geo magazine has some fascinating articles on AI, as well as some warnings, including the horrendous amount of cooling water required.
NG tells us AI (and self-learning software) is being used to decipher fragile ancient scrolls, identify brain tumours, predict earthquakes, look for “alien life” (starting with the new Tory party?), and decode animal language. It seems to be very good and quick at discerning patterns which are too subtle for humans to notice.
As agreed, the discoveries then have to be assessed by another intelligence. Are humans intelligent enough to do that, or will we, as some predict, eventually be replaced by AI?
Let’s be blunt: we do not have the energy to create that data.
AI uses a colossal amount of energy at a time when restraint, rather than conflagration, is required. Microsoft will no longer keep to their earlier net zero energy budget after the massive increase in demand for AI services in the last two years or so.