Several discussions of the artificial intelligence (AI) app ChatGPT have arisen in the comments on this blog, so I amused myself by trying it yesterday.
After trying various economics-related ideas I thought I would do a test on a subject I think no one could dispute that I am an expert on, which is me. So I asked it about Richard Murphy, chartered accountant, and got this:
Richard Murphy is a chartered accountant, economist, and political activist based in the United Kingdom. He is best known for his work in the field of tax justice, and is the founder of the Tax Justice Network, a global network of activists and experts working to promote transparency and accountability in tax systems worldwide.
Murphy has written extensively on a wide range of issues related to taxation and public policy, including corporate tax avoidance, the role of tax havens in the global economy, and the need for tax reform to address social and environmental challenges. He has authored several books, including "The Joy of Tax" and "Dirty Secrets: How Tax Havens Destroy the Economy".
In addition to his work as an accountant and economist, Murphy is also an active campaigner for social and environmental causes. He has been involved in a number of grassroots movements and organizations, including the Green Party, and has been a vocal advocate for progressive policies on issues such as climate change, inequality, and public services.
Murphy's work has received both praise and criticism, with some viewing him as a leading voice in the fight for tax justice, and others criticizing his views as overly radical or unrealistic. However, his work has helped to raise public awareness of the issues surrounding tax justice, and has played a significant role in shaping policy debates on taxation and public finance.
One of the odd things is that once it has had a go at a topic you can ask ChatGPT to have a second go. Doing so I then got this variation:
Richard Murphy is a chartered accountant, author, and economist who is known for his work on tax justice and economic reform. He is a professor of practice in international political economy at City, University of London, and has worked as a consultant for a range of international organizations, including the United Nations and the European Union.
Murphy is the founder of the Tax Justice Network, an international organization that campaigns for greater transparency and fairness in taxation. He has written extensively on issues such as tax havens, financial regulation, and corporate social responsibility, and has been a vocal critic of the current economic system and its impact on inequality and environmental sustainability.
In addition to his advocacy work, Murphy has also been involved in the development of a number of economic policy proposals, including the Green New Deal and the People's Quantitative Easing. He has also served as an advisor to political parties and governments, including the Labour Party in the United Kingdom.
Murphy is a prolific writer and commentator, and has authored numerous books and articles on taxation, economics, and social justice. He is widely regarded as a leading expert on tax policy and has been recognized for his contributions to the field, including being named as one of the 50 most influential people in tax globally by International Tax Review in 2017.
There are obvious problems. First, I am not a member of the Green Party. I have not been at City for some time. I have not been a consultant to the UN, and I have only advised political groups in the EU, as well as working on the EU VAT Gap. Come to that, I never advised the Labour Party: I declined the role.
I shared these results with my wife. In response she suggested, "You're so vain, you probably think that chat is about you", to which I had two responses. One was, "Only partly, because it is really not reliable", and the other was, "Well, in fairness, it thinks it is".
At which point I offer two conclusions. First, this is not reliable and provides actual misinformation. Second, don't be vain with AI. Carly Simon got it right:
My amusing anecdote: I once asked a model how many volumes of a book had been published. It found one outdated source that said 8, another incorrect source that said 3, and the model concluded there were 11, when in fact the answer I was looking for was 10. A regular web search for the publisher's website reveals the correct information instantly.
These modern chat models are glorified text-prediction engines trained on data scraped from the web. They are incredibly good at writing grammatically correct English that reads well. They don't have any knowledge inference whatsoever. If the model spits out that you have ties to the Greens or Labour, that's probably because you've written about them a lot in the past. It has no concept of the semantics of what it means when two words often appear together, just that they're often co-located in text alongside certain verbs or nouns. In that sense it's not surprising that they're largely useless beyond being fun little tools.
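The "co-location, not comprehension" point can be made concrete with a toy sketch. Everything below is invented for illustration (a real model predicts sub-word tokens with a neural network, not bigram counts), but the failure mode is the same: a model that only counts which words follow which will happily "associate" a name with a party it merely co-occurs with, with no notion of what membership means.

```python
from collections import Counter, defaultdict

# Tiny invented corpus: the name co-occurs with party names it merely wrote about.
corpus = (
    "murphy wrote about tax justice . "
    "murphy wrote about the green party . "
    "murphy wrote about tax havens ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("wrote"))  # prints 'about' - pure co-location, no understanding
```

The prediction is fluent and statistically sensible, which is exactly why the output above reads so plausibly while still getting the facts wrong.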
I think at the moment we need to be careful about two things (amongst others I’m sure, but these are at the front of my mind at the moment):
1. We must not place models of this nature into operation in safety-critical situations. Is giving accurate information a safety-critical scenario? In the modern age I'd say yes.
2. We must not anthropomorphize them. The press is using the term "hallucinations" to describe the inaccuracies you mention. If I wrote an app that gave correct responses only half the time, it would rightly be called out as incomplete and error-ridden. We should call out failing models for what they are: not daydreaming minds, but buggy, incomplete design.
I have found one use
They are incredibly good at reproducing basic, hegemonic economics for me to shout at
ChatGPT uses data from early 2021 and earlier, which could explain some mistakes, but it likely can't filter out which data sources are accurate and which are not.
I have just over 50 years' experience in IT, and have watched a few 'revolutions' unfold. I have certainly tried ChatGPT, which I found unexciting and, as others have commented, not up to date. I was impressed with offspring #4's use of it as his Python (a programming language) help system: it does write the same useful code snippets you would normally get from the person in the next cubicle, and when it gets it wrong, it will learn.
Hate to get all Marxist with all of you modern economists, but I see the only difficulty with AI is the ownership of the means of production. It is a brilliant invention if your ambition is to control the unfettered access to all the information on the internet, freely available to everyone equally. I know that was just a dream some of us had.
Many years ago I watched an episode of the BBC's "Click" programme, where they were gushingly enthusiastic about how some company was using an early AI system to do high-frequency stock trading. My immediate reaction was horror, arising from the realisation that, far from being of benefit to society at large, the benefits of AIs would in large part accrue only to those wealthy enough to afford the resources necessary to develop, train and deploy them. Nothing has since caused me to alter this opinion.
I tried it producing LaTeX for mathematics. It did a good job.
Wikipedia is probably better for basic info and not too distorted.
I agree, based on this
Although the entry on me is rubbish
Or it was the last time I looked
In its Q & A mode I got it to apologise several times and to thank me for correcting it. It would be interesting if Richard corrected the points on the Greens and Labour etc. and then re-queried to see if it really is learning from its users.
I haven’t even thought about doing that
I am really not that bothered
You could get it to tell you basically anything you want with enough prompting, but the way they're currently built, they can't "learn" anything beyond the current conversation. And given some past efforts in this area (see Microsoft's Tay chatbot), that's probably a good thing.
There are ways you could use a chatbot like this where you could retain a history with a given user over a much longer period, and have it “remember” the facts that the user expects to hear, so it doesn’t give the same incorrect responses, but that doesn’t really solve the underlying problem of it making the mistakes in the first place.
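The "remember the facts" approach described here can be sketched in a few lines. Everything below is an assumption for illustration (the store, the prompt format, the user IDs are all invented, not any real chatbot's API), and as noted it only papers over the mistakes by prepending known corrections to each new question rather than fixing the model itself.

```python
# Hypothetical per-user store of corrections the user has supplied.
user_corrections = {}  # user_id -> list of corrected facts

def remember(user_id, fact):
    """Store a correction the user has made, keyed by user."""
    user_corrections.setdefault(user_id, []).append(fact)

def build_prompt(user_id, question):
    """Prepend stored corrections so the model is less likely to repeat them."""
    facts = user_corrections.get(user_id, [])
    preamble = "".join(f"Known correction: {fact}\n" for fact in facts)
    return preamble + f"Question: {question}"

remember("richard", "Richard Murphy is not a member of the Green Party.")
print(build_prompt("richard", "Tell me about Richard Murphy."))
```

A different user with no stored corrections would get the bare question, and the model would be free to repeat the same errors, which is exactly the underlying problem the comment points out.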
Unfortunately, unless something has changed in the past month, ChatGPT isn’t yet learning from the user interactions (outside of existing, stored conversations)- which is probably a good thing given the propensity of certain of our fellow humans to attempt to subvert these types of tools (and it is just a tool) into displaying the worst excesses of humanity (Fascism/Racism etc.) whether intentionally, or just for the hell of it.
If you have managed to gain access to Bing Chat, that performs slightly better insofar as it is able to search the web, but it is still confidently incorrect on numerous occasions.
It could be useful in a research-assistant mode, where you need to collect information quickly on a topic; that would be useful even if you know it is not going to be 100% accurate.
It also seems pretty good at helping to generate software. I had a quick look at how it can generate code and comment on the strengths and weaknesses of different languages such as SAS, SQL, Python, Anylogic etc.
If I was setting exams right now I could have a lot of fun asking students to say why a piece from this source that seemed superficially right was actually quite wrong.
In the past some very interesting AI models were proposed, but what passes for AI today is usually an enormous database together with a fancy way of accessing it. The number of "answers" it has to a given enquiry must be vast, and many will be mutually inconsistent. There must be some way of prioritizing the answers, and there is no guarantee it will find a particular one unless you tell it where to look. My guess is that it alters these priorities depending on some assessment of what people find most acceptable.
I asked it the origin of a sentence from D. C. Lau's 1963 translation of the Dao De Jing. I chose this translation because it is not up to date. It does not take into account the latest scholarship and the recent excavation of older manuscripts, and it would not be used in a college course. On the other hand, it's the version used in the Penguin Classics series, so it is not completely obscure. I kept giving hints as to where to find the sentence, but it could not find it even after I told it that it was from a translation of the work, and the chapter in which it could be found. It was only after I asked "Is it from the D. C. Lau translation?" that it answered "Yes, you are correct." and printed out the entire chapter.
So obviously the information was in the database, but it took an awful lot of prodding to get it out! Still, it did spew out an awful lot about Daoism, not all of it entirely correct, along the way.
Interesting
No doubt it is better at more commonly asked questions
I use some other digital tools based on ChatGPT. As I experience brain fog and the inability to construct half decent sentences I have found that for specific uses it’s been incredibly helpful and actually time saving. I also never use the text it creates ‘as is’. Being able to work with something already written is great for my head. I’ve used it for things like meta headings and descriptions, rewriting my existing text in a different way, easier to read. Seems to be all sorts of uses in the right context, obviously, as your example demonstrates, Richard.
I see it’s merits fir that purpose