I cannot have been alone in being just a little amused at the news that Google's new artificial intelligence chatbot got an answer wrong in the advert used to promote it.
AI is meant to be the next big thing in technology. Maybe it is. But I cannot help feeling concerned about this development.
That concern is not based on the fact that artificial intelligence can make mistakes. It can only be as good as the data provided to it, much like most human beings, and there is a great deal of straightforwardly wrong information available in the world. Just look at the Wikipedia page about me, which is absolutely riddled with errors. Given that, mistakes are bound to happen.
Instead, three things worry me.
The first is that just as most people have now forgotten how to use a pen, many will also forget how to write, because a machine can do it for them. The loss of the pen was to some degree inconsequential: it was just a tool. The loss of the ability to write a coherent argument that sets out to achieve an objective might be of much greater consequence.
This is the fundamental skill that much of our education seeks to develop, even if most politicians do not seem to be aware of the fact. As I know from observing students, it is something that many find hard to master, but which is worth achieving. If we provide a machine alternative, we diminish the skills available to many by removing the motivation for acquiring them. Worryingly, I see no alternative skill that will replace this one. The consequence might be a deskilled world in which even greater inequality becomes prevalent. Of course that worries me.
Second, appraising this skill will now become much harder. I have long believed that essay writing (or the preparation of reports and similar material) is the best way to test the abilities of many people, and I used essays rather than exams as the basis of appraisal in the courses that I taught. How can this survive when a student might use an AI bot to assist in the preparation of their work, or to rewrite it before its presentation? Are we really going to have to resort to exam-based testing for everyone, which is by no means suitable for all?
Third, the Google bot's error is indicative of the ease with which AI writing could be used to spread disinformation. The Orwellian task of rewriting history would be so much easier with an AI bot that could be programmed to exclude offending data and include new, false information. Worse, once that was done, promulgating it would be a relatively easy task. The potential use of these bots for propaganda is very worrying.
Am I a fan of this technology as a result? No, I am not. And it is going to take a lot to convince me that its effects will be benign.
Amused?
I wish.
When are people going to realise that the parameters of AI are actually created by human beings and infused with their creators’ dreams, aspirations and prejudices, fears and judgements?
What AI should be called is ‘Partiality Intelligence’. I kid you not.
Like a lot of stuff from Tech, it’s bullshit. It is complementary – never a replacement – for good old-fashioned human judgement. At least with other human beings you can plead your case and win them over. You will never be able to do that with an ‘AI’ robot, because you’ll never be able to read or understand the bent, biased code it has been instilled with.
The Tech industry needs bringing to heel – and right quick. They are selling us a lie.
Less subtle than my version, but fair
A serious problem for writing bots is sourcing. If they make things up (or even in the odd case where they actually synthesise facts correctly), you have no idea what the sources were, so you cannot verify or back up the claim. And when a bot regurgitates other people’s copyrighted work verbatim, you have no way to credit them, which is highly unethical. Either way, the trail of information and sources that has been vital to non-fiction writing, scientific reporting and journalism is lost.
As long as we collectively insist on proper sourcing in non-fiction writing, that could be one good way of curbing the misinformation damage done, and the work of reverse-engineering sources might be so great that using “ai” in this way becomes not worth it.
As to how to fix the other issues, I’ve no idea at present. As someone in tech, I speak about my views and concerns to anyone who will listen (many similar concerns crop up when we start to talk about “ai” writing program code). Voicing concerns like this is a good start. Not sure what’s next.
Thanks
That’s where AI is so cunning, as is the rest of the AI/Tech industry – how subtle they are – as noted by Zuboff and Cadwalladr.
The whole industry needs to learn how to talk straight, in my view, because it doesn’t.
All written AI output should be checked for plagiarism using https://www.copyscape.com/
AI could offer different, unusual and inhuman perspectives on problems and so could be very useful in problem solving. There’s no way they should be considered as any form of authority though. Good grief, just look at the appalling mess Facebook’s moderation algos have made of Facebook (widely regarded as broken these days by those who use it).
I asked GPT for an opinion on a podcast I listen to, and its ethical guidelines overrode its factual search, giving a comprehensively wrong answer.
AI answers are so dependent on the parameters set that it cannot give unbiased ones.
The link I tried to provide should read
“IMF support for Osborne’s austerity measures”
On the blog about your glossary. Extremely bad day at the office.
As someone in the HE sector, I can say we are looking at this technology from two perspectives: how, if at all, can we use it to advantage in the learning process (teaching); and how does it impact assessment?
With regard to assessment, one strategy might be to use an automated writer to generate a draft answer and have the student critique the draft so produced. In more technical areas such as mine, don’t ask questions that can be looked up on the internet (which we have got used to doing anyway with pandemic-induced online exams); instead, give the students problems to solve and ask them to discuss the strengths and weaknesses of their solution methods. I ran a question from a recent online exam through ChatGPT3 and its answer was mostly poor and categorically wrong in about half of it. I doubt it would have got a passing mark.
With regard to teaching, we can use these tools to generate draft teaching materials (e.g., summaries of some area) and then refine and correct them for the final product, so I think they will essentially be productivity aids. Using them to generate teaching materials without subsequent human judgement will not produce quality materials – but of course they could be used to generate a poor-quality online course that might miseducate a lot of people.
Interesting
Thanks
I tried Caktus AI. The input phrase was “European Electricity Market Reform” (a subject I know a bit about).
The output was dangerous.
First, it referenced out-of-date material (from before the growth of renewables); second, the “essay” made incorrect assertions and was cliché-ridden and trite.
But to the inexpert eye it looked passable. To an undergraduate needing to knock up a fast essay on the subject, it would seem the real deal.
Thus the danger is that AI supports the status quo – conventional thinking and conventional wisdom.
It is certainly artificial – but intelligent? Hardly, given it is simply repeating out-of-date nonsense.
You express my concern entirely.
The real problem is that people believe everything that comes out of a computer and don’t question it. I saw this from the early 80s onwards, with spreadsheets that didn’t add up.
You see it with content on the Internet: if it is there, it must be true.
It is also an age thing. Those of us who are older tend to question the veracity of what we see more.
I think your last point may be true.
What I see is that my children tend to trust the content they are provided with more than they should. It is very difficult to get them to think more critically about the system that provides the content. The subject matter, sure – we can debate and discuss history, politics and so on – but not the way it is delivered. They just don’t see a problem.
At 57, I’m someone classed as ‘digitally adaptive’, whereas my kids are classed as ‘digitally native’. The digital natives are very vulnerable, in my view.
This seems to be a very common problem.
If it’s on Instagram it apparently has to be right.
I had to correct ChatGPT three times when it told me how to solve a quadratic (including an error in its own worked example), and another three times when working out the highest common factor of two numbers.
It certainly couldn’t sit a GCSE in maths right now.
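For anyone who wants to check such answers themselves, both calculations are easy to script. Here is a minimal sketch in Python – the function names and example numbers are my own, purely for illustration – using the quadratic formula and Euclid’s algorithm for the highest common factor:

import math

def quadratic_roots(a, b, c):
    # Real roots of a*x^2 + b*x + c = 0 via the quadratic formula
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()  # no real roots
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

def hcf(m, n):
    # Highest common factor via Euclid's algorithm
    while n:
        m, n = n, m % n
    return abs(m)

print(quadratic_roots(1, -5, 6))  # x^2 - 5x + 6 = (x - 2)(x - 3), so (3.0, 2.0)
print(hcf(48, 36))                # 12

Euclid’s method is well over two thousand years old and never gets the answer wrong; a chatbot that predicts plausible text offers no such guarantee.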
Fascinating
Michael, isn’t the point of such AI that it learns? Part of the learning process is making mistakes, which won’t (or shouldn’t) be repeated if the pupil, whoever or whatever they are, is any good.
I suspect that with use by very large numbers of people these AI systems will rapidly improve so that simple errors like those you note won’t happen. Look how rapidly AI has progressed to outplay human chess players, and then Go players.
From Wikipedia: “AlphaGo is a computer program that plays the board game Go.[1] It was developed by DeepMind Technologies[2] a subsidiary of Google (now Alphabet Inc.). Subsequent versions of AlphaGo became increasingly powerful, including a version that competed under the name Master.[3] After retiring from competitive play, AlphaGo Master was succeeded by an even more powerful version known as AlphaGo Zero, which was completely self-taught without learning from human games. AlphaGo Zero was then generalized into a program known as AlphaZero, which played additional games, including chess and shogi. AlphaZero has in turn been succeeded by a program known as MuZero which learns without being taught the rules.”
And: “At the 2017 Future of Go Summit, the Master version of AlphaGo beat Ke Jie, the number one ranked player in the world at the time, in a three-game match, after which AlphaGo was awarded professional 9-dan by the Chinese Weiqi Association.”
AI that can learn can now play chess and Go to a higher standard than any human. So I wouldn’t be so sure that such AI, applied to maths or essay writing, won’t eventually do the same.
Those of us at school in the 70s may recall the huge arguments around the ‘threat’ posed by electronic calculators. It was certainly a huge deal for my maths teacher; my class was the last he taught how to use a slide rule. Too young? More here: https://hackeducation.com/2015/03/12/calculators
AI and other technology have always seemed to me less of a problem than the ‘unfair’ advantage some students have always had over others with regard to access to real human intelligence, be that in better-funded schools or in time invested by parents, tutors and coaches outside the school environment.
In the mathematics and financial world, I think we all see calculators and PCs as a massive advantage. In my A-level days, all we could take in with us were log tables. Later, working in the insurance world, in order to sell final salary pension schemes we had to have a full understanding of the actuarial calculations, and I remember sitting three-hour financial services exams without calculators to demonstrate this knowledge.
By the late 1980s my secretary could produce these figures in 5 minutes on her PC. Later these quotes had to be accompanied by a pre-approved sales letter, possibly to avoid the risk of mis-selling. This was the beginning of the journey which we are discussing here.
Log tables
That takes me back….