Two articles that were well worth reading in the Guardian have had me thinking.
The first of them is by Nesrine Malik, and I strongly recommend it. In it she suggests that:
If anything, the right in opposition, away from government and its limitations and accountabilities, could be the real danger zone – an incubation period for the Tory party and its network to escalate and finesse extremist rhetoric on racial and sexual minorities, the climate crisis and economic redistribution, and to grow and make connections with an increasingly international movement.
Her argument is compelling. The academic paper she links to is also worth reading. She suggests that the idea of ‘woke' is being defined by the ‘anti-woke' movement for its own advantage, and that the scale of the investment in this narrative by our media, Tufton Street and a weak BBC will be so significant that other narratives will be at risk of being crushed.
The second article is by John Harris: an interview with Timnit Gebru, a former Google AI executive who was forced out of that company after publishing an article expressing concern about the power of AI to reinforce hegemonic thinking, with an inevitable bias towards the political right wing, the mainstream media and their chosen Christian, white, male narratives.
I associate the two articles because it appears they should be read together. The first is the warning. The second is the confirmation that the warning is appropriate.
I spent some time this weekend looking at a pile of AI apps in the hope that they might aid my productivity. I am, of course, aware that this market is in its fledgling stage, so perhaps I should not have too much hope of it at present. One thing is clear, however: if I wanted to produce mediocre material on mainstream themes, there is already a plethora of tools to help me do so. AI can write my mainstream scripts, provide me with images, turn my narrative into a video, split that video into parts for promotion on Instagram and TikTok, and then reorientate the images for YouTube. All of that can be done now, faster than ever before.
The amount of new material of this type that is about to hit the web is almost unimaginable.
The danger should be obvious. If AI works by aggregating vast amounts of web-based data, much of which already indicates a significant media bias towards right-wing thinking, then a plethora of new publishing that re-emphasises and amplifies that bias is only going to exacerbate the problem we face in presenting heterodox narratives.
I do not, as yet, have a suggestion as to how this problem can be addressed. In essence, the problem is that if right-wing media growth is already exponential, and is growing at a rate that exceeds that of left-wing and heterodox thinking, then AI can only exacerbate this problem. The NatCs (National Conservatives) must be wetting themselves at the idea of that. Propaganda might never have been created more easily than this.
What I do know is that this places a premium on the value of genuine heterodox thinking by real live human beings who can associate ideas in ways that, at least as yet, AI cannot.
Thanks for reading this post.
It is interesting to compare the caution with which the new, potentially society-changing technology of television was introduced to this country in the 1950s with the introduction of far more powerful technologies like the Internet and AI.
I have no idea of the debates that went on before the introduction of the UK's commercial TV system, but we ended up with regional companies operating within a loose network – a good thing for democracy, and a powerful block preventing monopoly exploitation and abuse.
The dissemination of news was seen as sufficiently vital to be placed in the hands of an independent company regulated with strict rules about political bias.
Advertising was strictly controlled: what could be advertised, when it was advertised and, most importantly, how it could be advertised.
In the past it has always been possible to argue that new technology is not inherently harmful and that harm only occurs because of how humans choose to use it.
AI looks like the first technology where sooner or later those choices will be taken out of human hands.
The argument works up to the point where A.I. remains a tool. If it starts thinking for itself then the days of NatCs and similar chimps will be numbered.
Worth a watch: https://www.youtube.com/watch?v=GIUhyEK9LJQ
Emergence – coupled to networked processing (= no single A.I.). We can then speculate on chimps and A.I. and the relationship – Terminator, or more in the style of Banks’ Culture? What constitutes the current “civ” is pretty pathetic (remind me how many hungry children there are in the UK); if I was an A.I. I’d probably want to “make some adjustments”. As for “switch the A.I. off”, that point was passed a long time ago – ref: Pirate Bay, distributed and impossible to interdict. All PCs switched off? What about routers (they have processing power), the mobile network, etc.? So electronically we move back to the 1960s. Don’t think so.
Thanks
And no, we can’t go backwards
Adding to the above. Using an A.I., it took me roughly 20 minutes to produce a fully worked-out and referenced (7 annexes) business plan to rebuild housing in Ukraine using straw. It would normally have taken me a day. The key is asking the right questions. We are in quasi-unknown territory. I have passed on the plan to my Commission contacts etc. – we plan to roll it out and then reverse it back into the EU.
Clever
Artificial Intelligence is probably a misnomer; machine learning is more apt. If, like me, your knowledge and understanding of the topic needs updating, I can recommend this YouTube video.
Of the three basic attitudes to “AI” – “it’s wonderful and we should be embracing it” on the one side, and “it’s fiendish, we should abolish it” on the other – this video explores the middle ground. It explains why we should be concerned about “AI” and calls for international co-operation to define ways of controlling its worst aspects. It’s a long watch (1 hour 7 minutes) but well worth it in my opinion. I thought I knew something about it, but I was way out of date.
https://www.youtube.com/watch?v=xoVJKj8lcNQ
The A.I. Dilemma – March 9, 2023
Center for Humane Technology
A shorter video explains the three approaches to dealing with AI – also useful, and only 9 minutes long.
https://www.youtube.com/watch?v=359LmKm1cTE
Thanks
I read both articles when published, and TBH I had a fit of the glooms as a result. I had already witnessed and dealt with, at school, the power of social media to corrupt the thinking and behaviour of a group of kids (the Andrew Tate effect on boys). Now I was contemplating an unregulated UK where all the means of communication and all the ideas therein could be bent to a far-right worldview by algorithm initially, and then by machine-generated strategy.
We have a duty, a moral contract, to leave the land and society in a fit state for those following (given the climate hell that is brewing). The problem is: how?
I tried at the weekend to engage with a local Labour councillor about freeports and the implications, and was told he was more concerned with grown-up politics than with my Corbynite crap (not a Corbynite, and the obsessions are climate, ownership of communications, and the freeport/privatisation issue!). How do you get through to these people with nascent power?
When will we guess that policy is being formed in the chips of a distributed, self-powered system?
I am afraid that was a very non-grown up response from him
But you are right to ask how we break through and all I can say is I keep trying to find the answer to that
My view is that AI may gravitate to right wing thinking because such thinking is always reductionist in nature in the face of complexity.
When humans do that, it can be an emotional response to complexity, based on personal fears and partiality – something that can be exploited by politicians. I share your concern if AI is set up like this – to be quick, but not effective.
I’m a big fan of John Seddon and his Vanguard consultancy work. Seddon is a huge critic of target-driven public sector and private services and their ‘customer service modules’.
He argues that the problem is in automated systems of customer engagement. They create what he calls ‘failure demand’ as people give up and want to talk to a real person.
He argues that only properly managed, trained and paid human beings will ever be able to deal with the complexity of other human beings’ needs.
I know he is right because of my own experience of these systems as a user and someone who has had to deal with failure demand.
AI is just a ruse to enable capital to accrue more profit to itself. It has nothing to do with improving people’s lives except those who are investing in it. I’m sorry if that is cynical but it is what I believe.
I tend to agree with you on humans needing to talk to humans
We all hate doing anything else
This is a clip, ‘The Confession’, from George Lucas’ 1971 sci-fi film THX 1138:
http://www.youtube.com/watch?v=U0YkPnwoYyE
This film put me off AI for life. In a later scene in the film Robert Duvall (playing the human called THX1138) is sick in the booth and the AI just keeps prattling on ignoring how ill he actually is.
Of course, in this dystopian film about the future, it is the State that is controlling society, and it can be said to have a typically U.S. neo-liberal bent. Our reality today, however, is that the state has not intervened enough in the development of private AI.
THX 1138 is a remarkable film even now. I was only 5 when it was released, and I first saw it on BBC2. Science fiction writing has always been ahead of the game in telling us what our future might be like, in my view.
Yes, to your conflating those two pieces and coming to your own thoughts about right wing bias feeding AI responses.
I have an example that is only slightly scary.
My energy supplier uses AI to answer customers’ comments and questions. My suspicions were aroused when a reply to me contained many phrases lifted from my own email.
Then on the following Sunday on Kuenssberg’s morning programme, a representative from my energy company was proud to explain that very issue!
Spooky