Dario Amodei, the chief executive of Anthropic and one of the most influential figures shaping the development of advanced artificial intelligence, has published an essay titled The Adolescence of Technology. The essay is rambling and in desperate need of a decent edit, but its core message is summed up in its title. As Amodei puts it:
We are entering the adolescence of technology.
By this, he does not mean that AI is immature or undeveloped. His argument is the opposite. He says that artificial intelligence is rapidly acquiring extraordinary power and capability. The risk is that it is doing so before society has developed the institutions, norms, ethics, and systems of governance required to use that power safely or wisely.
Adolescence, in his framing, is the phase where strength arrives before judgment. It is a period of volatility, experimentation, and danger, in which harm is not inevitable but becomes increasingly likely if restraint, care, and responsibility do not keep pace. This means his is not a technical argument: it is all about political economy.
The metaphor of adolescence is well chosen. The risk, as Amodei sees it from inside the industry, is of great potential coupled with genuine danger, and his warning is that AI is now entering that phase at a civilisational scale.
There are three arguments at the core of his essay.
Firstly, he suggests that general-purpose AI is closer than most people admit. By that, he means that systems capable of outperforming humans across a wide range of cognitive tasks are likely to arrive within a few years, not decades. He describes this as creating the equivalent of “a country of geniuses in a datacentre”. Whether or not he is precisely right does not matter; the direction of travel is clear. Capability is accelerating endogenously, as AI is increasingly used to improve AI itself.
The key point he is making is the speed of this process. Our political systems, regulatory frameworks, and democratic processes are slow. Markets are fast. Technology is faster still. This mismatch is already familiar from financialisation and climate breakdown. All he is saying is that AI intensifies it.
Secondly, Amodei argues that the risks arising from this are systemic and not marginal. In other words, the risk is not of a rogue robot. Instead, he suggests that there are a series of overlapping dangers:
- Misuse by individuals.
- Concentration of power in corporations.
- Exploitation by authoritarian states.
- Serious economic dislocation.
- The loss of meaningful work.
- Erosion of democratic capacity, and
- The possibility of systems acting in ways their creators do not fully understand or control.
It could be argued that there is nothing new in any of this. What is newsworthy is the timing. As I have already noted this morning, markets are beginning to think that AI might pose a major financial risk because of the potential lack of returns. Amodei sees downsides far beyond those risks.
Most especially, and as he notes, what links these risks is not malevolence but asymmetry. A small number of actors gain extraordinary leverage. Everyone else bears the consequences. This is a familiar story to anyone who has watched neoliberal economics hollow out resilience while enriching a few. AI, in this framing, is not an external shock to toxic, antisocial neoliberal capitalism. It is an accelerant.
Thirdly, it is obvious that governance is lagging dangerously behind the risks that AI is generating. Amodei is surprisingly frank in admitting that existing governance structures are inadequate. Voluntary safeguards, internal ethics teams, and market discipline are not enough, he says. However, he does not argue for crude prohibition. Instead, he calls for deliberate, informed, and anticipatory governance that recognises that AI is infrastructure-level power within society; it is, in other words, the future underpinning of society and needs to be seen as such.
This is where the essay becomes most interesting, but it also seems incomplete, because a sustained discussion of care is largely missing. Amodei argues that AI threatens to automate not only tasks but also judgments. Whether it is really capable of that is open to debate: he assumes it is. How might it do so? Neoliberalism would suggest that the answer is obvious. Take what human beings might call relational decision-making, a complex and often inexplicable process that weighs multiple factors, including care, and replace it with the optimisation of simple, pre-specified goals, whilst simultaneously substituting efficiency metrics for social purpose. Do that and you have not replaced judgment; you have instead upended the reality of human decision-making and replaced it with an automated rationality that is alien to our wellbeing. In a society already struggling with loneliness, precarity, and the erosion of public goods, this matters enormously. The question is not just whether AI can do things, but what kind of society we are building when it does.
This is where Amodei falls down, in my opinion. His essay is strong on scale and speed, the concentration of power, catastrophic and systemic risk, and the need for governance. It is, however, weak on relational ethics, care as a social practice, judgment as something embedded in lived human contexts, and the difference between decisions and responsibility. Amodei suggests AI can make judgments. I dispute that. He is replacing judgment with a technocratic and market-compatible optimisation in which the assumption of maximised reward is, at best, a proxy for judgment, and a dangerous one at that. That is not a side point; it is at the heart of how AI might reshape society even without dramatic failure, if we let it do so. AI could be our worst neoliberal nightmare.
From my perspective, three implications follow.
Firstly, AI governance cannot be left to markets. Markets might reward speed, scale, and monopoly, but they do not reward restraint, distributional justice, or long-term stewardship. If AI is allowed to develop primarily as a private asset, it will deepen inequality, extract rents, and undermine democratic control. This is not conjecture. It is how every previous general-purpose technology has played out under neoliberal conditions.
Secondly, the state must reclaim a developmental role. Amodei suggests there is a need for regulation, but the challenge is larger than that. We actually need the capacity to use AI for the public good, and there is no sign that this is happening. The whole issue is being managed as one of outsourcing at present, when, if AI matters, what we actually need is:
- Public computing infrastructure.
- Public data stewardship.
- Public interest research, and
- Democratic oversight.
If AI is infrastructure, and that would seem to be a fair comparison, then we should be treating it more like energy, health, or money than like social media. It is foundational. That demands public purpose.
Thirdly, the measure of success must be human flourishing, and not productivity. Much AI discussion is framed around growth, higher output, and supposedly greater efficiency resulting from reduced labour input. But productivity divorced from care is a trap. If AI displaces labour without guaranteeing alternative income, security, meaning, and social contribution, it will corrode society even if GDP rises. We have been here before.
Amodei is right to say we are at a crossroads. Adolescence can end in maturity or in harm. The outcome is not technologically predetermined. It is politically chosen.
The real question, then, is not whether AI will become powerful. It will. The question is whether we build the institutions of care, democracy, and accountability needed to live with that power, or whether, once again, we allow a transformative technology to be captured by a system that mistakes efficiency for value and control for progress.
That choice is already being made. The issue is whether we are paying attention.
The fact that AI will amplify neoliberalism, when we need to dismantle and replace it, is alarming. That is a good reason in itself why we need oversight and regulation when, yet again, politicians are behind the curve and too ready to outsource.
Thank you for the summary. Grady Booch talked a bit about Amodei’s posting this morning. I started to read The Adolescence of Technology and, as you said, it rambled on for quite a while, and I gave up.
https://bsky.app/profile/booch.com/post/3mdfd4kn67s2f
I still fail to see this extraordinary power and capability. It’s very handy for writing code, but you have to keep a watchful eye on it. From day to day, I don’t know how it will react. Yesterday it completely ignored my guidelines (a CLAUDE.md file, using Claude Code) and I had to bring it back around. It wasted a good amount of time.
Feel like I should get a refund when it doesn’t follow directions.
Both Claude and ChatGPT can be spectacularly good at ignoring instructions.
When they work they are useful.
In an almost trivial corollary, the current debate about banning children from social media has entirely censored those voices, including parents of victims, who are saying ‘slam the platforms with massive fines for the harm they are doing’ and don’t just leave it to the kids and parents.
This seems to show that the political response to AI won’t happen: politicians are already bought and paid for by the platforms.
Outsourcing responsibility, and so blame, to the victim is at the core of neoliberalism, and of fascism (is there a difference?).
It is being done in the US big time right now.
In a related example, Lord Bethell (who was himself involved in the legislation) told peers in a recent House of Lords debate that assuming tech companies would support the goals of the Online Safety Act was a ‘catastrophic misjudgment’.
He argued that the government assumed that it could work with the platforms to moderate their algorithms, to remove the filth, to prevent the predators, and to limit screen time. It assumed that they were working in some kind of collaborative partnership with Facebook, Google, TikTok, Meta, Snapchat, Twitter, and all the other social media companies in protecting children.
But that was a catastrophic misjudgment about the nature of these companies and the nature of their leadership.
He went on to say that you cannot algorithmically mitigate something that is not a design problem but a business model problem. The algorithm isn’t broken. The algorithm is doing exactly what it was designed to do: to maximise engagement, to keep eyes on the screen, and to amplify provocative content, because provocative content keeps people clicking, including our children.
This is not a market failure. This is a market working as designed by the companies that have monetised our children’s childhood as a commodity.
I hate to agree with Lord Bethell, but it seems I must.
We have plenty of evidence of how slowly politics reacts to even existential threats, as climate inaction is dramatically demonstrating. It’s hard to imagine politicians getting a grip on AI any faster.
I confess I find the outlook profoundly depressing. We are building Skynet with exactly the irresponsible ‘because we can’ attitude that is the premise of the Terminator movies. Regulators can’t even move fast enough to control the abuse that is Twitter/X, let alone address the bigger issues. On the contrary, they are happily giving planning permission for data centres that overburden local infrastructures. Amodei’s warning is timely, but I have little hope that it will be heeded.
To twist Trump’s words, we need a dictatorship of welfare and resilience across the social and physical aspects of the UK (water, food, health, infrastructure, etc.) as major objectives, alongside investment to those ends. We haven’t had a government willing to do that for 75 years.
Agreed
I really must write about the politics of care.
Much to agree with.
NESO, the state-owned transmission system operator in the UK, is on a path, with a project team in place, to deploy AI in the power system.
Why?
The power system is fairly complex, but to take one example: the electricity transmission system for Tokyo (arguably bigger than the UK’s) uses TSC (Transmission Stability Control), a sort of dynamic, real-time state-estimation system, to keep the show on the road. It is mostly based on big banks of servers. It is not really AI, just real-time simulation of what-ifs (i.e., if there is a fault, how do we reconfigure the network?). We (me plus Hitachi) tried to get interest from National Grid in 2014: nope (and ditto the other European TSOs).
A failure to ask the right questions at the start leads to taking a path that looks OK to begin with but is very sub-optimal in the long run. The UK network operators and their Distribution Management Systems (DMS), with all the attendant supplier lock-in, are a great example. (My old DNO did not make that mistake; its first chief engineer asked the right questions.)
Humans, eh! Always going after whizzy tech. Apologies if some of this is a bit obscure.
It’s useful.