The problem with AI is it just does not care

Dario Amodei, the chief executive of Anthropic and one of the most influential figures shaping the development of advanced artificial intelligence, has published an essay titled The Adolescence of Technology. The essay is rambling and in desperate need of a decent edit, but its core message is summed up in its title. As Amodei puts it:

We are entering the adolescence of technology.

By this, he does not mean that AI is immature or undeveloped. His argument is the opposite. He says that artificial intelligence is rapidly acquiring extraordinary power and capability. The risk is that it is doing so before society has developed the institutions, norms, ethics, and systems of governance required to use that power safely or wisely.

Adolescence, in his framing, is the phase where strength arrives before judgment. It is a period of volatility, experimentation, and danger, in which harm is not inevitable but becomes increasingly likely if restraint, care, and responsibility do not keep pace. Amodei's warning is that AI is now entering this phase, meaning this is not a technical argument: it is all about political economy.

Amodei's metaphor of adolescence is well chosen precisely because his argument is about volatility. The risk, as he sees it from inside the industry, is one of great potential coupled with genuine danger, now playing out at a civilisational scale.

There are three arguments at the core of his essay.

Firstly, he suggests that general-purpose AI is closer than most people admit. By that, he means that systems capable of outperforming humans across a wide range of cognitive tasks are likely to arrive within a few years, not decades. He describes this as creating the equivalent of “a country of geniuses in a datacentre”. Whether or not he is precisely right does not matter; the direction of travel is clear. Capability is accelerating endogenously, as AI is increasingly used to improve AI itself.

The key point he is making is about the speed of this process. Our political systems, regulatory frameworks, and democratic processes are slow. Markets are fast. Technology is faster still. This mismatch is already familiar from financialisation and climate breakdown. All he is saying is that AI intensifies it.

Secondly, Amodei argues that the risks arising from this are systemic and not marginal. In other words, the risk is not of a rogue robot. Instead, he suggests that there is a series of overlapping dangers:

  • Misuse by individuals.
  • Concentration of power in corporations.
  • Exploitation by authoritarian states.
  • Serious economic dislocation.
  • The loss of meaningful work.
  • Erosion of democratic capacity, and
  • The possibility of systems acting in ways their creators do not fully understand or control.

It could be argued that there is nothing new in any of this. What is newsworthy is the timing. As I have already noted this morning, markets are beginning to think that AI might pose a major financial risk because the anticipated returns may never materialise. Amodei sees downsides that go far beyond those risks.

Most especially, and as he notes, what links these risks is not malevolence but asymmetry. A small number of actors gain extraordinary leverage. Everyone else bears the consequences. This is a familiar story to anyone who has watched neoliberal economics hollow out resilience while enriching a few. AI, in this framing, is not an external shock to toxic, antisocial neoliberal capitalism. It is an accelerant.

Thirdly, it is obvious that governance is lagging dangerously behind the risks that AI is generating. Amodei is surprisingly frank in admitting that existing governance structures are inadequate. Voluntary safeguards, internal ethics teams, and market discipline are not enough, he says. However, he does not argue for crude prohibition. Instead, he calls for deliberate, informed, and anticipatory governance that recognises that AI is infrastructure-level power within society; it is, in other words, the future underpinning of society, and it needs to be treated as such.

This is where the essay becomes most interesting, but it also seems incomplete, because a sustained discussion of care is largely missing. Amodei argues that AI threatens to automate not only tasks but also judgments. Whether it is really capable of that is open to debate: he assumes it is. How might it do so? Neoliberalism would suggest that the answer is obvious. You take what human beings might call relational decision-making, which weighs multiple factors, including care, and you replace that complex and often inexplicable process with the optimisation of simple, pre-specified goals, whilst simultaneously substituting efficiency metrics for social purpose. At that point you have not replicated judgment; you have upended the reality of human decision-making and replaced it with an automated rationality that is alien to our wellbeing. In a society already struggling with loneliness, precarity, and the erosion of public goods, this matters enormously. The question is not just whether AI can do things, but what kind of society we are building when it does.

This is where Amodei falls down, in my opinion. His essay is strong on scale and speed, the concentration of power, catastrophic and systemic risk, and the need for governance. It is, however, weak on relational ethics, care as a social practice, judgment as something embedded in lived human contexts, and the difference between decisions and responsibility. Amodei suggests AI can make judgments. I dispute that. He is, in effect, replacing judgment with a technocratic and market-compatible optimisation in which the maximisation of a specified reward is, at best, a proxy for judgment, and a dangerous one at that. That is not a side point; it is at the heart of how AI might reshape society even without dramatic failure, if we let it do so. AI could be our worst neoliberal nightmare.

From my perspective, three implications follow.

Firstly, AI governance cannot be left to markets. Markets might reward speed, scale, and monopoly, but they do not reward restraint, distributional justice, or long-term stewardship. If AI is allowed to develop primarily as a private asset, it will deepen inequality, extract rents, and undermine democratic control. This is not conjecture. It is how every previous general-purpose technology has played out under neoliberal conditions.

Secondly, the state must reclaim a developmental role. Amodei suggests there is a need for regulation, but the challenge is larger than that. We need the capacity to use AI for the public good, and there is no sign of that capacity being built. At present the whole issue is being managed as an exercise in outsourcing when, if AI matters, what we actually need is:

  • Public computing infrastructure.
  • Public data stewardship.
  • Public interest research, and
  • Democratic oversight.

If AI is infrastructure, and that seems a fair description, then we should be treating it more like energy, health, or money than like social media. It is foundational. That demands public purpose.

Thirdly, the measure of success must be human flourishing, and not productivity. Much AI discussion is framed around growth, higher output, and supposedly greater efficiency resulting from reduced labour input. But productivity divorced from care is a trap. If AI displaces labour without guaranteeing alternative income, security, meaning, and social contribution, it will corrode society even if GDP rises. We have been here before.

Amodei is right to say we are at a crossroads. Adolescence can end in maturity or in harm. The outcome is not technologically predetermined. It is politically chosen.

The real question, then, is not whether AI will become powerful. It will. The question is whether we build the institutions of care, democracy, and accountability needed to live with that power, or whether, once again, we allow a transformative technology to be captured by a system that mistakes efficiency for value and control for progress.

That choice is already being made. The issue is whether we are paying attention.
