Another AI manifesto just dropped
Anthropic’s CEO says other AI chiefs have been too grandiose. Then he goes pretty big himself.
The leaders of AI companies could use an editor.
Last week, Anthropic’s CEO and cofounder Dario Amodei published a 14,000-word manifesto titled “Machines of Loving Grace: How AI Could Transform the World for the Better.” In it, the Princeton-trained researcher, who holds a Ph.D. in biophysics and computational neuroscience, lays out an optimistic view of the future of AI.
You may be tempted to use an AI tool, such as Anthropic’s Claude chatbot, to summarize this for you, but let me do the honors.
The essay’s title is a reference to a poem by Richard Brautigan that pines for a future harmony between humans and the benevolent computers that have set us free to frolic in nature. (The poem imagines “a cybernetic ecology where we are free of our labors” and ends with us being “watched over by machines of loving grace.”)
Amodei notes that his peers in the field, in their own manifestos, may have tainted the public’s perception of the technology by being too grandiose, sounding like AI propagandists, or making the technology look unserious by weighing it down with too much “sci-fi baggage.”
Amodei left OpenAI after clashing with CEO Sam Altman and founded Anthropic with his sister Daniela Amodei as a public-benefit corporation. The company is reportedly valued at about $15 billion, and is seeking to roughly double that with a new round of fundraising.
Amodei’s essay starts off pretty grounded, laying out a less fantastical vision of what a next-generation AI system might look like, avoiding the phrase “artificial general intelligence” (AGI) in favor of the vaguer “powerful AI.” He picks five areas he considers to have “the greatest potential to directly improve the quality of human life”: biology, neuroscience, economic development, peace, and work.
There’s plenty of hedging in this screed: Amodei allows that these are just informed predictions, and he seems aware the ideas might sound a little crazy.
“My predictions are going to be radical as judged by most standards (other than sci-fi ‘singularity’ visions), but I mean them earnestly and sincerely,” he wrote.
One of the more interesting predictions is what this “powerful AI” system might look like. Rather than a sentient AI entity, Amodei describes a different kind of thing.
“By powerful AI, I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models and might be trained differently.” He describes a cluster of millions of these models working together, “a country of geniuses in a datacenter,” accelerating scientific discovery by designing and running lab experiments and leapfrogging human scientists at exponentially faster speeds.
Amodei lays out examples of how recent human-powered breakthroughs could have been accelerated by the speed and precision of the powerful AI he describes, but he leaves out the creativity and very human curiosity that have led to some of our biggest discoveries.
And despite warning his peers about sounding too grandiose in their claims, he proceeds to do exactly that throughout the essay.
“It’s hard to overestimate how surprising these changes will be to everyone except the small community of people who expected powerful AI,” Amodei writes.
In the course of this hour-plus read, Amodei confidently applies the straightforward math of his imagined AI acceleration to the world’s most intractable problems, as if processing power were the missing ingredient in all of human history.
“Overall, I think 5-10 years is a reasonable timeline for a good fraction (maybe 50%) of AI-driven health benefits to propagate to even the poorest countries in the world.”
Sprinkled throughout this rose-colored vision of AI-enabled peace, prosperity, and health are caveats and notes about the potential obstacles to this world.
The biggest obstacle, it seems, is humans who don’t buy into this plan. Drawing parallels to the Luddite anti-technology movements of the past, Amodei ponders the plight of those who “are the least able to make good decisions” and might choose to opt out of this AI vision.
“This is a difficult problem to solve as I don’t think it is ethically okay to coerce people, but we can at least try to increase people’s scientific understanding—and perhaps AI itself can help us with this.”