Jared Kaplan, the chief scientist and co-founder of the $180bn (£135bn) US startup Anthropic, said a choice was looming about how much autonomy AI systems should be given to evolve.
The move could trigger a beneficial “intelligence explosion” – or be the moment humans end up losing control.
In an interview about the intensely competitive race to reach artificial general intelligence (AGI) – sometimes called superintelligence – Kaplan urged international governments and society to engage in what he called “the biggest decision”.
Anthropic is part of a pack of frontier AI companies including OpenAI, Google DeepMind, xAI, Meta and Chinese rivals led by DeepSeek, racing for AI dominance. Its widely used AI assistant, Claude, has become particularly popular among business customers.
Kaplan said that while efforts to align the rapidly advancing technology with human interests had so far been successful, freeing it to recursively self-improve “is in some ways the ultimate risk, because it’s kind of like letting AI kind of go”. The decision could come between 2027 and 2030, he said.
AI systems will be capable of doing “most white-collar work” in two to three years: Anthropic’s chief scientist https://t.co/CciDRlxTDH
— Lisa Abramowicz (@lisaabramowicz1) December 3, 2025