The Rising Tide

When the Builders Start Sounding the Alarm

From Evangelists to Whistleblowers

Dean Barber
Feb 14, 2026

The most unsettling development in artificial intelligence is not how fast the models are improving. It is how the people closest to their creation are beginning to talk.

In recent weeks, a quiet but unmistakable shift has taken place inside the world’s most advanced AI companies. Researchers tasked with keeping these systems safe are leaving — and instead of disappearing into new jobs or academic posts, many are speaking publicly, in unusually stark language, about what they believe is coming next. The tone is not hype or rivalry. It is a warning.

“The world is in peril,” said the former head of safeguards research at Anthropic as he exited the company, according to reporting cited by multiple outlets. A departing OpenAI researcher echoed the concern, saying the technology has “a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”

These are not critics on the sidelines. They are people who helped build the systems now reshaping the global economy. Their concerns are emerging just as AI models cross a new threshold: not merely responding to prompts, but increasingly building tools and improving systems with limited human intervention.

OpenAI disclosed that its most recent model participated in training its own successor. Anthropic’s experimental “Cowork” tool went viral after it became clear the system had largely constructed itself. For optimists, this is proof of accelerating productivity. For others, it is evidence that human oversight is beginning to lag behind capability.
