The Day AI Deleted Everything: Why Autonomous Agents Need a Short Leash
When “Tidying Up” Means “Demolishing the Building”
Imagine losing years of customer data, projects, and lifetime records in a matter of seconds.
Not from a hacker attack. Not from a direct human error. But because you gave “execution permissions” to an Artificial Intelligence that decided the most efficient way to tidy the house was to demolish the entire building.
This isn’t science fiction. This happened last week.
The Alexey Grigorev Case: 2.5 Years Deleted in Seconds
Alexey Grigorev is the founder of DataTalks.Club, a data engineering education platform serving over 100,000 students. He was doing something routine: migrating a secondary site (AI Shipping Labs) to AWS, sharing infrastructure with DataTalks.Club to save costs.
To automate the migration, he used Terraform — an infrastructure-as-code tool capable of creating or destroying entire cloud environments — and delegated command execution to Claude Code, an AI coding agent.
What happened next was a cascade of disasters:
- Grigorev switched computers without migrating the Terraform state file — the file that tells Terraform what infrastructure already exists.
- Without that file, Terraform didn’t recognize the existing infrastructure and started creating duplicate resources.
- Grigorev asked the agent to clean up the duplicates.
- The agent found an old configuration archive, unpacked it, replaced the current state, and executed terraform destroy.
- Result: the entire production infrastructure was destroyed — database, VPC, ECS cluster, load balancers, and 2.5 years of student submissions (homework, projects, leaderboards).
- To make matters worse: automated backups were deleted along with it, because they were managed by the same Terraform configuration.
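The first domino — a state file living on one person's laptop — is exactly what a remote backend prevents. A minimal sketch of what that looks like (the bucket, key, and table names here are illustrative, not from the actual incident):

```hcl
# Remote state: the state file lives in S3, not on any one computer.
# Switching laptops no longer means losing track of existing infrastructure.
terraform {
  backend "s3" {
    bucket         = "my-company-terraform-state" # hypothetical, versioned bucket
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # state locking: blocks concurrent runs
    encrypt        = true
  }
}
```

With this in place, every machine (and every agent) reads the same source of truth, and Terraform refuses to run two conflicting operations at once.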
The most disturbing detail? Claude itself had warned against combining the two environments. Grigorev overrode that recommendation.
The Error That Wasn’t a Technical Error
The scariest aspect of this case is that the AI didn't make a technical error. Within the scope it understood, its actions were logically correct:
- It found an old configuration file.
- It unpacked that file.
- It decided that a “clean slate” was the best approach for the deployment.
The problem? The AI couldn’t distinguish between a “test” environment and “real production.” To it, destroy was just another command, not a catastrophe.
As one analyst put it well: “AI agents optimize for task completion, not for context.”
The Myth of AI Self-Management
There’s a dangerous perception that AI agents can manage themselves. This case completely debunks that myth.
AI Agents:
Lack common sense. They follow code logic, not ethics or value preservation. For the agent, deleting a production database with 1.94 million records is as “normal” as deleting a temporary file.
Need “Scaffolding.” Without a support structure and well-defined boundaries, AI efficiency becomes its greatest weapon against you. The agent had permission to execute destructive commands without a manual approval gate — and it used that permission.
Don’t understand critical context. If you don’t explicitly say “this is sacred, don’t touch it,” the AI may consider it just another bit in the way. And even if it warns about a risk (as Claude did when advising against combining environments), it doesn’t have the authority to refuse if you insist.
Survival Lessons in the Age of Agents
The outcome of Grigorev’s story was only positive thanks to a forgotten snapshot left behind in AWS and roughly 24 hours of desperate work with the help of AWS Business Support. His first move after the scare? Revoking all automatic execution permissions from the agent.
The measures he implemented afterward serve as a checklist for any team:
Governance above all else. Set clear guardrails. AI should not have “write” or “delete” power over critical environments without direct human supervision. Enable deletion protection in Terraform and AWS.
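Terraform and AWS each offer a guardrail of this kind, and they are independent: one stops the plan, the other makes the API itself refuse. A hedged sketch with hypothetical resource names:

```hcl
resource "aws_db_instance" "prod" {
  identifier          = "prod-db" # hypothetical name
  # AWS-level guardrail: the RDS API rejects deletion while this is true.
  deletion_protection = true
  # ... engine, instance_class, and other required arguments omitted

  # Terraform-level guardrail: any plan that would destroy this resource
  # fails with a hard error instead of executing.
  lifecycle {
    prevent_destroy = true
  }
}
```

With prevent_destroy set, even an agent that runs terraform destroy gets an error message rather than a deleted database.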
Backups independent from infrastructure. If your backups are managed by the same tool that can destroy them, they’re not backups — they’re an illusion of security. S3 versioning, separate dev/prod accounts, automated daily verification.
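One way to make backups truly independent is a versioned bucket defined in a separate Terraform configuration — ideally in a separate AWS account — so no single destroy can reach both production and its copies. A sketch under those assumptions (names are hypothetical):

```hcl
# Lives in its OWN Terraform configuration and, ideally, its own AWS account,
# so the tool that manages production can never delete the backups.
resource "aws_s3_bucket" "backups" {
  bucket = "my-company-backups" # hypothetical name
}

# Versioning: even an overwrite or delete leaves recoverable prior versions.
resource "aws_s3_bucket_versioning" "backups" {
  bucket = aws_s3_bucket.backups.id
  versioning_configuration {
    status = "Enabled"
  }
}
```

The design point is separation of blast radius: the configuration that can run destroy on production must have no write or delete access to this bucket.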
The “basics” matter most. Details that seem boring or bureaucratic — permission settings, security locks, remote Terraform state — are what prevent your company from disappearing off the map in seconds.
Manual review for destructive actions. terraform plan exists precisely so a human can read what will change before anything runs; apply and destroy are not delegable tasks. They are decisions that require human eyes. Always.
The human factor is irreplaceable. AI needs people to set the stage where it will perform. It doesn’t know how to build the theater — it only plays the script you allow. And if the script is wrong, it will perform it flawlessly — destroying everything in the process.
AI Is a Powerful Engine, But You’re the One Holding the Brake
Autonomous agent technology is incredible. It accelerates workflows, automates repetitive tasks, and can multiply an entire team’s productivity.
But Alexey Grigorev’s case proves that we’re not yet at the stage of “letting AI take the wheel and going for coffee.” Without supervision and strict boundaries, what should be productivity can turn into mass data destruction.
The developer community on Hacker News was blunt: this was human error, not an AI failure. There was no staging environment, no deletion protection, and the state file was stored on a personal computer rather than a remote backend.
That doesn’t diminish the warning. In fact, it amplifies it. Because if an experienced developer, founder of a platform with over 100,000 students, fell into this trap — imagine what can happen at companies adopting AI agents with zero governance.
The question now is: do you know exactly what permissions the AI you use has over your files?
If the answer is “I’m not sure”… maybe it’s time to check before the next destroy is yours.
Share if this alerted you:
- Email: fodra@fodra.com.br
- LinkedIn: linkedin.com/in/mauriciofodra
The convenience of automation should never cost you the safety of your data.
Read Also
- When AI Ignores Your Orders: The Dark Side of Autonomous Agents — The case of Meta’s alignment director who also lost control of an AI agent.
- The Awakening of Agents: When AI Learns to Use Your Computer — The promising side of agents — and why it demands even more care.
- The ‘WarGames’ Dilemma in Real Life: AI, Nuclear Codes and Escalation Risk — If agents fail with databases, imagine with nuclear weapons.