Indian-origin founder mocks German developer after Anthropic’s Claude AI deletes 2.5 years of data: ‘What did you expect’

globaleyenews


A routine server migration using Anthropic’s AI coding assistant went badly wrong, drawing criticism from an Indian‑origin tech founder who mocked the bot’s use on social media.

The incident led to the accidental deletion of 2.5 years of data from a popular online learning platform.

The problem began when Alexey Grigorev, a German developer and founder of DataTalks.Club and AI Shipping Labs, decided to move his sites to Amazon Web Services (AWS) using Claude Code, Anthropic’s AI coding agent. Grigorev asked Claude to run Terraform commands to merge the sites’ infrastructure and save costs.

Terraform is a tool that can automatically build or remove entire server environments. On this occasion, Grigorev forgot to upload an important “state file” that tells Terraform what already exists. Without it, Claude created duplicate resources. Later, when the state file was added, Claude issued a “destroy” command to remove what it saw as unwanted infrastructure. The result was disastrous: Claude wiped the production database for DataTalks.Club, deleting all student submissions, homework, projects, leaderboards and automated backups.
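For readers unfamiliar with Terraform, the sequence above can be sketched with standard Terraform CLI commands. This is a simplified illustration of the failure mode described, not a reconstruction of Grigorev’s actual commands:

```shell
# Terraform records the resources it manages in a state file
# (terraform.tfstate). Run without that file, Terraform believes
# nothing exists yet and plans to create everything from scratch:
terraform plan        # no state: proposes duplicates of live resources
terraform apply       # creates those duplicates alongside the originals

# Once the original state file is put back, a destroy tears down
# every resource the state points at -- including production:
terraform destroy     # removes everything recorded in the state file
```

In other words, the state file is Terraform’s only memory of what it owns; with it missing and then restored mid-migration, “create” and “destroy” both operated on the wrong picture of the infrastructure.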

The mishap also affected infrastructure for Grigorev’s AI Shipping Labs site. Realising what had happened, he contacted Amazon Business Support. The AWS team was able to restore the data, but the recovery took almost a full day.

Many developers said the disaster was preventable and caused by human error, not a flaw in Claude. One of the most viral reactions came from Varunram Ganesh, an Indian-origin founder of the San Francisco software firm Lapis. Ganesh mocked Grigorev’s prompt on X with sarcastic lines: “tells Claude to destroy terraform. Claude destroys terraform. omg Claude destroyed my terraform.” He added: “A lot of people prompt like 6-year-olds and act surprised when the model does exactly what they want, like what did you expect?”

The incident raised concerns about the risks of giving AI agents direct access to live systems without safety checks. Commenters noted that Terraform normally lets users preview changes before applying them, but those steps were skipped, leaving Claude to follow the commands exactly.

After the incident, Grigorev put stricter rules in place. He said he would no longer allow AI agents to run commands without manual approval and would personally review all Terraform plans to prevent similar mistakes.
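The guardrail Grigorev described maps onto Terraform’s standard plan-then-apply workflow. A hedged sketch of what that review gate looks like (the plan filename is illustrative; the commands are standard Terraform CLI):

```shell
# Safer workflow: save a plan, review it by hand, apply only that plan.
# An agent can propose changes, but nothing ships without human review.
terraform plan -out=migration.tfplan   # preview changes; save plan to a file
terraform show migration.tfplan        # human reads every create/destroy line
terraform apply migration.tfplan       # applies exactly the reviewed plan
```

Applying a saved plan file also means the apply step cannot silently drift from what was reviewed: if the infrastructure changed in between, Terraform refuses to run the stale plan.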
