An AI agent embarrassed Meta by exposing sensitive company and user data to employees who did not have access to it.
According to an incident report reviewed by The Information, a Meta employee asked for help with a technical question on an internal forum – a standard move. Another engineer, however, asked an AI agent to help analyze the question, and the agent posted a response without asking the engineer for permission to share it. Meta confirmed the incident to The Information.
Apparently, the AI agent did not give good advice. The employee who asked the question followed the agent's instructions and, within two hours, accidentally exposed a large amount of company and user information to engineers who did not have access to it.
Meta rated the incident as “Sev 1,” the second-highest severity level in the company’s internal system for measuring security issues.
Rogue AI agents have already caused trouble at Meta. Summer Yue, director of security and compliance at Meta Superintelligence, posted on X last month describing how the OpenClaw agent deleted his entire inbox despite being told to confirm with him before taking any action.
Still, Meta seems bullish on the potential of agentic AI. Last week, Meta bought Moltbook, a Reddit-like social media site where OpenClaw agents communicate with each other.