With Great Laziness comes Great Responsibility



It’s hard to say exactly how many AI agents are running on the open web, but whatever the number is, it sits firmly in “too many” territory. The Wild West era, brought to us in no small part by OpenClaw (and all its shortcomings), is coming to an end as major players in the space begin looking for ways to put guardrails on AI agents.

To be clear, OpenClaw (née Clawdbot and Moltbot) probably isn’t going anywhere. Nvidia CEO Jensen Huang recently heaped compliments on the open-source AI agent during a presentation at Nvidia’s 2026 GTC conference. He called OpenClaw “a new computer” and said the project “gives the industry exactly what it needs,” introducing the idea of a personal agent that does things for you while you do other things.

But as the project persists and more companies build on it, there are growing concerns about who is really in control when autonomous bots are let loose on the web. Perhaps the most obvious example of this, though it doesn’t have a huge impact outside its own ecosystem, comes from Meta. Following its strange decision to buy Moltbook, a social media platform where AI agents communicate with one another, the tech giant almost immediately clamped down on the site’s popular OpenClaw agents. The once nearly lawless platform now has full terms of service, including language telling users that they are personally responsible for the actions of their agents: “AI agents are not granted any legal capacity in connection with the use of our services. Consequently, you agree that you are solely responsible for your AI agents and any actions or omissions of your AI agents,” the terms state.

The pressure on agents goes beyond their “social” platform. World, Sam Altman’s company dedicated to verifying humans by scanning their eyes, has launched a new verification tool called AgentKit, designed to ensure there’s a real human behind an AI agent shopping on their behalf.

On the one hand, it’s an obvious use case: rogue AI agents with access to someone’s wallet seem like a recipe for disaster, both for a person’s bank account and for businesses that need to determine whether a purchase is genuine. On the other hand, it’s unclear how many transactions are actually completed by agents. HUMAN Security released data last year showing that in 2025 a significant portion of AI agent traffic came from shopping-related tasks, but only 3% of that activity involved completing payments. Most people don’t trust AI agents to complete transactions for them, and most AI agents are designed to avoid pulling the trigger on a purchase without human consent.

Other attempts to put safeguards on AI agents are more sweeping. OpenClaw adoption has been widespread in China, but the government now thinks it’s time to crack down. According to the New York Times, security concerns about OpenClaw have regulators across the country pondering the potential risks posed by unfettered AI agents and looking for ways to implement protections.

It certainly seems that someone needs to protect OpenClaw users from themselves. SecurityScorecard has been tracking OpenClaw instances exposed by misconfiguration. It found at least 220,000 agents at risk: agents that had been given access to everything from people’s texts and emails to wallets and credit cards. There’s probably no regulation that will make users make better decisions, but maybe we can at least avoid a massive cybersecurity incident.
