Anthropic Warns Fully AI Employees Are a Year Away

Anthropic predicts AI-powered virtual employees will start operating within companies in the next year, introducing new risks such as account misuse and rogue behavior. Axios reports: Virtual employees could be the next AI innovation hotbed, Jason Clinton, the company's chief information security officer, told Axios.

Agents typically focus on a specific, programmable task. In security, that's meant having autonomous agents respond to phishing alerts and other threat indicators. Virtual employees would take that automation a step further: These AI identities would have their own "memories," their own roles in the company and even their own corporate accounts and passwords. They would have a level of autonomy that far exceeds what agents have today.

"In that world, there are so many problems that we haven't solved yet from a security perspective that we need to solve," Clinton said. Those problems include how to secure the AI employee's user accounts, what network access it should be given and who is responsible for managing its actions, Clinton added.

Anthropic believes it has two responsibilities to help navigate AI-related security challenges. First, to thoroughly test Claude models to ensure they can withstand cyberattacks, Clinton said. The second is to monitor safety issues and mitigate the ways that malicious actors can abuse Claude.

AI employees could go rogue and hack the company's continuous integration system -- where new code is merged and tested before it's deployed -- while completing a task, Clinton said. "In an old world, that's a punishable offense," he said. "But in this new world, who's responsible for an agent that was running for a couple of weeks and got to that point?" Clinton says virtual employee security is one of the biggest security areas where AI companies could be making investments in the next few years.

Read more of this story at Slashdot.

Apr 22, 2025 - 22:29