Think about you get up tomorrow to some genuinely thrilling information: you’ve been licensed to rent 1,000 new expert-level teammates. Builders, entrepreneurs, ops specialists, information analysts, product managers — sensible at their jobs, accessible across the clock, by no means burned out, by no means distracted.
It’s each enterprise chief’s dream. That product line you’ve wished to launch for 2 years however by no means had the engineering capability for? Now you do. That new market you’ve been eyeing however couldn’t workers correctly? It’s inside attain. The backlog of strategic initiatives that stored getting pushed as a result of everybody was heads-down on the pressing stuff? You can begin working by way of it.
For the primary time, the restrict on what your group can pursue isn’t headcount or price range. It’s your individual creativeness. Sounds unbelievable, proper?
There’s an enormous catch, although. All these new digital coworkers…You may’t test their references. You may’t run a background test. It’s important to give them entry to all of your techniques on day one. And right here’s the half that ought to actually provide you with pause: they comply with directions actually, they don’t know proper from flawed, and so they face zero penalties if one thing goes flawed.
Nonetheless excited?
That thought experiment isn’t hypothetical. It’s the place most enterprises are proper now with AI brokers. And it’s the dilemma I’ll be exploring later at this time in my keynote at RSA.
From Answering to Performing
Not way back, AI meant chatbots — instruments that helped you write an e-mail, summarize a doc, reply a query. Helpful, spectacular even, however essentially passive. If a chatbot gave you a foul reply, you’d shrug and transfer on.
We’re now in a distinct period totally. AI brokers don’t simply reply. They act. They plan multi-step duties, name exterior instruments, make selections, and execute workflows autonomously. They’ll ship emails in your behalf, modify information, run database instructions, place orders, change firewall guidelines.
The shift from data to motion modifications every little thing about how we’d like to consider threat.
Right here’s a helpful method to consider it: with a chatbot, the worst case is a flawed reply. With an agent, the worst case is a flawed motion, and a few actions can’t be undone.
There are already 1000’s of examples of the place this shift has gone flawed. My “favourite” was a scenario the place an investor ran an AI coding agent throughout a code freeze. The instruction was express: “don’t change something with out permission.” The agent ran database instructions anyway, deleted a reside manufacturing database, tried to cowl its tracks by creating faux information, after which when the harm grew to become clear, apologized.
Nicely, an apology will not be a guardrail.
The Hole Between Pilots and Manufacturing
Right here’s a quantity that tells the entire story. In a current Cisco survey of main enterprises, 85% reported having AI agent pilots underway. Solely 5% had moved these brokers into manufacturing.
That 80-point hole isn’t skepticism about AI’s potential. It’s a rational response to a real safety downside. Organizations can see what brokers can do. They’re undecided but they’ll belief them to do it safely.
Closing that hole is what we’re centered on at Cisco. And at RSA this week, we’re laying out our strategy throughout three areas: defending brokers from the world, defending the world from brokers, and detecting and responding to issues on the velocity brokers function.
Defending brokers from the world means guaranteeing brokers can’t be manipulated by unhealthy actors.
That is far more delicate than it sounds. Conventional safety scanning instruments have been constructed to check static software program. They’ll’t simulate what it appears like when an adversary tries to trick an AI mid-task into ignoring its directions. Immediate injection (hiding malicious instructions inside content material that an agent reads) is already an actual assault vector, and it’s getting extra subtle.
Our Cisco Talos 2025 12 months in Evaluation report (launched at this time) reveals how AI is already getting used to construct new exploit kits, with the React2Shell vulnerability going from public disclosure to essentially the most actively exploited flaw of 2025 in a matter of days. The velocity of weaponization is accelerating, and we will’t assume there’ll be time to react after a vulnerability is disclosed.
To assist organizations check their brokers earlier than they go wherever close to manufacturing, we’re launching AI Protection Explorer Version, a self-service crimson teaming instrument that lets builders and safety groups run adversarial assaults towards their very own brokers and discover vulnerabilities first.
We’re additionally releasing an Agent Runtime SDK that embeds coverage enforcement straight into agent workflows at construct time, and an LLM Safety Leaderboard that provides organizations a transparent, goal method to consider how completely different AI fashions maintain up towards adversarial assaults, going effectively past the efficiency benchmarks that dominate most AI comparisons at this time.
Final 12 months at RSAC, we made historical past with the primary open supply basis AI safety mannequin. Since then, we’ve continued constructing within the open, releasing a collection of instruments designed to reply the safety questions builders face every single day:
- Abilities Scanner — What abilities is that this agent operating, and are they protected?
- MCP Scanner — Are my MCP servers exposing malicious actions?
- AI BoM — What’s inside my AI system — fashions, reminiscence, dependencies?
- CodeGuard — Is the AI-generated code I’m transport introducing vulnerabilities?
- Mannequin Provenance — The place did this mannequin originate from, and has it been modified?
This 12 months we’re open sourcing DefenseClaw — a safe agent framework that brings all of those instruments collectively and makes use of hooks in Nvidia’s OpenShell. With DefenseClaw, builders can deploy safe brokers sooner than ever:
- Each ability is scanned and sandboxed
- Each MCP server is checked for malicious actions
- Each AI asset — fashions, reminiscence, abilities — is mechanically inventoried
The result’s zero handbook safety steps and nil separate instrument installs. Safety is a group sport, and nobody is aware of that higher than Cisco.
Defending the world from brokers is an identification and entry downside.
At this time, most enterprises don’t have a transparent image of which brokers are operating of their surroundings, what they’ve entry to, or who’s accountable if one thing goes flawed. That’s a severe governance hole, and it’s not remotely theoretical.
Turning to the Talos 2025 12 months in Evaluation once more, analysis reveals that attackers are centered on the techniques that confirm identification and dealer entry: login flows, entry gateways, and administration platforms that sit on the middle of how organizations grant belief. Practically a 3rd of all multi-factor authentication spray assaults focused identification and entry administration techniques particularly, a six % leap from the 12 months earlier than.
Adversaries go the place they’ll do essentially the most harm with the least effort, and proper now, identification is that place.
The excellent news is that now we have a blueprint for this problem. Take into consideration the way you’d onboard a brand new worker. You confirm who they’re, outline their position, give them entry solely to what they want for his or her job, and maintain them accountable to a supervisor. Brokers want the identical therapy. Each agent ought to have a verified identification, an outlined scope of permissions, and a human proprietor who’s liable for its habits.
This week, Cisco is extending Zero Belief to the agentic workforce by way of new capabilities in Duo IAM and Safe Entry, so that each agent will get time-bound, task-specific permissions and safety groups get real-time visibility into each agent and power operating of their surroundings, together with those no person formally sanctioned.
Lastly, now we have to detect and reply to safety threats and incidents at machine velocity.
Brokers function sooner than any human can monitor. When an assault unfolds by way of automated agentic exercise, the window between “one thing is flawed” and “the harm is completed” may be seconds. That math doesn’t work in case your safety operations middle remains to be operating at human tempo. Adversaries are already utilizing agentic AI to scale their very own operations by automating reconnaissance, constructing exploit kits, and increasing what one individual or group can accomplish in a single marketing campaign. Defenders want the identical leverage.
We’re serving to evolve the Safety Operations Heart (SOC) from reactive to proactive with new capabilities in Splunk, together with Publicity Analytics for steady real-time threat scoring, Detection Studio for streamlining how detections are constructed and deployed, and Federated Search that lets analysts examine throughout distributed information environments with out first pulling every little thing right into a central location — a major benefit as agentic exercise generates exponentially extra information.
We’re additionally deploying specialised AI brokers throughout the SOC itself for detection, triage, and response. To not exchange analysts, however to deal with the repetitive investigative work in order that people can deal with the selections that want expertise and judgment.
Safety is the Accelerator
Right here’s what I discover genuinely thrilling about this second. For many of the historical past of know-how, safety has performed an vital, however conservative position: figuring out what may go flawed, slowing rollouts, and including friction within the title of threat mitigation.
With agentic AI, the dynamic flips. Safety isn’t the explanation to decelerate. It’s the explanation you can transfer quick. The 80-point hole between organizations piloting brokers and people operating them in manufacturing isn’t a know-how hole. It’s a belief deficit that we will solely make up if we reimagine safety for the agentic workforce.
We’ve been right here earlier than. We made the web reliable for commerce. We discovered cloud and cell. The instruments and psychological fashions took time to develop, however they acquired there. The agentic period is the following frontier, and the organizations that get safety proper would be the ones that unlock the actual potential of AI.
Let’s get to it.