This blog was written jointly by Amy Chang, Hyrum Anderson, Rajiv Dattani, and Rune Kvist.
We’re excited to announce Cisco as a technical contributor to AIUC-1. The standard will operationalize Cisco’s Integrated AI Safety and Security Framework (AI Safety Framework), enabling safer AI adoption.
AI risks are not theoretical. We have already seen incidents ranging from swearing chatbots to agents deleting codebases. The financial impact is significant: EY’s recent survey found that 64 percent of companies with over US$1 billion in revenue have lost more than US$1 million to AI failures.
Enterprises are looking for answers on how to navigate AI risks.
Organizations also don’t feel ready to handle these challenges: Cisco’s 2025 AI Readiness Index reveals that only 29 percent of companies believe they are adequately equipped to defend against AI threats.
Yet existing frameworks address only narrow slices of the risk landscape, forcing organizations to piece together guidance from multiple sources. This makes it difficult to build a complete understanding of end-to-end AI risk.
Cisco’s AI Safety Framework addresses this gap directly, providing a more holistic understanding of AI safety and security risks across the AI lifecycle.

The framework breaks down the complex landscape of AI security into a structure that works for multiple audiences. For example, executives can operate at the level of attacker objectives, while security leads can focus on specific attack techniques.
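To make that layering concrete, here is a minimal sketch of how such a two-level view could be represented. Only the technique ID "AITech-1.1" (direct prompt injection) comes from the framework as described in this post; the objective name, grouping, and class design below are illustrative assumptions, not Cisco's actual taxonomy structure.

```python
from dataclasses import dataclass, field

@dataclass
class AttackTechnique:
    technique_id: str  # e.g. "AITech-1.1"
    name: str

@dataclass
class AttackerObjective:
    name: str  # executive-level view
    techniques: list[AttackTechnique] = field(default_factory=list)  # security-lead view

# Hypothetical slice of a layered view: one objective, one technique.
framework_view = [
    AttackerObjective(
        name="Manipulate model behavior",  # placeholder objective, not from the framework
        techniques=[AttackTechnique("AITech-1.1", "Direct prompt injection")],
    ),
]

# Executives scan the objectives; security leads expand each one into techniques.
for objective in framework_view:
    print(objective.name)
    for technique in objective.techniques:
        print(f"  {technique.technique_id}: {technique.name}")
```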
Read more about Cisco’s AI Safety Framework here and navigate the taxonomy here.
AIUC-1 operationalizes the framework, enabling secure AI adoption
When evaluating AI agents, AIUC-1 will incorporate the safety and security risks from Cisco’s Framework. The integration will be direct: risks highlighted in Cisco’s Framework map to specific AIUC-1 requirements and controls.
For example, technique AITech-1.1 (direct prompt injection) is actively mitigated by implementing AIUC-1 requirements B001 (third-party testing of adversarial robustness), B002 (detect adversarial input), and B005 (implement real-time input filtering). A detailed crosswalk document mapping the framework to AIUC-1 will be released once ready, to help organizations understand how to operationally secure themselves.
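As a sketch of the kind of mapping such a crosswalk could encode, the snippet below captures the one example given above (AITech-1.1 to B001, B002, B005). The dictionary shape and the lookup helper are illustrative assumptions, not the official crosswalk format.

```python
# Hypothetical crosswalk entry: framework technique -> mapped AIUC-1 requirements.
CROSSWALK = {
    "AITech-1.1": {  # direct prompt injection
        "aiuc1_requirements": ["B001", "B002", "B005"],
        "notes": "third-party adversarial testing, adversarial input detection, "
                 "real-time input filtering",
    },
}

def requirements_for(technique_id: str) -> list[str]:
    """Return the AIUC-1 requirements mapped to a framework technique, if any."""
    return CROSSWALK.get(technique_id, {}).get("aiuc1_requirements", [])

print(requirements_for("AITech-1.1"))  # ['B001', 'B002', 'B005']
```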
This partnership positions Cisco alongside organizations including MITRE, the Cloud Security Alliance, and Stanford’s Trustworthy AI Research Lab as technical contributors to AIUC-1, collectively building a stronger and deeper understanding of AI risk.
Read more about how AIUC-1 operationalizes emerging AI frameworks here.
