
Coalition for Secure AI Welcomes Palo Alto Networks and Snyk, Advances AI Security with New Publication and Workstream
June 26, 2025In the rush to adopt artificial intelligence, IT organizations are challenged with what this technology means for security. Leaders in the tech industry with the Coalition for Secure AI (CoSAI) took a deeper look at security risk governance and how the environment is changing when it comes to AI.
The Problem: AI Isn’t Just Another Software Tool
Remember when cybersecurity meant protecting against viruses and hackers trying to break into your systems? Those days are over. AI systems create security challenges that traditional approaches simply can’t handle.
These aren’t theoretical risks. They’re happening right now, and they represent just the tip of the iceberg.
Three Ways AI Changes Your Risk Profile
There are three interconnected ways AI transforms organizational risk:
- AI Systems Become Targets
Organizations using AI face entirely new attack methods: prompt injection, data extraction, and model poisoning, that require completely different defenses than traditional cybersecurity.
- AI Becomes a Weapon
Attackers aren’t just targeting AI systems; they’re using AI to supercharge traditional attacks, operating at speeds and scales that human defenders can’t match alone.
- AI Makes Critical Business Decisions
When AI systems approve software, review expenses, or manage infrastructure, a compromised model doesn’t just leak data, it makes bad decisions at business speed and scale.
The Security Frameworks Aren’t Keeping Up
While established security frameworks like NIST, MITRE, and OWASP provide valuable starting points, there’s still more work to be done when it comes to AI-specific security challenges. These frameworks were built for traditional software, and need to adapt and evolve to address AI’s unique realities:
- AI systems can develop new capabilities and vulnerabilities through interactions that are impossible to predict
- Traditional security boundaries break down when AI processes untrusted input as both data and instructions
- Standard versioning concepts fail when identical AI models produce different outputs based on settings or conversation history
What Defenders Need to Do Right Now
There are six critical areas where organizations must adapt their security programs:
- Expand Threat Analysis: Traditional risk assessments miss AI-specific vulnerabilities like model poisoning and behavioral drift
- Build AI-Ready Response Programs: When an AI system starts leaking sensitive data, you can’t just block malicious IPs, the threat operates through natural language
- Secure the AI Supply Chain: AI systems rely on open-source models and datasets that create new supply chain risks
- Establish AI Governance: Organizations need clear answers to questions like “Who’s responsible when an AI system makes a bad decision?”
- Adapt Identity Management: AI systems operate continuously and make autonomous decisions so they need identity frameworks designed for this reality
- Use AI to Defend AI: While AI creates new attack surfaces, it’s also becoming a powerful security tool that can analyze vast amounts of data and detect anomalies humans might miss
It’s Not Just a Technical Problem, It’s a Team Sport
Successfully securing AI systems requires coordination across your entire organization:
- Leaders must bridge the gap between those who build AI capabilities and those who protect them
- Technical teams need to translate security policies into practical system protections
- Operations staff must monitor AI systems for threats that traditional security tools might miss
- Compliance professionals face the challenge of applying traditional frameworks to AI contexts
- Security validation teams need new skills to test AI systems through adversarial approaches
The Path Forward: Four Critical Investment Areas
There are four areas where organizations must invest immediately:
- AI Asset Inventory: You can’t protect what you can’t see and most organizations have limited visibility into their AI systems
- AI-Specific Incident Response: Traditional playbooks fall apart when an AI system is compromised
- Enterprise AI Threat Modeling: Current threat models use software frameworks to understand AI risks, but these need to evolve for emerging threats
- Zero Trust Evolution: Current frameworks assume fixed identities and predictable behaviors, but AI systems make autonomous decisions and shift trust boundaries in real-time
Why This Matters Now
Industry professionals anticipate AI-enabled threats from malicious actors within the next 1-2 years. Organizations that fail to secure their AI investments face both technical vulnerabilities and very real business threats. The time for action isn’t tomorrow, it’s today.
“In security, defenses are only as strong as the weakest point. Comprehensive protection fails if a single vulnerability remains exposed. This reality intensifies with AI systems, where novel attack vectors emerge alongside traditional threats.” – CoSAI.
Read the full paper here: https://github.com/cosai-oasis/ws2-defenders/blob/main/preparing-defenders-of-ai-systems.md
Learn More and Get Involved
The Coalition for Secure AI brings together an open ecosystem of AI and security experts from industry-leading organizations, dedicated to sharing best practices for secure AI deployment. This report represents the collective wisdom of security practitioners, AI developers, and risk management professionals who are working to solve these challenges.
Ready to dive deeper? The full paper provides detailed frameworks, implementation guidance, and practical steps your organization can take today. It’s designed as a living resource that will continue to evolve as new threats emerge and best practices develop.
Want to join the conversation? CoSAI is an open project where industry professionals collaborate on AI security research and product development. Whether you’re a security practitioner, AI developer, or business leader, there’s a place for your expertise in building the secure AI future we all need. Join our GitHub community now.
The AI revolution is here, and it’s transforming everything about how we work and live. But without proper security foundations, that transformation could become a disaster. Don’t wait for the first major breach to start taking AI security seriously. The tools, frameworks, and community support you need are available today.
[Access the full paper and learn more about joining CoSAI’s mission to secure the AI-powered future.]