
AWS re:Inforce Sessions Highlight Secure AI and Industry Collaboration
May 22, 2025
Coalition for Secure AI Welcomes Palo Alto Networks and Snyk, Advances AI Security with New Publication and Workstream
June 26, 2025The bottom line: Traditional security controls cannot protect AI systems. Your organization needs new defenses across four critical layers: data, models, applications, and infrastructure. The risk of not implementing these may be invisible compromises that could go undetected for months.
The Coalition for Secure AI’s latest paper identifies the essential controls that industry leaders are implementing today to secure their AI deployments. Here’s what you need to know from some of the top experts in the industry.
Why AI Security Is Different…And More Complex
AI systems learn their behavior rather than executing predetermined logic. This fundamental difference creates attack vectors that don’t exist in traditional software:
- Invisible backdoors: Poisoned training data can embed malicious behaviors that activate only under specific conditions
- Model manipulation: Compromised AI models appear to function normally until triggered by adversarial inputs
- Cross-tenant data leaks: Poor isolation in vector databases can expose sensitive customer information across different clients
- Supply chain opacity: Most organizations cannot trace the provenance of their training data or verify the integrity of third-party models
These risks often manifest in the content and behavior of AI systems (not in traditional code vulnerabilities) making them nearly impossible to detect with conventional security tools.
The Four-Layer Security Framework
Our analysis reveals that AI supply chain security must address four interdependent layers:
1. Data Layer
The foundation of AI security starts with data integrity. Organizations must track the provenance of every data sample, document all transformations, and maintain versioning with the same rigor applied to source code.
2. Model Layer
AI models require cryptographic signing at every stage: from pre-training checkpoints through production deployment. Models must be version-controlled and scanned for manipulation before each use.
3. Application Layer
AI models are integrated into real-world apps through APIs, plugins, and orchestration layers. Each integration point introduces new risks, particularly in open ecosystems where third-party components may be compromised.
4. Infrastructure Layer
Training pipelines, server environments, and underlying cloud infrastructure must be continuously monitored and auditable.
Six Essential Controls for AI Security
Based on extensive collaboration with security leaders across the coalition’s member organizations, these controls represent the minimum viable defense for AI systems:
1. Implement Data Provenance Tracking
What to do: Create an audit trail for every piece of training data, including source, transformations, and weighting decisions. Why it matters: Essential for forensic analysis when security incidents occur and model trustworthiness.
2. Require Cryptographic Model Signing
What to do: Sign model weights at every checkpoint and verify signatures before deployment or inference. Why it matters: Prevents tampering and ensures you’re running the exact model you intended to deploy.
3. Mandate AI-Specific Red Teaming
What to do: Go beyond accuracy testing to include adversarial testing, ethical evaluation, and prompt injection defenses in your evaluation pipelines. Why it matters: Traditional testing methods miss AI-specific vulnerabilities like jailbreaking attempts and adversarial inputs.
4. Deploy Behavioral Monitoring for AI APIs
What to do: Profile normal model behavior during development and monitor for anomalous API calls or file access during production inference. Why it matters: Compromised models often exhibit subtle behavioral changes that can be detected through runtime monitoring.
5. Establish Third-Party AI Risk Management
What to do: Demand Software Bills of Materials (SBOMs) for all AI models, libraries, and MLOps tools. Assess third-party AI services with the same scrutiny as critical infrastructure. Why it matters: Most security breaches originate from compromised third-party components.
6. Harden AI Serving Infrastructure
What to do: Implement defense-in-depth for caching systems, RAG pipelines, and agent orchestration frameworks. Why it matters: Infrastructure-level attacks can compromise even secure models through cache poisoning and cross-tenant leaks.
Executive Action Items
For CISOs: Start by mapping all AI usage across internal and vendor systems. Many organizations discover they have 3-5x more AI deployments than initially estimated.
Create documentation standards including model cards and AI SBOMs. Organizations that cannot document their AI systems cannot secure them effectively.
For Executive Leadership: Establish an AI governance board that includes security, engineering, and legal representatives. AI security cannot be solved by any single department.
Build internal literacy around the unique risks and responsibilities of AI supply chains.
Resources and Community
The Coalition for Secure AI is developing open-source tools and frameworks to implement these controls. We’re working with contributors from leading companies to develop open standards, best practices, and technical artifacts that any organization can adopt.
Our complete technical paper provides more detailed information. We invite you to review it, contribute feedback, and engage with our community of security and AI leaders.
You can find the full paper on GitHub, HERE.
The Path Forward
AI security requires immediate action. The organizations that implement these controls now will have significant competitive advantages as AI adoption accelerates and regulatory requirements emerge.
Start with data provenance and model signing—these foundational controls enable all other security measures. Then expand to behavioral monitoring and infrastructure hardening based on your specific risk profile.
The window for proactive AI security is closing. Organizations that wait for security incidents to drive their AI security strategy will find themselves defending against attacks they cannot detect with tools that were never designed for AI systems.
Ready to secure your AI supply chain? Read the full outlook on AI supply chains on GitHub and join the Coalition for Secure AI community at coalitionforsecureai.org.