
CoSAI Presentation at the 2025 All Things Open Conference
September 26, 2025

The rapid adoption of artificial intelligence across enterprise environments has created an unprecedented challenge: how do you verify the authenticity and integrity of AI models before deploying them in mission-critical systems? A new collaborative paper from the Coalition for Secure AI (CoSAI) introduces model signing as a foundational solution to secure the AI supply chain.
The Growing Risk in AI Deployment
As AI models become increasingly sophisticated and autonomous, the stakes for security failures have never been higher. Unlike traditional software vulnerabilities that typically affect isolated functions, compromised AI models can impact multiple systems simultaneously, making detection more challenging and consequences more severe.
Three key factors make AI supply chain security uniquely complex:
Expanded Impact Scope: A single compromised model can introduce bias or manipulation across entire workflows, affecting decision-making at enterprise scale rather than isolated processes.
Detection Complexity: AI system failures are often subtle and require continuous monitoring of non-deterministic processes, making traditional security approaches insufficient.
Increased Autonomy: As AI systems gain more agency to perform autonomous actions, the potential damage from a compromised model grows correspondingly.
What Is Model Signing?
Model signing applies cryptographic techniques to establish verifiable trust between AI model producers and consumers. Think of it as a digital certificate of authenticity that travels with each AI model, providing proof of origin and ensuring the model hasn’t been tampered with during distribution.
The approach uses a “claimant model” in which model producers make tamper-evident claims about their artifacts through signed attestations. These attestations can verify three critical aspects:
- Integrity: Confirmation that the model hasn’t been modified since creation
- Provenance: Traceable record of the model’s development lifecycle and dependencies
- Properties: Verifiable claims about model performance, compliance, or other characteristics
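To make the attestation idea concrete, here is a minimal, stdlib-only sketch of an in-toto-style statement that binds producer claims to a model's cryptographic digest. The predicate type URI and claim names are hypothetical illustrations, not part of any CoSAI or OMS specification, and a real system would wrap this statement in a signed envelope (for example, DSSE) rather than emit bare JSON.

```python
import hashlib
import json
import tempfile

def make_attestation(model_path, claims):
    # Hash the model artifact; the digest binds the claims to these exact bytes.
    with open(model_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    # In-toto-style statement: "subject" identifies the artifact, "predicate"
    # carries the producer's claims about it.
    return {
        "subject": [{"name": model_path, "digest": {"sha256": digest}}],
        "predicateType": "https://example.com/model-claims/v1",  # hypothetical URI
        "predicate": claims,
    }

# Demo with a stand-in "model" file.
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as f:
    f.write(b"model weights")
    path = f.name

attestation = make_attestation(path, {"license": "Apache-2.0", "eval_accuracy": 0.91})
print(json.dumps(attestation, indent=2))
```

Because the claims are tied to the artifact's digest, any modification to the model bytes invalidates the attestation on verification.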
Why Enterprise Leaders Should Care
For technology executives, model signing addresses several critical business challenges:
Risk Mitigation: Organizations can implement automated policies to reject models from untrusted sources or those lacking proper attestations, significantly reducing supply chain attack vectors.
Compliance Assurance: In regulated industries, model signing provides auditable proof of model provenance and properties, streamlining compliance processes.
Operational Efficiency: Automated verification reduces the need for manual security reviews, accelerating model deployment while maintaining security standards.
Intellectual Property Protection: Model producers can assert ownership and detect unauthorized distribution, protecting valuable AI investments.
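The automated-policy idea above can be sketched as a simple admission check at deployment time. The trusted-issuer identity and required claim names below are hypothetical placeholders; in practice these would come from your organization's governance policy and the verified signer identity produced by signature verification.

```python
TRUSTED_ISSUERS = {"https://accounts.example.com/ml-release-bot"}  # hypothetical
REQUIRED_CLAIMS = {"license", "training_data_provenance"}          # hypothetical

def admit_model(attestation, issuer):
    """Return (admitted, reason) for a model's verified attestation and signer."""
    # Reject models that ship without any signed attestation.
    if attestation is None or issuer is None:
        return False, "missing attestation or signer identity"
    # Reject signers outside the organization's allow-list.
    if issuer not in TRUSTED_ISSUERS:
        return False, "untrusted issuer: " + issuer
    # Require the claims the governance policy depends on.
    missing = REQUIRED_CLAIMS - attestation.get("predicate", {}).keys()
    if missing:
        return False, "missing required claims: " + ", ".join(sorted(missing))
    return True, "admitted"
```

A gate like this, run in the MLOps pipeline before deployment, is what turns signed attestations into enforceable policy rather than advisory metadata.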
Implementation Maturity Levels
The paper outlines three progressive maturity levels for model signing implementation:
Level 1: Basic Artifact Integrity
Models are treated as cryptographically protected binary artifacts. This foundational level establishes authenticity and integrity guarantees through digital signatures.
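Since models often span many files (weights shards, tokenizer, config), Level 1 integrity is typically achieved by hashing every file into a manifest and signing the manifest. The sketch below shows the hashing and verification half only, using the standard library; the signing step itself (for example, via Sigstore) is omitted, and the directory layout is a stand-in.

```python
import hashlib
import tempfile
from pathlib import Path

def digest_model_dir(model_dir):
    # Per-file SHA-256 manifest; a sorted walk keeps the result deterministic.
    root = Path(model_dir)
    manifest = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            manifest[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return manifest

def verify_model_dir(model_dir, signed_manifest):
    # Any added, removed, or altered file changes the recomputed manifest.
    return digest_model_dir(model_dir) == signed_manifest

# Demo: a stand-in model directory with weights and a config file.
model_dir = tempfile.mkdtemp()
Path(model_dir, "weights.bin").write_bytes(b"\x00" * 64)
Path(model_dir, "config.json").write_text('{"layers": 12}')

manifest = digest_model_dir(model_dir)               # producer signs this manifest
ok_before = verify_model_dir(model_dir, manifest)    # consumer re-checks it

Path(model_dir, "weights.bin").write_bytes(b"\xff" * 64)  # simulated tampering
ok_after = verify_model_dir(model_dir, manifest)
```

Signing the manifest rather than each file keeps signature verification cheap while still guaranteeing that every byte of the model is covered.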
Level 2: Signature Chaining and Lineage
Advanced tracking of relationships between models and their dependencies, creating comprehensive provenance trails for enhanced supply chain visibility.
Level 3: Structured Attestations and Policy Integration
Full integration with AI governance frameworks, enabling automated policy evaluation and comprehensive compliance reporting throughout the model lifecycle.
Real-World Impact and Industry Adoption
Leading technology companies are already implementing model signing solutions. The OpenSSF model-signing library and OMS specification provide practical frameworks for organizations ready to implement these security measures.
The collaborative nature of the CoSAI initiative—bringing together experts from Google, Microsoft, NVIDIA, Cisco, and other industry leaders—demonstrates the critical importance and broad industry support for standardizing AI supply chain security.
Getting Started: Key Considerations
Organizations evaluating model signing should consider:
Current Infrastructure: How does model signing integrate with existing MLOps pipelines and deployment processes?
Vendor Ecosystem: What level of model signing support exists across your AI tool chain and model suppliers?
Regulatory Requirements: How can model signing help demonstrate compliance with emerging AI regulations?
Risk Profile: Which maturity level best balances your security needs with implementation complexity?
The Path Forward
As AI systems become more deeply integrated into critical business processes, the question isn’t whether to implement model signing, but how quickly organizations can adopt these practices. The standardization efforts outlined in the CoSAI paper provide a clear roadmap for implementation while ensuring interoperability across the AI ecosystem.
Model signing represents a fundamental shift toward trustworthy AI deployment, providing the security infrastructure necessary for confident enterprise AI adoption. Organizations that implement these practices now will be better positioned to leverage AI capabilities while maintaining the security and compliance standards their businesses require.
The full CoSAI paper provides detailed technical guidance and implementation recommendations for organizations ready to take the next step in securing their AI supply chains. As the AI landscape continues to evolve, model signing will become as essential to AI deployment as SSL certificates are to web security today.