Mission and Scope
Securing AI and the applications and services built on it is a fragmented endeavor. Developers grapple with a patchwork of guidelines and standards that are often inconsistent and siloed. Assessing and mitigating AI-specific and prevalent risks without clear best practices and standardized approaches is a significant challenge even for the most experienced organizations.
The Coalition for Secure AI (CoSAI) aims to address this challenge by fostering a collaborative ecosystem of diverse stakeholders across companies, academia, and other relevant fields who work together to develop and share holistic approaches, including best practices, tools, and methodologies for secure AI development and deployment.
CoSAI Workstreams
Software Supply Chain Security for AI Systems:
Extend SLSA Provenance to AI models so that a model's security can be assessed by understanding how it was created and handled throughout the software supply chain. This workstream will explore what types of AI-specific information can be captured to protect AI software. For example, who performed the training, and was the training process handled in a secure and auditable way? Has the model been free from tampering since it was created? The workstream will also explore how to use cryptography to prove the security of AI supply chains through popular model hubs.
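As a rough illustration only, the sketch below shows what an AI-extended, SLSA-style provenance record and a basic tamper check might look like. The field names (builder, trainingData, auditLog), the URLs, and the record layout are hypothetical placeholders, not part of any published CoSAI or SLSA specification.

```python
import hashlib

def sha256_file(path: str) -> str:
    """Hex SHA-256 digest of a file, read in chunks to handle large model artifacts."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical AI-extended provenance record (illustrative fields only):
# who trained the model, on which data, and under which auditable run.
provenance = {
    "subject": {"name": "example-model.bin",
                "digest": {"sha256": "<digest recorded when the model was produced>"}},
    "builder": {"id": "https://example.org/training-pipeline"},    # who did the training
    "trainingData": [{"uri": "https://example.org/datasets/v1",
                      "digest": {"sha256": "<dataset digest>"}}],
    "auditLog": "https://example.org/training-runs/1234",          # auditable training record
}

def model_untampered(model_path: str, record: dict) -> bool:
    """Tamper check: the artifact on disk must match the digest in its provenance record."""
    return sha256_file(model_path) == record["subject"]["digest"]["sha256"]
```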
Preparing Defenders for a Changing Cybersecurity Landscape:
Develop a defender’s framework that identifies the investments, mitigation techniques, and best practices needed to counter the offensive cybersecurity capabilities of current and potential AI models. The defender’s framework aims to scale investments and mitigation strategies as pivotal offensive cybersecurity advancements emerge in AI models.
AI Risk Governance:
Develop a risk and controls taxonomy, checklist, and scorecard to guide practitioners in readiness assessments, management, monitoring, and reporting of their AI products.
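As a minimal sketch only, the code below shows one way a checklist and scorecard of this kind could be structured: each control belongs to a taxonomy category, and the scorecard reports per-category readiness. The categories, questions, and scoring rule are illustrative assumptions, not CoSAI deliverables.

```python
from dataclasses import dataclass

@dataclass
class Control:
    """One entry in a hypothetical AI risk-and-controls checklist."""
    category: str     # taxonomy area, e.g. "Model provenance"
    question: str     # readiness question the practitioner answers
    satisfied: bool

# Illustrative checklist; the real categories and controls are for the workstream to define.
checklist = [
    Control("Model provenance", "Is provenance recorded for every deployed model?", True),
    Control("Access control", "Are training pipelines restricted to authorized identities?", True),
    Control("Monitoring", "Are model inputs and outputs logged for abuse detection?", False),
    Control("Incident response", "Is there a playbook for suspected model compromise?", False),
]

def scorecard(controls):
    """Per-category readiness: fraction of controls satisfied in each taxonomy area."""
    by_category = {}
    for c in controls:
        by_category.setdefault(c.category, []).append(c.satisfied)
    return {cat: sum(vals) / len(vals) for cat, vals in by_category.items()}

print(scorecard(checklist))
# e.g. {'Model provenance': 1.0, 'Access control': 1.0, 'Monitoring': 0.0, 'Incident response': 0.0}
```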
Frequently Asked Questions
Key aspects of CoSAI's mission include:
1. Advancing AI security research and developing open-source solutions to address critical challenges in building and deploying secure AI systems.
2. Promoting the development of uniform standards and risk governance methodologies for secure AI development across the industry.
3. Providing accessible resources, guidelines, and open-source tools to enable developers at all levels to create secure AI systems.
4. Fostering a collaborative ecosystem that brings together industry leaders, academics, and experts to address the fragmented landscape of AI security.
5. Focusing on integrating and leveraging AI securely across various phases of development and usage, complementing existing initiatives in the field.
By uniting diverse stakeholders and leveraging collective expertise, CoSAI aims to enhance threat mitigation strategies and drive security advancements that can be shared across the AI ecosystem, ultimately building greater trust in AI systems.
The Technical Steering Committee (TSC) advises the Project Governing Board (PGB) on technical matters and is responsible for the project's overall technical health, direction, releases, and workstreams. TSC members include representatives from sponsoring organizations and appointed seats for academic, non-profit, and government entities.
The PGB defines Workstreams that carry out specific efforts and appoints Maintainers to oversee project components.
1. Software Supply Chain Security for AI Systems: This workstream aims to lower the barriers to AI provenance adoption and risk management. It focuses on extending SSDF and SLSA principles to AI development, addressing challenges in integrating provenance solutions into infrastructure and development practices. Key deliverables include guidance on evaluating model and data provenance, best practices for assessing full AI application provenance, and proposals for additional provenance data to enable high-efficacy model provenance evaluation (a cryptographic verification sketch follows this list).
2. Preparing Defenders for a Changing Cybersecurity Landscape: This effort involves developing a defender's framework to identify necessary investments and mitigation techniques to address the security impacts of AI use in business applications, by attackers, and by defenders. The framework aims to scale investments and mitigation strategies in response to emerging offensive cybersecurity advancements in AI models. Deliverables include education on AI-driven vulnerability discovery and attacks, recommendations for preventing abuse of public LLMs, and guidance on building AI Red Teaming capabilities.
3. AI Risk Governance: This workstream focuses on developing a risk and controls taxonomy, checklist, and scorecard to guide practitioners in readiness assessments, management, monitoring, and reporting of their AI products. Proposed deliverables include responsible scaling policies, executive risk committee stakeholder recommendations, and a launch rubric for evaluating organizational readiness for secure AI integration and use.
These projects are designed to address critical challenges in AI security and provide practical, actionable outcomes for the AI community. CoSAI may expand its focus to include additional workstreams as the initiative progresses.
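To make the supply chain workstream's cryptographic angle concrete, the sketch below verifies an Ed25519 signature over a model artifact's digest using the widely used Python cryptography package. It is an assumption-laden illustration: model hubs and the workstream may settle on different key-distribution schemes, signature formats, and attestation envelopes, and the keypair here is generated locally only so the example is self-contained.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for a model artifact downloaded from a model hub.
model_bytes = b"example model weights"
model_digest = hashlib.sha256(model_bytes).digest()

# Stand-in for the publisher's signing key; in practice the private key stays with
# the publisher and only the public key (or a certificate) is distributed.
publisher_key = Ed25519PrivateKey.generate()
signature = publisher_key.sign(model_digest)          # published alongside the model
publisher_public_key = publisher_key.public_key()     # obtained by the verifier out of band

def verify_model(artifact: bytes, sig: bytes, public_key) -> bool:
    """Return True if the signature over the artifact's SHA-256 digest checks out."""
    try:
        public_key.verify(sig, hashlib.sha256(artifact).digest())
        return True
    except InvalidSignature:
        return False

print(verify_model(model_bytes, signature, publisher_public_key))                # True
print(verify_model(model_bytes + b"tampered", signature, publisher_public_key))  # False
```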
Sponsorship: Organizations can join CoSAI as sponsors, which allows them to have representation on the Project Governing Board (PGB) and Technical Steering Committee (TSC). Interested organizations should contact join@oasis-open.org for more information about sponsorship.
Contributions: CoSAI is an open source project, so anyone can contribute to its workstreams and repositories, regardless of whether they are affiliated with a sponsor organization. Contributors must agree to the Contributor License Agreement (CLA) and make regular contributions, such as writing documentation, reviewing code, responding to issues, and participating in proposal discussions.
Maintainers: Individuals can also become Maintainers, who are selected by the TSC to oversee one or more components of CoSAI repositories. Maintainers review code and pull requests, prepare releases, triage issues, and perform similar tasks. Any person may be appointed as a project Maintainer, whether or not they are affiliated with an OASIS member or CoSAI sponsor.
Participation in Committees: In addition to the TSC, the PGB may form other committees as needed, such as a Marketing Group or Advisory Council. Members of these committees are Contributors to the CoSAI project and must sign the CLA.
To get started, interested individuals and organizations can explore the CoSAI website to learn more about the project, its workstreams, and how to get involved.
CoSAI Website: Visit the official CoSAI website to find the latest news, announcements, and resources related to the project. Read about CoSAI's mission, scope, workstreams, and how to get involved.
Mailing List: Subscribe to the CoSAI mailing list to receive announcements and project updates.
Public Repositories: Many of CoSAI's workstreams and projects will be developed in public repositories on platforms like GitHub. By following these repositories, you can stay up to date with the latest contributions, discussions, and releases.
Events and Webinars: CoSAI may organize events, workshops, or webinars to share updates, best practices, and learnings with the broader community. Keep an eye out for announcements about such events on the CoSAI website or through OASIS Open's communication channels.
OASIS Open Updates: As CoSAI is hosted by OASIS Open, the consortium may share updates about the project through its newsletters, blog posts, or social media channels. Following OASIS Open can help you stay informed about CoSAI and other related open source projects.
Industry News: Since CoSAI involves major industry players and focuses on an important topic (AI security), it is likely to receive coverage in technology news outlets and blogs. Following these sources can provide you with a broader perspective on CoSAI's work and its impact on the AI ecosystem.