Comprehensive Mapping of Mitigation Methods to Address AI Supply Chain Threats
Authors:
Summary
This RFC proposes to include within the scope of this workstream the identification, categorization, and mapping of a comprehensive set of mitigation methods to address current and emerging threats to the AI supply chain.
These mitigation methods will encompass, but not be limited to, general software supply chain security methods, e.g., cryptographic signing of ML artifacts and provenance tracking, as well as machine learning-specific methods such as defending against adversarial attacks, mitigating data poisoning, addressing bias and fairness, and others.
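To make the first category concrete, the sketch below shows digest-based integrity checking for an ML artifact with a minimal provenance record. It is an illustration only: the function names (`record_provenance`, `verify_artifact`) and record fields are hypothetical and do not follow any particular attestation standard (such as in-toto or Sigstore's model-signing format), and a production system would use asymmetric signatures rather than a bare hash.

```python
import hashlib
import json
import time
from pathlib import Path


def record_provenance(artifact_path: str, producer: str) -> dict:
    """Compute a SHA-256 digest of an ML artifact and wrap it in a
    minimal provenance record. Field names are illustrative only."""
    digest = hashlib.sha256(Path(artifact_path).read_bytes()).hexdigest()
    return {
        "artifact": Path(artifact_path).name,
        "sha256": digest,
        "producer": producer,
        "created": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }


def verify_artifact(artifact_path: str, record: dict) -> bool:
    """Re-hash the artifact and compare against the recorded digest,
    detecting any modification made after the record was created."""
    digest = hashlib.sha256(Path(artifact_path).read_bytes()).hexdigest()
    return digest == record["sha256"]
```

A consumer receiving both the model file and the record (e.g., serialized with `json.dumps`) can call `verify_artifact` before loading the model; any tampering with the weights changes the digest and fails the check.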
The goal of this effort is twofold:
- Establish a structural foundation for defining and implementing controls to address specific risks in the AI supply chain.
- Facilitate linking the security claims made by AI producers to specific mitigations, helping stakeholders assess and reason about the risk of AI systems effectively.
We propose that this effort be undertaken in conjunction with the establishment of a threat model (RFC #2), as the two frameworks - a threat model and mitigation methods - will complement each other to form a comprehensive set of recommendations.
Priority
- P1: This is important to include in the next release from this workstream.
Level of Effort
Large: This will take several weeks to document.
Drawbacks
Structuring the mitigation methods before completing the threat modeling might lead to misalignment between the two, causing either gaps in coverage or a focus on less relevant issues. This can be alleviated by:
- Focusing initially on mitigation methods for high-impact well-defined risks.
- Prioritizing mitigation methods according to their effectiveness, applicability, and maturity.
- Incrementally and iteratively refining the mitigation methods and aligning them with the threat modeling as it evolves.
Alternatives
The various controls we wish to put in place are a mandatory part of the white paper. The alternative would be to consider the mitigation methods only after the threat model is established. The impact of this could be a significant delay in addressing the required controls and mapping them to the corresponding identified risks.
Reference Material & Prior Art
Unresolved questions
Mitigation methods for risks to AI systems vary widely across domains and demand diverse professional expertise. Achieving a comprehensive list of risk mitigation methods will require close collaboration among workstream members.