Know your legal requirements
The NIST AI Risk Management Framework (AI RMF) is a voluntary framework applicable to any organisation involved in the design, development, deployment, or use of artificial intelligence systems. Unlike regulations such as the EU AI Act, it is not a mandatory compliance requirement. It does, however, offer very useful guidance - think of it as a guide to help your organisation realise the benefits of AI responsibly.
This post opens a series of publications discussing the NIST AI RMF, item by item.
Govern 1.1
Govern 1.1 is about legal and regulatory requirements: they need to be understood, managed, and documented. But what exactly does that involve?
- Identify and understand local and international laws and regulations related to AI development, deployment, and use
Define and document the minimum requirements each applicable law and regulation imposes: GDPR (EU), non-discrimination laws (your AI systems may be making decisions about individuals!), IP laws, cybersecurity rules, and industry- and application-specific regulations (e.g. HIPAA and the FDA impose requirements on AI systems used in healthcare - protecting patient health information and ensuring the safety and efficacy of AI-powered medical devices, respectively).
- Monitor all changes and updates - you want to be on top of the evolving regulatory landscape
  - Subscribe to blogs or newsletters publishing regular updates
  - Set up Google (or other) keyword alerts to receive notifications about new publications (a minimal monitoring sketch follows this list)
  - Join industry associations to connect with industry peers
  - Attend conferences and webinars
  - Use social media! Follow key influencers: regulation experts, industry leaders, and government agencies
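To illustrate the keyword-alert idea, here is a minimal in-house sketch that polls a couple of RSS feeds and flags entries mentioning regulatory keywords. It assumes the feedparser library; the feed URLs and keyword list are hypothetical placeholders - substitute the regulators and blogs that matter to your organisation.

```python
# pip install feedparser
import feedparser

# Hypothetical feed URLs - replace with the sources you actually follow.
FEEDS = [
    "https://www.example-regulator.gov/news/rss",
    "https://www.example-ai-policy-blog.com/feed",
]

# Keywords worth an alert; tune to your jurisdictions and industry.
KEYWORDS = {"ai act", "gdpr", "hipaa", "fda", "algorithmic", "artificial intelligence"}

def scan_feeds():
    """Print any feed entry whose title or summary mentions a tracked keyword."""
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            if any(keyword in text for keyword in KEYWORDS):
                print(f"[ALERT] {entry.get('title')} -> {entry.get('link')}")

if __name__ == "__main__":
    scan_feeds()  # schedule via cron/CI rather than running manually
```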
- Align risk management efforts with applicable legal standards.
Already have a risk management framework in place? Map each identified risk to the legal requirements it touches and identify the gaps - do all your risk management practices adequately address your legal obligations? (A minimal mapping sketch follows.)
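One lightweight way to make this mapping auditable is to keep it as data. The sketch below is a toy illustration under assumed names - the requirement IDs, risks, and controls are all hypothetical, not a definitive register format.

```python
# Hypothetical register: legal requirements, and the risks/controls meant to address them.
REQUIREMENTS = {
    "GDPR-Art22": "Automated decision-making affecting individuals",
    "HIPAA-PHI": "Protection of patient health information",
    "ECOA-Fair": "Non-discrimination in credit decisions",
}

RISK_REGISTER = [
    {"risk": "Biased loan scoring model", "requirements": ["ECOA-Fair"], "controls": ["fairness audit"]},
    {"risk": "Training data contains PHI", "requirements": ["HIPAA-PHI"], "controls": []},
]

def find_gaps():
    """Flag requirements with no mapped risk, and risks with no controls."""
    covered = {req for item in RISK_REGISTER for req in item["requirements"]}
    for req_id, description in REQUIREMENTS.items():
        if req_id not in covered:
            print(f"GAP: no risk mapped to {req_id} ({description})")
    for item in RISK_REGISTER:
        if not item["controls"]:
            print(f"GAP: no controls for risk '{item['risk']}'")

find_gaps()
```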
- Create and maintain policies for training and re-training staff on legal and regulatory aspects impacting AI design, development, deployment, and use
Such training may involve: data privacy laws, AI fairness, IP rights, cyber- and data security, industry-specific regulations, as well as role-based training for data scientists, engineers, and project managers.
Make it relevant and interesting: use real-world case studies and hypothetical scenarios; organise interactive workshops, simulations, and role-playing exercises; invite external experts; and encourage staff to participate in relevant conferences, webinars, and online courses.
- Make sure the AI system has been reviewed for its compliance with applicable laws, regulations, standards, and guidelines
  - Conduct internal audits
  - Engage external auditors or consultants to provide an independent assessment of your AI system’s compliance
  - Use checklists
  - Maintain detailed documentation of audit findings and corrective actions
  - Conduct fairness assessments of the AI system (demographic parity, equal opportunity, disparate impact, counterfactual testing - a minimal sketch of two of these metrics follows this list; stay tuned for more information about these and other metrics)
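To make those fairness metrics concrete, here is a minimal sketch of two of them - demographic parity difference and the disparate impact ratio - computed from model predictions and a protected attribute. The toy data and the 0.8 threshold (the "four-fifths rule" used in US employment contexts) are illustrative assumptions, not a complete fairness assessment.

```python
import numpy as np

def selection_rates(y_pred, groups):
    """Rate of positive predictions per protected group."""
    return {g: y_pred[groups == g].mean() for g in np.unique(groups)}

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rates between any two groups."""
    rates = selection_rates(y_pred, groups)
    return max(rates.values()) - min(rates.values())

def disparate_impact_ratio(y_pred, groups):
    """Lowest group rate divided by the highest; below 0.8 is a common red flag."""
    rates = selection_rates(y_pred, groups)
    return min(rates.values()) / max(rates.values())

# Toy example: binary decisions for two hypothetical groups A and B.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print(f"Demographic parity difference: {demographic_parity_difference(y_pred, groups):.2f}")
print(f"Disparate impact ratio:        {disparate_impact_ratio(y_pred, groups):.2f}")
```

Here group A is selected at a rate of 0.6 and group B at 0.4, so the disparate impact ratio is about 0.67 - below the 0.8 threshold, which would warrant further investigation.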
The next step?
As we have explored, Govern 1.1 of the NIST AI RMF provides a clear directive: know your legal obligations. The next crucial step is to translate that understanding into concrete action. We encourage you to assess your organisation’s current processes and explore our other posts about the framework and how to apply it to ensure the trustworthiness of the AI systems you develop, deploy, or use.