Know your legal requirements

By SafeAI Pro

The NIST AI Risk Management Framework (AI RMF) is designed as a voluntary framework applicable to any organisation involved in the design, development, deployment, or use of artificial intelligence systems. It is not a mandatory compliance requirement in the way that some regulations are (for example, the EU AI Act). However, it offers valuable guidance: think of it as a roadmap to help your organisation ensure the benefits of AI are realised responsibly.

This post opens a series of publications discussing the NIST AI RMF, item by item.

Govern 1.1

Govern 1.1 is about legal and regulatory requirements: they need to be understood, managed, and documented. But what exactly does that involve?

  1. Identify and understand local and international laws and regulations related to AI development, deployment, and use.

Define and document all the minimum requirements in laws and regulations: GDPR (EU), non-discrimination laws (your AI systems may be making decisions about individuals!), IP laws, cybersecurity, and industry- and application-specific regulations (e.g. HIPAA and the FDA will impose certain requirements on AI systems used in healthcare - related to protecting patient health information and ensuring the safety and efficacy of AI-powered medical devices, respectively).
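As an illustration, such requirements can be kept in a simple machine-readable register. The sketch below is hypothetical - the regulations, jurisdictions, and obligations shown are illustrative placeholders, not a complete or authoritative list:

```python
# Hypothetical register of legal requirements applicable to an AI system.
# Entries are illustrative examples only, not legal advice.
requirements = [
    {"regulation": "GDPR", "jurisdiction": "EU",
     "obligation": "Lawful basis for processing personal data"},
    {"regulation": "HIPAA", "jurisdiction": "US",
     "obligation": "Protect patient health information"},
    {"regulation": "FDA", "jurisdiction": "US",
     "obligation": "Safety and efficacy of AI-powered medical devices"},
]

def applicable(requirements, jurisdictions):
    """Filter the register down to the jurisdictions you operate in."""
    return [r for r in requirements if r["jurisdiction"] in jurisdictions]

eu_scope = applicable(requirements, {"EU"})
print([r["regulation"] for r in eu_scope])  # ['GDPR']
```

Keeping the register as structured data (rather than a prose document) makes it easy to filter by jurisdiction or system, and to diff when regulations change.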

  2. Monitor all changes and updates - you want to stay on top of the evolving regulatory landscape.
  3. Align risk management efforts with applicable legal standards.

Already have a risk management framework in place? Map risks to legal requirements and identify the gaps - do all your risk management practices adequately address legal requirements?

  4. Create and maintain policies for training and re-training staff on legal and regulatory aspects impacting AI design, development, deployment, and use.

Such training may involve: data privacy laws, AI fairness, IP rights, cyber- and data security, industry-specific regulations, as well as role-based training for data scientists, engineers, and project managers.

Make it relevant and interesting: use real-world case studies and hypothetical scenarios, organise interactive workshops, simulations, and role-playing exercises, invite external experts, and encourage staff to participate in relevant conferences, webinars, and online courses.

  5. Make sure the AI system has been reviewed for its compliance with applicable laws, regulations, standards, and guidelines.
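A pre-deployment compliance review can be operationalised as a simple checklist gate. The sketch below is hypothetical - the checklist items are illustrative, and a real review would be defined by your legal and compliance teams:

```python
# Hypothetical pre-deployment compliance checklist for an AI system.
# Items shown are examples; real items come from your requirements register.
checklist = {
    "privacy_review_completed": True,
    "fairness_assessment_completed": True,
    "security_review_completed": False,
    "regulatory_signoff_obtained": False,
}

def ready_to_deploy(checklist):
    """Deployment is blocked until every review item passes."""
    return all(checklist.values())

outstanding = [item for item, done in checklist.items() if not done]
print(ready_to_deploy(checklist))  # False
print(outstanding)
```

Treating the review as a hard gate (rather than an advisory step) makes it auditable: the checklist itself becomes the documentation that the review took place.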

The next step?

As we have explored, Govern 1.1 of the NIST AI RMF provides a clear directive: know your legal obligations. The next crucial step is to translate this understanding into concrete action. We encourage you to assess your organisation’s current processes and explore our other posts about the framework and how to comply with it to ensure the trustworthiness of the AI systems you develop, deploy, or use.