Documentation and transparency: NIST AI RMF Govern 1.4
The NIST AI Risk Management Framework (AI RMF) is a voluntary framework applicable to any organisation involved in the design, development, deployment, or use of artificial intelligence systems. It is not a mandatory compliance requirement in the way that some regulations are (for example, the EU AI Act); rather, it offers useful guidance to help your organisation realise the benefits of AI responsibly. This post continues a series discussing the NIST AI RMF item by item.

What is Govern 1.4?

Govern 1.4 calls for establishing the risk management process through transparent policies, procedures, and other controls, based on your organisation's risk priorities.
Policies and procedures relating to documentation and transparency bring many benefits:

- Clear documentation outlines the roles of individuals and teams involved in AI development, deployment, and use, ensuring everyone understands their responsibilities.
- Effective communication fosters collaboration and prevents misunderstandings, leading to smoother workflows.
- Clear roles and responsibilities enhance accountability: decisions and actions can be traced, making it easier to identify and address issues, and individuals can be held accountable for their actions.
- Documentation can involve stakeholders throughout the development process, fostering a sense of ownership; stakeholders who are involved and understand the product are more likely to support its adoption and use.
- Clear documentation facilitates the replication of AI systems, ensuring consistency and reliability.
- Well-documented systems are often more robust, as they are easier to maintain and update.
- Explainability of ML models is improved: it is easier to understand how they make decisions.
As you can see, there are two main aspects here: transparency and accountability, and they are very closely connected. To allow for accountability, your organisation needs to be transparent and clarify the roles and responsibilities of the people involved in AI system design, development, deployment, assessment, and monitoring to relevant stakeholders. For example, simply including AI actors' contact information in product documentation makes it clear whom stakeholders can reach with questions or concerns, and who is answerable for each part of the system.
Then there is documentation related to the AI models themselves. Here you can document the AI system's business justification, scope and usage, potential risks, training data, algorithmic methodology, testing and validation results, deployment and monitoring plans, and plans for public disclosure of AI risk management materials (e.g. audit results, model documentation, etc.).

Model documentation is a particularly interesting aspect, and I will leave links to some great resources in the comments. For example, you need to document the datasets used for model training (curation rationale, data sources, and much more). A Guide for Writing Data Statements for Natural Language Processing by Emily Bender and colleagues helps us report such aspects as demographic information related to a linguistic data source, speaker and annotator demographics, and language varieties (e.g., rather than saying "American English", it is recommended to provide more detail - say, "Standardised American English" or "Northeastern American English"), among many others. Model Cards for Model Reporting assists in creating model cards that include how a model was built, what assumptions were made during its development, what type of model behavior different cultural, demographic, or phenotypic population groups may experience, and an evaluation of how well the model performs with respect to those groups.
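To make this concrete, a model card can be represented as structured data so it can be versioned and published alongside the model. The sketch below is a minimal, hypothetical template: the field names are illustrative (loosely following the categories from the Model Cards paper, not any official schema), and the example values are invented.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class GroupEvaluation:
    """Performance measured for one population group (per Model Cards guidance)."""
    group: str    # cultural, demographic, or phenotypic group evaluated
    metric: str   # e.g. "false positive rate"
    value: float

@dataclass
class ModelCard:
    """Hypothetical minimal model card; field names are illustrative only."""
    model_name: str
    version: str
    owners: list          # AI actors' contact information, for accountability
    intended_use: str
    out_of_scope_uses: str
    training_data: str    # curation rationale, data sources, language variety, etc.
    assumptions: str      # assumptions made during development
    evaluations: list = field(default_factory=list)

# Invented example values for illustration
card = ModelCard(
    model_name="comment-toxicity-classifier",
    version="1.0",
    owners=["ml-team@example.com"],
    intended_use="Flag potentially abusive comments for human review.",
    out_of_scope_uses="Fully automated moderation without human oversight.",
    training_data="Forum comments, 2019-2022; curation rationale documented "
                  "in the accompanying data statement.",
    assumptions="Input is English; Standardised American English dominates "
                "the training data.",
    evaluations=[
        GroupEvaluation(
            group="Northeastern American English speakers",
            metric="false positive rate",
            value=0.04,
        ),
    ],
)

# asdict() converts nested dataclasses too, so the card serialises cleanly
print(json.dumps(asdict(card), indent=2))
```

Publishing such a record with every model release gives reviewers the build assumptions and per-group evaluation results in one machine-readable place.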
Documentation plays a critical role in ensuring that AI systems are developed and deployed in a responsible manner. By providing a clear record of the system’s development, testing, and deployment, documentation can help to build trust and confidence in AI technologies.