AI Conformity Assessment
Do you think AI systems should be subject to rigorous conformity assessment? The EU AI Act makes it a cornerstone of its regulatory framework, particularly for applications deemed to carry significant risks. This article summarises key insights from the AI Standards Hub webinar last week - it was a very informative session!
What is conformity assessment?
Conformity assessment is about checking (and proving) that something measures up to a set of predefined standards. It is not a new concept: it has been widely used across industries to ensure quality, safety, and compliance. But with the rise of AI, especially in high-stakes applications like medical devices, the need for robust conformity assessment has become even more critical. Conformity assessment bodies now play a key role in this process, as they have to adapt to the unique challenges posed by AI systems.
How is it usually done?
Conformity assessment involves a range of activities designed to verify that a product, service, process, system, or organisation meets specified requirements, standards, or regulations. Here’s an elaboration on each of the common methods:
- Testing: Testing involves subjecting a product, system, or component to specific procedures to determine its characteristics or performance against predefined criteria. For AI systems, testing could involve performance testing, bias and fairness testing, or security and safety testing. The aim is to generate objective data about the attributes and behaviour of the AI system, proving that it meets technical specifications and functional requirements (a minimal fairness-testing sketch follows this list).
- Inspection: Inspection is the examination of a product, process, service, or installation to determine its conformity with specific requirements. Unlike testing, which often involves active manipulation or input, inspection is typically a more visual, systematic check of features, documentation, or procedures. For AI products and processes, inspection might involve documentation review or process and lifecycle checks.
- Certification: It is a third-party attestation related to products, processes, systems, or persons. An independent certification body, after conducting an assessment (which may include testing and inspection), issues a certificate stating that the object of conformity assessment meets specified requirements.
- Supplier’s Declaration of Conformity (SDoC): This is a first-party conformity assessment where the supplier (manufacturer, importer, or responsible party) themselves declares that their product, process, or service conforms to specified requirements.
- Management system assessment (like ISO 42001 for AI management systems): This type of assessment focuses on an organisation’s management system rather than a specific product. It verifies that an organisation has put in place a systematic approach (a “management system”) to achieve certain objectives, such as quality, environmental performance, or, in our case, responsible AI development and deployment. For AI, this would primarily involve auditing against ISO/IEC 42001:2023 - Artificial Intelligence Management System (AIMS): an independent auditor assesses the organisation’s processes across the AI system lifecycle, including data governance, risk management, ethical considerations, transparency, human oversight, and accountability, and checks for mechanisms that ensure continuous improvement of the AIMS.
- Accreditation: Accreditation is the formal recognition by an independent, authoritative body (an “accreditation body”) that a conformity assessment body (like a testing laboratory, inspection body, or certification body) is competent, impartial, and operates consistently according to international standards (e.g., ISO/IEC 17025 for testing labs, ISO/IEC 17020 for inspection bodies, ISO/IEC 17021 for certification bodies). An accreditation body (e.g., ENAC in Spain, UKAS in the UK) thoroughly assesses a conformity assessment body’s technical competence, management systems, impartiality, and consistent operation. Accreditation provides an additional layer of confidence and credibility to the entire conformity assessment system. When a product is certified by an accredited certification body, or tested by an accredited laboratory, it signifies that the body performing the assessment itself has been rigorously evaluated and deemed competent.
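To make the testing item concrete, here is a minimal sketch of what a single bias and fairness check might look like in Python. It assumes a binary classifier and a binary protected attribute; the column names, toy data, and the 0.40 threshold are illustrative assumptions, not values taken from any standard or from the webinar.

```python
# A minimal sketch of one kind of bias/fairness test: demographic parity.
# The data, column names, and threshold below are illustrative only.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Absolute difference in positive-prediction rates between groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return abs(rates.max() - rates.min())

# Toy predictions: in practice these would come from the AI system under test.
predictions = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "predicted_positive": [1, 1, 0, 1, 0, 0],
})

gap = demographic_parity_gap(predictions, "group", "predicted_positive")
print(f"Demographic parity gap: {gap:.2f}")
assert gap <= 0.40, "Fairness criterion violated: gap exceeds the agreed threshold"
```

A real conformity test suite would run many such checks (accuracy, robustness, security, and several fairness metrics) against thresholds agreed in the applicable standard or scheme.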
Why is AI conformity assessment important?
The EU AI Act mandates conformity assessment for high-risk AI systems before they are placed on the market or put into service. This process ensures these systems meet the Act’s stringent requirements.
For most high-risk AI systems (listed in Annex III), providers can generally perform an internal conformity assessment (self-assessment), assuming they apply relevant harmonised standards. However, for certain critical high-risk AI systems (e.g., biometric systems used in law enforcement, or where no harmonised standards are used), a third-party conformity assessment by a designated “Notified Body” is mandatory.
The conformity assessment process under the AI Act involves demonstrating compliance with requirements such as:
- Risk management system
- Data governance (data quality, bias mitigation)
- Technical documentation
- Record-keeping (logging of events; a minimal logging sketch follows below)
- Transparency and provision of information to deployers
- Human oversight
- Accuracy, robustness, and cybersecurity
Upon successful completion of the conformity assessment, the provider issues an EU Declaration of Conformity and, for high-risk systems, is required to affix the CE marking.
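As an illustration of the record-keeping requirement, here is a minimal sketch of structured event logging using only Python’s standard library. The event fields and file format are assumptions for the example; the AI Act does not prescribe a specific log schema.

```python
# A minimal sketch of append-only event logging for traceability.
# Field names and the JSON-lines format are illustrative assumptions.
import json, hashlib, datetime

def log_prediction_event(log_path: str, model_version: str, inputs: dict, output) -> None:
    """Append one timestamped, input-hashed prediction record to a JSON-lines log."""
    event = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store raw inputs, keeping the log free of personal data.
        "input_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(event) + "\n")

log_prediction_event("audit_log.jsonl", "model-1.4.2", {"age": 42, "amount": 310.0}, "approved")
```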
Of course, conformity assessment matters for more than regulatory compliance - it ensures safety, mitigates risks, builds trust, fosters greater accountability, promotes a culture of continuous improvement, and even facilitates international trade by creating globally recognised standards and conformity assessment schemes.
What is the role of the World Trade Organisation Committee on Technical Barriers to Trade (TBT)?
The WTO Committee on Technical Barriers to Trade (TBT) keeps international trade flowing while protecting legitimate objectives such as human health or safety, animal or plant life or health, the environment, and national security. It does this by ensuring that technical regulations, standards, and conformity assessment procedures do not create unnecessary obstacles to trade. Topics the Committee works on include regulatory impact assessments, medical device regulation, and critical emerging technologies such as biofuels, clean energy generation and storage, semiconductors - and, of course, AI. In 2024, the TBT Committee adopted guidelines (G/TBT/54) to support regulators in the choice and design of conformity assessment procedures, so that measures to verify compliance with technical regulations and standards do not create unnecessary obstacles to international trade.
What about national regulators?
National regulators in WTO member countries develop and enact regulations, design and implement conformity assessment procedures, and carry out market surveillance. In Luxembourg, for example, we have the Luxembourg Institute of Standardisation, Accreditation, Safety and Quality of Products and Services (ILNAS), but also the Luxembourg Agency for Medicines and Health Products (ALMPS), which regulates medical devices, and the National Commission for Data Protection (CNPD), which oversees AI systems and AI Act compliance. Australia, in turn, has a shared responsibility model for product safety regulation, involving federal, state, and territory governments: the Australian Competition and Consumer Commission (ACCC) oversees general consumer product safety, the Department of Infrastructure, Transport, Regional Development, Communications and the Arts is responsible for vehicle safety, and the Australian Communications and Media Authority (ACMA) regulates communications and media services and equipment. Such regulators use the WTO TBT guidelines to ensure their regulations and conformity assessment procedures are effective and facilitate trade while protecting public interests.
What is the ISO Conformity Assessment (CASCO) toolbox?
It is a comprehensive collection of international standards and guides developed by the ISO Committee on Conformity Assessment (CASCO). It can be used by assessment and accreditation bodies, as well as by businesses, to understand how to comply with requirements, select competent conformity assessment providers, and even implement their own internal assessment processes.
What are conformity assessment schemes?
The ISO Committee on Conformity Assessment has existed for a long time, but it now has a new task: developing conformity assessment schemes - sets of requirements and procedures that help conformity assessment bodies certify a system or a product against a specific standard. We see such schemes applied in the wild every day: just look at the packaging of a household appliance or an electrical device you own - there will be a logo of some certification scheme, which means it has passed tests to ensure it is safe and will perform as expected. In other words, the committee will develop such certification schemes to check AI products and systems, allowing their providers to label them as “certified” (that is, safe and suitable for their intended use).
Developing such conformity assessment schemes for AI is challenging for a number of reasons:
- Complex composition. AI systems are made of data, software, models, and infrastructure - components that are not as tangible and predictable in their behaviour as, say, a kettle. Risks may stem from the training data, the algorithm’s design, and even the user interface, so each component calls for a risk-based approach, and a more integrated, holistic risk assessment is needed that goes beyond component-level evaluation.
- Need for real-time evaluation. Many AI systems operate in real time (e.g. autonomous vehicles, fraud detection). As the environment constantly changes, a standard snapshot assessment will not do: we need continuous monitoring, efficient logging, and adaptive conformity assessment methods (a minimal monitoring sketch follows this list).
- Version control and updates. AI models are frequently updated, retrained, and fine-tuned, and such updates can unpredictably change the model’s behaviour. We need a robust change management system here!
- Can a statement of conformity guarantee the system’s safety? Again, AI’s dynamic nature makes a definitive statement challenging and calls for ongoing post-market monitoring of the system’s performance, data quality, and adherence to ethical principles.
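Here is a minimal sketch of two of the points above: detecting model updates via an artefact fingerprint, and flagging drift with a rolling metric check. The window size, baseline, tolerance, and toy accuracy stream are illustrative assumptions, not prescribed by any scheme.

```python
# A minimal sketch: (1) pin a model artefact to a hash so any update is
# detectable, and (2) flag drift when a rolling metric strays from a baseline.
import hashlib
from collections import deque

def artefact_hash(path: str) -> str:
    """Fingerprint of the deployed model file; a change signals a new version
    that may require re-assessment. Compare against the certified version's hash."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

class DriftMonitor:
    """Flags drift when the rolling mean of a metric strays from a baseline."""
    def __init__(self, baseline: float, tolerance: float, window: int = 100):
        self.baseline, self.tolerance = baseline, tolerance
        self.values = deque(maxlen=window)

    def observe(self, value: float) -> bool:
        self.values.append(value)
        rolling_mean = sum(self.values) / len(self.values)
        return abs(rolling_mean - self.baseline) > self.tolerance

monitor = DriftMonitor(baseline=0.92, tolerance=0.05)
for accuracy in [0.93, 0.85, 0.80, 0.78]:  # toy stream of per-batch accuracies
    if monitor.observe(accuracy):
        print("Drift detected: trigger re-assessment")
```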
All this necessitates the development of new, more dynamic and adaptive conformity assessment methods and frameworks, which is what the working group is currently doing. The aim is to publish the schemes in May 2027, so there is plenty of time for experts to get involved in drafting.
Will there be more support for certification bodies and auditors?
Yes! A new document, currently under publication, will soon provide more support to certification bodies and auditors: ISO/IEC 42006 - “IT — AI — Requirements for bodies providing audit and certification of AI management systems”. The new standard will ensure that certification bodies operate with the competence and rigour necessary to assess organisations developing, deploying, or offering AI systems. AI systems present unique challenges in areas like ethics, data quality, risk, and transparency, and to certify that an organisation manages these challenges responsibly, auditors themselves need specialised knowledge and clear rules for conducting assessments.
Did you know that there is a whole ecosystem of ISO standards related to AI?
- ISO/IEC 38507:2022 applies to all organisations that develop or use AI-based systems and solutions, regardless of industry, size, or technical capability. It checks whether the organisation has proper AI governance and AI risk management systems in place, focusing specifically on the governance layer and ensuring that AI-related decision-making aligns with the organisation’s objectives as well as regulations, ethical expectations, and societal values.
- ISO/IEC 23894:2023 (IT - AI - Guidance on risk management) provides guidance on how organisations that develop, produce, deploy, or use products, systems, and services that use AI can manage risks specifically related to AI.
- ISO/IEC 42005:2025 (AI system impact assessment) provides guidance for organisations conducting AI system impact assessments. It can be used by any organisation developing, providing, or using AI systems that wants to assess and manage the potential impacts of its AI systems on people and society.
- ISO/IEC 42001:2023 specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organisations. It is designed for entities providing or using AI-based products or services, ensuring responsible development and use of AI systems.
Conformity assessment is not merely a bureaucratic hurdle but a fundamental pillar for the responsible and trustworthy development and deployment of AI, especially for high-stakes applications like AI-enabled medical devices. It provides the essential mechanisms to verify that AI systems meet predefined standards for safety, performance, ethics, and regulatory compliance.
How to get involved in the development of AI-related standards?
Join your national standards body: https://www.iso.org/about/members
Image: Kittens and Cats; a Book of Tales, by Eulalie Osgood Grover; 1911; Boston, Houghton Mifflin. — Source.