An artificial intelligence impact assessment (AIIA) is an evaluation of an AI system’s positive and negative impacts on individuals, groups of individuals and society as a whole. An AIIA is comparable to a data protection impact assessment (DPIA) conducted to comply with the General Data Protection Regulation (GDPR), although it is much longer and more detailed. Completing an AIIA is a mandatory requirement for conformance to ISO 42001, and is the most substantial piece of work you will need to undertake when developing an AI management system (AIMS).
The required content for an AIIA is set out in ISO/IEC 42005:2025 Information technology — Artificial intelligence (AI) — AI system impact assessment (ISO 42005), which also includes an example template for completing an AIIA. These assessments must be completed prior to an AI technology’s inception or use, and must be regularly updated and maintained while the system is in use.
AIIAs contain seven sections, with each section outlining specific information you will need to include:
- Section A: System information, such as a system description, its functions and capabilities, purpose, intended uses and unintended uses.
- Section B: Data information and quality, including details of all the datasets employed in the system. This is one of the largest sections of the AIIA, as you will need to assess and provide information on each dataset in terms of 20 different characteristics defined in ISO 42005.
- Section C: Algorithms and models information, such as the origin of the algorithms (whether they have been developed by your organisation, by a third party specifically for your organisation, or whether they are off-the-shelf) and the approach taken in their development.
- Section D: Deployment environment, which covers where the model is going to be used, including geographical areas, any language considerations, any deployment environment complexity or constraints, and how the model will be deployed.
- Section E: Relevant interested parties, covering both directly affected parties and other interested parties (aka stakeholders), as well as the roles those parties play. Interested parties include internal and external individuals, groups of individuals and entities that have an interest in the AI system.
- Section F: Actual and potential benefits and harms to each of the interested parties identified in Section E, for each of the AI perspectives defined in ISO 42005. This is another very substantial section of the AIIA.
- Section G: AI system failures and reasonably foreseeable misuse. The ‘failures’ aspect can be closely tied to your disaster recovery (DR) and business continuity (BC) plans, while the ‘reasonably foreseeable misuse’ element requires you to consider the impact of an individual either accidentally or intentionally misusing the system.
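
If you want to track progress against these seven sections internally, for example as part of your AIMS records, a simple data structure can help you see at a glance which parts of an assessment are still outstanding. The sketch below is purely illustrative Python; the names AIIASection, AIIARecord and outstanding are hypothetical, and this is not the template defined in ISO 42005, which remains the authoritative format for the assessment itself.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch only: ISO 42005 defines the authoritative AIIA
# template and content; this simply tracks completion of its sections.

@dataclass
class AIIASection:
    letter: str            # "A" to "G"
    title: str
    complete: bool = False
    notes: str = ""

@dataclass
class AIIARecord:
    system_name: str
    last_reviewed: date
    sections: list[AIIASection] = field(default_factory=lambda: [
        AIIASection("A", "System information"),
        AIIASection("B", "Data information and quality"),
        AIIASection("C", "Algorithms and models information"),
        AIIASection("D", "Deployment environment"),
        AIIASection("E", "Relevant interested parties"),
        AIIASection("F", "Actual and potential benefits and harms"),
        AIIASection("G", "AI system failures and reasonably foreseeable misuse"),
    ])

    def outstanding(self) -> list[str]:
        """Return the titles of sections still to be completed."""
        return [s.title for s in self.sections if not s.complete]


# Example: flag the sections still outstanding ahead of a scheduled review.
record = AIIARecord(system_name="Customer triage chatbot",
                    last_reviewed=date(2025, 1, 15))
record.sections[0].complete = True  # Section A drafted
print(record.outstanding())
```

A structure like this also makes it straightforward to evidence, during an ISO 42001 audit, that each AIIA has been reviewed and kept up to date while the system remains in use.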
