ISO 42001 and AI Perspectives

Neil Jones
Senior Consultant at URM
17 May

Artificial intelligence (AI) is a rapidly evolving field that is becoming increasingly pervasive throughout not only our business landscape, but also our culture and society in general, with various AI legislation and regulations having already been passed globally (and more currently in the pipeline).  As such, the International Organization for Standardization (ISO) has introduced ISO/IEC 42001:2023 Information technology – Artificial intelligence – Management system.  ISO 42001 is a means through which AI producers (developers of AI systems) can provide assurance to AI consumers (users of AI systems) that the AI system they are using is reliable and trustworthy.  The Standard can also be used by AI consumers which make use of AI in the services they offer, to provide assurance of these systems’ reliability to their customers.

In this blog, Neil Jones (Senior Consultant at URM) draws on his early involvement with ISO 42001 to discuss the Standard, its intentions and structure, and the AI perspectives that will need to be considered by organisations that implement it.  This blog is drawn from a webinar on ISO 42001, delivered in 2024 by Neil and Lisa Dargan (Director at URM), in which they provided guidance to organisations that are looking to conform and/or certify to ISO 42001.

The Intention of ISO 42001

ISO 42001 provides extensive guidance on how to govern and effectively manage the production of AI systems, allowing organisations to validate that the AI systems they are developing, using and/or providing are responsible and effective.  It should be noted, however, that the Standard is not intended to be a ‘how to do AI’ guide.  Nor will it necessarily guarantee compliance with mandatory regulations, such as the EU AI Act.  The Standard is also not intended to align with any particular regulation or legislation, so, whilst it may be very helpful in enabling you to achieve compliance and does expect you to maintain awareness of the relevant regulations, it will not replace them.

The Structure of ISO 42001

ISO 42001 is written in the ‘Harmonised Structure’ shared by other management system standards, such as ISO 9001 and ISO 27001.  The main body of the Standard sets out requirements in the familiar Clauses 4-10; however, some unique aspects can be found in all Clauses, with the exception of Clauses 7 and 10.  One of these aspects is the additions to Clause 4 in respect of climate change.  These changes have since been reflected in 31 other management system standards, including ISO 9001 and ISO 27001*.  While these considerations have now been added to other standards, they were included in ISO 42001 from its release, due to the huge amount of computing and storage resources required by AI systems, which can have a significant negative impact on the environment.  The climate change considerations presented in the Standard have been identified as contributing to UN Sustainable Development Goals 5, 7, 8, 9, 10, 12 and 14.

*To learn more about the climate change update, read our blog on ISO and IAF add Climate Change Considerations to 31 Management Systems Standards

Following the mandatory management system clauses, ISO 42001 contains four annexes.  Like Annex A of ISO 27001, ISO 42001’s Annex A defines the reference controls which, unlike the Clauses, are not compulsory.  Meanwhile, Annex B contains the implementation guidance for these controls, in a similar manner to ISO 27002 (the supporting standard to ISO 27001); however, ISO 42001 departs from other standards by including this guidance within the Standard itself rather than introducing a further ‘stand-alone’ standard.  Annex C provides guidance on organisational objectives and risk sources, and Annex D provides guidance on the use of AI management systems (AIMSs) across different domains or sectors.

The Artificial Intelligence Management System (AIMS)

The AIMS takes the same approach as any other management system, such as a quality management system (QMS) as defined by ISO 9001, or an information security management system (ISMS) as defined by ISO 27001.  Clauses 4-10 of ISO 42001 set out the requirements, and provide guidance, for ‘establishing, maintaining and continually improving an artificial intelligence management system (AIMS)’.  An AIMS is strikingly similar to other ISO/IEC management systems, with some variation around: context/objectives of the organisation, policy, roles and responsibilities, planning, risk assessment and risk treatment, AI system impact assessment (by far the most significant new conformance activity the Standard introduces), and performance evaluation and management review.  Due to this similarity, an AIMS lends itself to becoming part of an integrated management system (a combination of the multiple management systems that exist within an organisation into one comprehensive management system which meets the requirements of multiple standards).  For a more in-depth discussion of integrated management systems, read our blog on A Comparison of ISO 9001 and 27001.

AI Perspectives in ISO 42001

As mentioned above, certification to ISO 42001 is essentially a verification of the trustworthiness of an AI system, and this trustworthiness is articulated through distinct qualities, which are termed ‘perspectives’ in ISO 42005 (a related standard in the AI set).  These perspectives are defined in the context of benefits and harms to interested parties and grouped into 8 categories, as follows:

  • Accountability
  • Transparency
  • Fairness and discrimination (bias)
  • Privacy
  • Reliability
  • Safety
  • Explainability
  • Environmental impact.


Accountability

The fairly straightforward concept of accountability (who or what is answerable for actions, decisions, or performance) becomes somewhat difficult to assign in the context of AI.  For example, if an autonomous vehicle drives through a red light with the vehicle owner in the back seat, who should receive a ticket?  The owner, despite them not being in control of the car at the time, the AI producer, the vehicle producer, or the AI itself?  The answer to this question is not well defined in law, but it is what this perspective attempts to determine: who is ultimately accountable for a decision rendered by AI?


Transparency

The transparency of an AI system defines how easy it is for interested parties (e.g., AI consumers) to obtain and, crucially, to understand information about the AI’s activities, decisions and system properties.  Due to the complexity of AI models, you may find it extremely difficult to explain, for example, the logic behind decisions made by an AI model.  The best way to achieve transparency in the production of AI, and to be able to explain an AI system to an interested party, is through extensive documentation of aspects such as its features, performance, limitations, components and the procedures used in the system’s operation, i.e., how the system works.
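As a sketch of the kind of documentation described above, one lightweight approach is to record an AI system’s features, performance and limitations as a structured ‘model card’.  The field names and values below are illustrative assumptions for a hypothetical system, not terms mandated by ISO 42001:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Illustrative record of the system documentation a transparency
    effort might capture: what the model does, how well, and its limits."""
    name: str
    intended_use: str
    features: list            # inputs the model consumes
    performance: dict         # e.g. {"accuracy": 0.91}
    limitations: list         # known constraints on valid use
    operating_procedures: list

# Hypothetical example system - all values invented for illustration
card = ModelCard(
    name="loan-risk-classifier-v2",
    intended_use="Ranking loan applications for human review",
    features=["income", "employment_length", "credit_history_length"],
    performance={"accuracy": 0.91, "f1": 0.88},
    limitations=["Trained on UK applicants only"],
    operating_procedures=["Retrain quarterly", "Human review of all declines"],
)
print(card.name)  # loan-risk-classifier-v2
```

Keeping such records alongside the system makes it far easier to answer an interested party’s questions about how the system works.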

Fairness and Discrimination

This perspective is about ensuring that the AI renders impartial decisions that are free from unjust or prejudicial treatment of different individuals, organisations, groups of individuals or societies, i.e., decisions that are free from bias.  The most widely-recognised issue relating to fairness in the context of AI is bias within an AI system, either in the algorithm itself or, more commonly, within the training data.  There are a number of well-publicised examples of AI failing to demonstrate appropriate fairness due to biases in the data used to train it.  For example, AI has been used to predict recidivism (likelihood of reoffending) among prisoners and individuals on parole, and has demonstrated bias against black people due to the use of historical training data which reflected past prejudices, thereby perpetuating discrimination.

Here, there is an overlap with the General Data Protection Regulation (GDPR) around profiling and bias, which would need to be considered in order to maintain GDPR compliance.
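To make the fairness concern concrete, the sketch below computes one simple, widely used bias indicator: the gap in positive-prediction rates between demographic groups (often called the demographic parity difference).  The predictions and group labels are invented for illustration, and a real fairness assessment would involve far more than a single metric:

```python
def positive_rate(predictions, groups, group):
    """Fraction of positive (e.g. 'will reoffend') predictions
    the model made for members of one demographic group."""
    preds = [p for p, g in zip(predictions, groups) if g == group]
    return sum(preds) / len(preds)

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates between any two groups.
    A value near 0 suggests the model flags groups at similar rates."""
    rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Toy data: 1 = flagged as likely to reoffend, 0 = not flagged
predictions = [1, 0, 1, 1, 0, 0, 1, 0]
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(predictions, groups)
print(gap)  # 0.5: group "a" flagged at 0.75, group "b" at only 0.25
```

A large gap like this would prompt investigation of the training data for the kind of historical prejudice described above.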


Privacy

There is yet another (and perhaps more significant) overlap with the GDPR within the perspective of privacy, which is focused on ensuring the personally identifiable information (PII) used within your AI system is well protected.  AI systems are voracious consumers of data - data is vital to the training, validation, verification and testing of an AI model, and, depending on the AI system, a model will have varying degrees of personal data used within it.  For example, AI used in healthcare or an HR decision support system will necessarily contain a considerable amount of (often sensitive) PII, both within the information used by the system in production and within the data used to train and validate the system.  You will need to ensure individuals have control over the collection and use of their PII, and the assurance that their PII will remain confidential and not be misused.

The controls used to address this (e.g., access controls) will be common with other standards, such as ISO 27001.  You will also be able to leverage your data protection impact assessments (DPIAs), performed for the purpose of GDPR compliance, when considering the privacy of your AI system.


Reliability

As per ISO/IEC 22989 (Information technology – Artificial intelligence – Artificial intelligence concepts and terminology), reliability is the ability of the AI system to perform correctly (i.e., the AI system fulfils stated requirements and demonstrates consistent intended behaviour and results).  To measure and demonstrate the reliability of your AI system, you will need to implement controls around areas such as the monitoring of AI system performance, including metrics such as accuracy, F1 score and the area under the receiver operating characteristic curve (AUC-ROC).  You will also need to ensure that both updates and access to the AI system are well controlled to limit the risk of impact on reliability.
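By way of illustration, two of the metrics mentioned above - accuracy and F1 score - can be computed from a set of binary predictions as follows.  The prediction data is invented, and in practice you would typically use an established metrics library rather than hand-rolled functions:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy monitoring sample: ground truth vs. what the model predicted
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]

print(accuracy(y_true, y_pred))  # 0.75
print(f1_score(y_true, y_pred))  # 0.75
```

Tracking such metrics over time, and alerting when they drift, is one practical way to evidence the monitoring controls the Standard expects.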


Safety

Safety will need to be considered for AI systems that can endanger human life, wellbeing, property or the environment, and most commonly relates to systems that involve mechanical actuation, such as assembly line robots or self-driving vehicles.  However, safety can also be relevant to an AI system that renders decisions which can impact a person’s health or wellbeing.  For example, if an AI system used in healthcare produces incorrect prescriptions or diagnoses, this could have a significant impact on the patient’s health and pose a risk to life.  As such, risks to safety can be present in a much broader range of AI systems than may initially be obvious, and this perspective is focused on how you ensure these risks are minimised.


Explainability

Related to transparency, this perspective explores how a human can understand the decisions an AI system has arrived at.  You will need to consider what supporting information you can provide, at the point of delivering a decision from an AI system, that gives consumers confidence that the AI decision is correct.  This information could be provided through supporting documentation, or as additional information delivered as part of the AI’s output.

You will need to provide sufficient information to allow an individual to understand the decision the system has made; however, the output of deep neural networks, for example, is extremely complex to explain.  As such, the ability to document how a decision has been arrived at is vitally important, particularly when providing explanations of more sophisticated AI systems.

Environmental Impact

As mentioned previously, AI systems have a significant potential to negatively impact the environment and, as such, consideration of your environmental impact is a key perspective of ISO 42001.  Efficient coding practices are as important to ISO 42001 conformance as secure coding practices are to ISO 27001, as efficient coding can reduce processor load and therefore also reduce power consumption.
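As a rough illustration of what ‘efficient coding’ can mean in practice, the two functions below produce identical results, but the second pushes the per-element work into Python built-ins implemented in optimised C, reducing interpreter overhead and, at scale, CPU load and power draw.  The example is illustrative only:

```python
def column_means_loop(rows):
    """Per-element Python loop: simple, but every addition is
    executed by the interpreter, which is comparatively expensive."""
    n_cols = len(rows[0])
    totals = [0.0] * n_cols
    for row in rows:
        for i, value in enumerate(row):
            totals[i] += value
    return [t / len(rows) for t in totals]

def column_means_builtin(rows):
    """Same result, but sum() and zip() do the per-element work in
    optimised C, cutting the interpreter work per data point."""
    return [sum(col) / len(rows) for col in zip(*rows)]

rows = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(column_means_loop(rows))     # [3.0, 4.0]
print(column_means_builtin(rows))  # [3.0, 4.0]
```

On large datasets, the same principle applies with greater force to vectorised numerical libraries and to choices such as model size and batch scheduling.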

Often, the core model of an AI system will operate in a cloud environment, in which case you will need to ensure there are clauses related to offsetting carbon and managing environmental impacts in your contracts with cloud service providers.  If your cloud environment is provided by a major organisation such as Microsoft, Oracle, Amazon etc., you may be limited in your ability to negotiate and dictate what clauses need to be included in your contract.  However, these organisations are generally aware of their environmental impact and their cloud environments are reasonably well managed in this capacity.  

Managing the environmental impact of internally-hosted systems can be more difficult, as your organisation will need to establish its own approach to doing so.  You will also need to consider the impact of any third parties involved in the supply chain, and any individuals or entities that are helping to provide or maintain the systems you are using, e.g., a subcontractor providing you with air conditioning.  Here, you can work to offset your own carbon footprint and include questions about environmental impacts and ISO 42001 conformance on your supplier questionnaires to ensure your suppliers are doing the same.

How URM Can Help

Whilst ISO 42001 is a brand-new AI standard, URM’s extensive experience of assisting organisations to establish and implement a range of different management systems, and to certify them against the relevant management system standards, means we are ideally placed to help your organisation achieve ISO 42001 conformance and/or certification.  Our team of ISO 42001 consultants can provide a range of services to help you develop and establish your AIMS in full conformance with the Standard.  To begin with, we can conduct a gap analysis of your existing approach against the requirements of the Standard to identify the areas where you are currently meeting ISO 42001 requirements and those which require improvement, and we can support any remediation and implementation activities necessary to implement and maintain a fully ISO 42001-conformant AIMS.  This can include helping you to conduct an AI system impact assessment - one of the core activities in any ISO 42001 conformance/certification project.  Following the implementation of the AIMS, URM can conduct internal audits to help you determine whether it is operating as intended and conforming to the Standard, providing you with confidence and security in your AI usage or development.

Neil Jones
Senior Consultant at URM
Neil is a Senior Consultant at URM, with over 20 years of ‘real world’ information security knowledge and experience, having worked in complex telecommunications, (multinational) financial services and professional services environments, with both regional and global responsibilities.
