In healthcare, algorithm-based applications offer broad potential. They can significantly reduce the effort required for documentation, support teams in operating rooms and enable self-learning devices or personalized implants and prostheses. For that reason, more manufacturers, software providers, IT service companies and laboratories are investing in AI-driven products and services. This is indicated by an analysis conducted by the management consultancy McKinsey. At the same time, lawmakers are tightening the rules: globally, policymakers are increasing their oversight of AI through a growing mix of legislation, regulation, guidance and governance frameworks. Companies therefore benefit from structured management through an AI management system (AIMS) that keeps their technologies compliant and ahead of the curve.
Closing the gap
Established standards such as ISO 13485 (quality management) and ISO 14971 (risk management) remain important in medical devices, yet they do not answer every question raised by the use of AI. In part, this is because these standards predate many current AI use cases.
To help close this gap, ISO/IEC 42001 was created. The standard, published at the end of 2023, does not solve every regulatory issue, but it provides a framework for many of the most relevant ones. Companies that already work with a quality management system based on ISO 13485 start from a solid foundation. Although the structures are not identical, both systems rely on the same basic logic: plan processes, implement them, review the results and improve them continuously through the Plan-Do-Check-Act cycle.
First comes risk analysis
Companies cannot manage AI responsibly without first understanding the intended use, context and risk profile of each application. A structured assessment at the outset helps organizations identify relevant requirements, define responsibilities and establish appropriate controls for development and deployment.
ISO/IEC 42001 provides a framework for this. It addresses AI governance across the full lifecycle of an AI system and supports organizations in embedding clear processes for oversight, documentation, review and improvement.
The central idea behind the standard is not a one-off check at launch, but ongoing governance. This includes continuous monitoring, regular quality checks and structured improvement measures. In this way, organizations can help ensure that AI-based solutions remain reliable, transparent and fit for purpose throughout operation.
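The ongoing-governance idea described above can be sketched in a few lines of code. The following is a minimal illustration, not a requirement of ISO/IEC 42001: the metric (accuracy), the thresholds and the function name are all illustrative assumptions. In practice, the monitored metrics and limits would come from the organization's own risk analysis.

```python
def check_model_health(accuracy_history, min_accuracy=0.90, max_drop=0.05):
    """Illustrative continuous-monitoring check (thresholds are assumptions).

    accuracy_history: chronological list of measured accuracy values,
    where the first entry serves as the baseline from validation.
    Returns a list of issues; an empty list means no flag was raised.
    """
    if not accuracy_history:
        return ["no monitoring data recorded"]
    baseline = accuracy_history[0]
    latest = accuracy_history[-1]
    issues = []
    # Absolute floor: the system must stay above a defined quality level.
    if latest < min_accuracy:
        issues.append(f"accuracy {latest:.2f} below floor {min_accuracy:.2f}")
    # Relative drift: a sharp drop versus the baseline warrants review,
    # e.g. after an update or a shift in the input population.
    if baseline - latest > max_drop:
        issues.append(f"accuracy dropped {baseline - latest:.2f} vs baseline")
    return issues
```

A check like this would run regularly after deployment, with any flagged issue feeding back into the documented review-and-improvement loop.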
Where errors might go unnoticed
The range of AI use cases in medical technology is expanding quickly. As a result, companies often ask whether every AI system requires formal management. Treating an application as harmless without a structured review can create real problems. One reason is bias: if the training data are distorted, incomplete or unbalanced, the output may also be flawed.
The consequences of this can be both tangible and serious. A voice control system may fail when users speak in regional dialects. Skin cancer software trained mainly on images of light skin may produce unreliable results for people with darker skin. Updates create another source of risk. Whether a change is installed manually or automatically, the system must still perform as reliably after the update as it did before.
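A simple way to surface the kind of reliability gap described above is to evaluate a system's performance per subgroup rather than only in aggregate. The sketch below is an illustrative assumption, not a prescribed method: the data layout, the 10-percentage-point gap threshold and the function names are invented for the example.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (group_label, prediction_was_correct) pairs.
    Returns the accuracy observed for each subgroup."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    return {g: hits[g] / totals[g] for g in totals}

def flag_disparities(accuracy_by_group, max_gap=0.10):
    """Flag subgroups whose accuracy trails the best-performing
    subgroup by more than max_gap (an illustrative threshold)."""
    if not accuracy_by_group:
        return []
    best = max(accuracy_by_group.values())
    return [g for g, acc in accuracy_by_group.items() if best - acc > max_gap]
```

In the skin-cancer example, the groups might be skin-tone categories: an overall accuracy figure can look acceptable while one subgroup performs far worse, which is exactly what a per-group breakdown makes visible.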
The added value of the AIMS
An AIMS offers concrete advantages in this environment. Medical technology increasingly relies on software, either as a stand-alone product or as part of a larger system. ISO/IEC 42001 helps companies structure development work more clearly and define responsibilities more precisely. The standard also supports monitoring and update management. That matters because each new model version can introduce new risks. An AIMS helps decision-makers identify these risks, assign accountability and maintain complete documentation. For that reason, AI governance should begin during product development rather than only in post-market preparation.
Relevant beyond classic AI products
The requirements of ISO/IEC 42001 may also be relevant for company processes without embedded AI products of their own, for example the automated assessment of job applications, the monitoring of suppliers or digital document management. An AIMS can be used to identify weak points in the supply chain and increase efficiency in relation to documentation requirements and data protection.
Existing products already on the market are another group that benefits from an AIMS. When companies replace, add or update software modules, they may also need to reassess the product to ensure compliance with relevant legislation. An AIMS not only supports this process but also aids documentation. As with newly developed systems, the result is greater transparency and stronger regulatory alignment.
High-tech systems used in clinical decision-making provide another example. Here, ISO/IEC 42001 helps organizations document changes, safeguard human oversight and identify bias-related risks.
Certification in practice
The Swiss company Unique shows how certification can proceed in practice. The process started with a gap analysis carried out with the support of an external partner. This review examined the company’s AI policy, product lifecycle management and impact assessment. The identified compliance gaps then served as the basis for new or improved processes designed to meet the standard’s requirements.
This was followed by a two-stage audit procedure. In the first step, experts assessed whether the AIMS, including its documentation, responsibilities and scope, was ready for certification. In the second step, the focus shifted to the practical implementation of the requirements within the relevant processes.
The Swiss fintech specialist became the first company in Europe to have its AIMS certified in accordance with ISO/IEC 42001. The certificate is valid for three years, provided that the effectiveness of the AIMS is monitored regularly.
From compliance to competitive advantage
As regulatory expectations continue to evolve, AI governance is becoming a strategic differentiator. Companies that implement structured management systems early can reduce compliance risks, accelerate market access, and build trust with regulators and customers.