As organisations race to integrate AI for competitive advantage, we rarely see a lack of activity. Instead, we see a variation in strategy, often resulting in missed opportunities for efficiency.
We tend to see businesses fall into one of three categories.
First, there are those pushing for speed; deploying AI rapidly to gain an edge while viewing governance as a hurdle to be cleared later.
Second, there are those relying on their existing strength. These organisations trust that their mature ISO 27001 certification covers them. While this is a logical starting point, it often underestimates the AI-specific risks, such as algorithmic bias, lack of transparency, and ethical accountability, that fall outside the scope of a traditional ISMS.
Finally, there are those starting from scratch. They are building entirely new governance structures and committees to align with emerging standards like ISO 42001, NIST, or the EU AI Act. While the goal is correct, the method is often inefficient; they are rebuilding management machinery they already own.
There is, however, a more effective approach.
If you already hold an ISO 27001 certification, you do not need to choose between ignoring the risk or rebuilding your governance from the ground up. You simply need to recognise, and extend, the asset you already have.
Here is the reality of bridging the gap between Information Security (ISO 27001) and AI Governance (ISO 42001).
The Good News: The Infrastructure is Already Built
Let’s start with the efficiency case, because it is undeniable.
Implementing any ISO standard involves a significant amount of management infrastructure. Before you even get to the technical controls, you have to build the machinery of the management system: determining the context of the organisation, establishing leadership roles, defining document control frameworks, and setting up internal audit programmes.
Technically, this is known as the Annex SL High-Level Structure.
In a mature ISMS, this framework is already built, documented, and operational.
This gives you a significant head start. By reusing this framework, you strip away weeks of administrative setup and cost. Consider the practicalities:
- Internal Audit: You already have an audit schedule, a methodology, and a reporting line to the board. You don’t need a new “AI Audit Function”; you simply need to extend the scope of your existing audits to include AI controls.
- Competence & Training: You already have a system for onboarding and training staff on data handling. You don’t need a new Learning Management System; you just need to inject modules on AI ethics and bias.
- Vendor Management: You already assess suppliers for security. You simply need to add questions regarding model provenance and data transparency to your existing due diligence questionnaires.
This reuse represents the vast majority of the structural work required for ISO 42001. The machinery is there; it just needs a new set of instructions.
The Reality Check: Security is Not Trustworthiness
However, this is where we need to be realistic. Having the management infrastructure in place does not mean the governance work is finished.
We often see organisations assume that because they are ISO 27001 certified, they are “covered” for AI. This is an assumption that leaves the organisation exposed.
To understand why, we must look at the fundamental difference in the objective of these standards.
ISO 27001 is about Information Security. It focuses on Confidentiality, Integrity, and Availability (CIA). It asks: Is the training data encrypted? Is access to the model restricted? Is the server patched?
ISO 42001 is about System Trustworthiness. It looks beyond security to broader concerns including fairness, transparency, and data quality. It asks: Is the model biased against a protected demographic? Can we explain how it reached its decision? Is the output accurate and reliable?

The “Secure but Untrustworthy” Paradox Consider a “Black Box” AI recruitment tool.
- From an ISO 27001 perspective: If the CV data is encrypted, access is logged, and the system is available 99.9% of the time, it is compliant. It is secure.
- From an ISO 42001 perspective: If that secure model has been trained on historical data from a male-dominated sector, causing it to systematically reject female candidates, it is non-compliant. It is untrustworthy.
Your encryption keys won’t stop a model from hallucinating legal advice, and your firewall won’t prevent algorithmic bias. This is the specific governance gap that your current ISMS cannot fill on its own.
The Opportunity: Focusing on the AI Specifics
The business case for integrating ISO 42001 with your ISO 27001 is not that “it makes AI governance easy”. It is that it allows you to focus your energy where it matters.
Because you don’t have to waste time writing a new Nonconformity Procedure or setting up an Audit Committee, you can focus 100% of your effort on the critical task of governing your AI models.
To bridge the gap, you must implement new, AI-specific measures. Some of the most critical additions include:
- AI Impact Assessments (AIA): Unlike a security risk assessment, which looks at threats to the asset, an AIA looks at threats to the subject. You need a process to evaluate how your AI affects individuals and society before you deploy it.
- Data Quality vs. Data Integrity: In ISO 27001, we care about “Integrity” (ensuring the file hasn’t been tampered with). In ISO 42001, we care about “Quality” (ensuring the data is representative, unbiased, and suitable for training).
- Human Oversight: You need to define formally when a human must be in the loop. An automated decision might be secure, but is it ethically appropriate to let a machine make it without review?
- Adversarial Robustness: Traditional vulnerability management and penetration testing often miss AI-specific attacks like “Prompt Injection” or “Model Poisoning.” These attacks target the logic of the model rather than the security of the code, requiring a new approach to testing and validation.
Don’t Rebuild. Extend.
The most logical path is not a massive new implementation project, but a targeted strategic extension.
This approach validates your existing investment in ISO 27001. It takes the robust machinery you have built for security and points it at a new target: AI Trustworthiness.

Where to Start?
To ensure your controls are effective, you must establish a clear baseline of your actual AI footprint. This means identifying not just the new tools your teams are adopting, but the generative features silently activating within the trusted software you already own.
The most logical starting point is a structured discovery and assessment phase to answer three critical questions:
- The Inventory Question: Where is the AI? You need to look beyond the IT asset register. True visibility means engaging with business units to uncover “shadow” usage and auditing your existing software stack for embedded features.
- The Gap Question: What is the true crossover? You need to map your existing ISO 27001 controls against ISO 42001 to determine what is fully covered, what requires extension, and what is missing entirely.
- The Roadmap Question: How do we bridge the gap? Create a prioritised implementation plan that focuses resources strictly on addressing the specific AI governance risks identified.
Conclusion
AI governance can be a complex challenge, but it shouldn’t be an overwhelming one.
If you have an ISMS in place, your infrastructure is already built. The most effective strategy is to treat ISO 42001 not as a new mountain to climb, but as a necessary and logical extension of the security culture you have already created.
Dionach can help you identify your real AI exposure, extend your ISO 27001 controls to meet ISO 42001, and build a practical roadmap to trustworthy AI. Our experts ensure your governance is efficient, compliant, and aligned with your business goals so you can adopt AI securely and responsibly.
Like what you see? Share with a friend.


