The launch of a new AI tool is often a moment of celebration. The promise is huge: streamlined processes, new insights, and exciting possibilities. But in the race to go live, a fundamental question is often overlooked: when your AI makes a critical decision, who is actually responsible for it?
It is a question few leaders can answer with confidence. The data reveals a stark gap between ambition and oversight at the highest levels. A 2025 EY survey found that while a vast majority of leaders are pushing for greater AI integration, fewer than a third could confidently identify their organisation’s key AI risks.
A crucial step is being missed in this race. Deploying AI isn’t just a technical upgrade; it is the introduction of a new decision-making entity into your organisation. And if we don’t pause for some serious reflection before going all-in, we risk building solutions that are not only ineffective but potentially discriminatory, opaque, and legally non-compliant.
The ‘Black Box’ Problem: Can You Explain Your AI’s Decisions?
Imagine you have a new AI model for approving loan applications. It assesses thousands of data points and produces a ‘yes’ or ‘no’ with remarkable speed. On the surface, it’s a model of efficiency. But what happens when a deserving applicant is rejected and asks for the reason?
It’s a simple question, but answering it can be a genuine challenge. The honest answer might be: “we don’t precisely know.” This is where you encounter the ‘black box’ problem. The inner workings of many powerful AI systems, whether developed in-house or supplied by a vendor, are opaque, and the more sophisticated the system, the more opaque it tends to be. These systems arrive at conclusions through a complex web of calculations that even their own designers can struggle to fully interpret.
That level of ambiguity might be acceptable for something trivial like a film recommendation, but it’s completely untenable for life-altering decisions concerning a person’s finances, career, or home. From a legal, ethical, and public trust standpoint, you must be able to explain the reasoning behind your decisions. This directly impacts your ability to implement what’s known as meaningful human oversight: the vital principle that a human must always be able to understand, challenge, and ultimately override an AI’s decision. Without that clear explanation, any human involvement risks becoming a rubber-stamping exercise rather than a genuine safeguard. How can your organisation stand by a conclusion if you can’t unravel the logic behind it? It’s a serious vulnerability in any critical process.
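To make this concrete, here is a minimal sketch of one approach: using an inherently interpretable model (a logistic regression) so that each individual decision can be broken down into named feature contributions a human can review. The feature names and training data are hypothetical stand-ins for illustration, not a prescription for how a loan model should be built.

```python
# A minimal sketch of per-decision explainability for a hypothetical
# loan-approval model. An inherently interpretable logistic regression is
# used so every rejection can be traced to named feature contributions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["income", "debt_ratio", "years_employed", "missed_payments"]

# Illustrative, synthetic training data (a stand-in for historical decisions).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] - X[:, 3] > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain_decision(applicant: np.ndarray):
    """Return each feature's signed contribution to the approval score."""
    z = scaler.transform(applicant.reshape(1, -1))[0]
    contributions = model.coef_[0] * z
    return sorted(zip(feature_names, contributions),
                  key=lambda c: abs(c[1]), reverse=True)

applicant = np.array([-0.5, 1.2, 0.1, 2.0])  # an applicant who was rejected
for name, contribution in explain_decision(applicant):
    print(f"{name}: {contribution:+.2f}")
```

Where a more complex model is unavoidable, post-hoc explanation techniques exist, but the requirement is the same: each decision must be attributable to reasons a human can understand, challenge, and override.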

Automating Past Flaws: The Problem of Biased Data
This next point requires some honest reflection. An AI model is only as fair and impartial as the data it’s trained on: both in terms of whether its content is unbiased and whether its use is lawful under regulations like the GDPR. Think of the AI as a diligent student, absorbing and replicating every pattern it’s shown.
If an organisation’s historical data contains reflections of societal or internal biases, there is a significant risk the AI will learn those patterns and apply them at scale. For example, if an organisation has unconsciously favoured certain demographics in past hiring decisions, a new AI recruitment tool will diligently learn to replicate that behaviour.
This can happen when the system identifies seemingly neutral data points (or ‘proxies’) that are often linked to those demographics, which can lead to indirect discrimination. The system might learn to penalise things like:
- Gaps in employment history, which can be a proxy for gender (e.g., taking time off for childcare), disability (e.g., for a period of illness), or pregnancy and maternity.
- The type of university attended, which could be a proxy for a candidate’s socio-economic background, their age, or even a disability if they chose an institution based on its specific support facilities.
- A candidate’s postcode, which can be a strong proxy for their socio-economic background, health outcomes, and even their race.
This isn’t because the AI is inherently prejudiced, but because it has learned from data that reflects our own human history. This is a tangible risk with real-world consequences. We’ve seen numerous examples of AI systems exhibiting bias, such as the well-known case of Amazon’s recruiting tool, which had to be scrapped after it taught itself to discriminate against women. This isn’t just a reputational issue; it’s discrimination. It’s important to remember that anti-discrimination laws around the world hold organisations liable for such outcomes, whether under the UK’s Equality Act 2010, the EU’s anti-discrimination directives, or Title VII of the Civil Rights Act in the United States.
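In practice, a first-pass audit for this kind of proxy bias can be simple. The sketch below uses a hypothetical recruitment dataset (the column names and figures are invented for illustration): it compares shortlisting rates across groups, where a common rule of thumb flags a ratio below 0.8, and checks whether a seemingly neutral feature correlates with a protected attribute.

```python
# A minimal sketch of a pre-deployment bias audit on a hypothetical
# recruitment dataset. The protected attribute ("sex") is assumed to be
# recorded for audit purposes only; all values are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "sex":            ["F", "F", "M", "M", "F", "M", "F", "M", "M", "F"],
    "employment_gap": [1,    1,   0,   0,   1,   0,   0,   0,   1,   1],
    "shortlisted":    [0,    0,   1,   1,   0,   1,   1,   1,   0,   0],
})

# 1. Shortlisting rate per group and the disparate impact ratio
#    (the "four-fifths" rule of thumb flags ratios below 0.8).
rates = df.groupby("sex")["shortlisted"].mean()
impact_ratio = rates.min() / rates.max()
print(rates)
print(f"disparate impact ratio: {impact_ratio:.2f}")

# 2. Does a 'neutral' feature encode the protected attribute?
proxy_strength = df["employment_gap"].corr(df["sex"].eq("F").astype(int))
print(f"correlation of employment_gap with sex=F: {proxy_strength:.2f}")
```

A flagged correlation doesn’t prove discrimination on its own, but it tells you exactly where to look before the model is ever deployed.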
Performance Degradation: The Ongoing Task of AI Maintenance
It’s easy to think of AI implementation as a one-off project: you build or buy the system, switch it on, and it runs perfectly from that day forward. In reality, this is just the beginning of the journey.
The problem is that the world doesn’t stand still, and the data flowing through your organisation is constantly changing. At the extreme, a model trained on user behaviour before a major global event like the COVID-19 pandemic can quickly become irrelevant, as the fundamental patterns of work, travel, and purchasing change almost overnight. More often, the decay is gradual. This loss of performance is known as ‘model drift’ or ‘data drift’, and it’s driven by the constant flux of the real world: demographics shift, market conditions fluctuate, and new consumer trends emerge.
Without a robust plan for continuous monitoring, testing, and retraining, a model’s performance can degrade over time, sometimes without anyone noticing. It can become less accurate, less relevant, and can even start making actively poor decisions. The questions are: who has ownership of your AI’s performance in six months? In a year? What is your strategy for keeping it aligned with the real world?
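As one concrete illustration of what that monitoring might involve, the sketch below assumes you keep a reference sample of the data a model was trained on and periodically compare it against recent production data; the feature name, figures, and alert threshold are all illustrative.

```python
# A minimal sketch of a drift check: a two-sample Kolmogorov-Smirnov test
# per feature is one simple way to flag when a distribution has shifted.
# All data here is synthetic and purely illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
reference = {"basket_value": rng.normal(50, 10, 5_000)}  # training-time sample
live = {"basket_value": rng.normal(58, 12, 5_000)}       # recent production sample

ALERT_P_VALUE = 0.01  # illustrative threshold; tune to your tolerance for false alarms

for feature, ref_sample in reference.items():
    result = ks_2samp(ref_sample, live[feature])
    if result.pvalue < ALERT_P_VALUE:
        print(f"Drift suspected in '{feature}' "
              f"(KS statistic = {result.statistic:.3f}): review and consider retraining")
```

In production, checks like this would run on every significant feature, on a schedule, with alerts routed to whoever owns the model’s performance.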

The Legal Imperative: Preparing for AI Regulation
Finally, for anyone who might see these issues as purely ethical or optional considerations, there is now an urgent and compelling imperative to act. AI governance and security frameworks take time to build, and key regulatory deadlines are approaching much faster than many realise. The landscape is changing, and regulation is solidifying.
Across the globe, and particularly with the landmark EU AI Act, governments are establishing clear legal frameworks for AI. The Act’s ‘high-risk’ category covers systems used to make critical decisions about people’s futures, such as in recruitment, access to essential services, or the justice system; those systems will face stringent requirements for transparency, human oversight, and data governance.
Even for organisations located outside the EU, the bloc’s market size means its standards often become the de facto global benchmark. As organisations and supply chains are globally interconnected, compliance with these standards is increasingly a prerequisite for international collaboration and public trust. Getting ahead of this curve is no longer just good practice; it’s a strategic necessity.
The Path Forward: Building Genuinely Intelligent AI
Having a clear view of these challenges is essential for any successful AI strategy. The goal isn’t to put the brakes on innovation; the technology holds enormous promise. Instead, it’s about channelling the energy of the AI race into a more thoughtful mission: developing the technology responsibly. It can be tempting to view these issues as yet another burdensome compliance task. But they are much more than that: they are the foundation for building systems that are genuinely intelligent, transparent, and fair.
Getting AI governance right builds significant trust and effectiveness. Organisations that can demonstrate their AI is fair, transparent, and robust will build deeper trust with the public, attract better talent, and deliver superior, more reliable outcomes. They will be the leaders in the next phase of digital transformation.
To put this into practice, here are four practical actions for a responsible AI strategy:
- Demand Explainability. Don’t just accept ‘black box’ solutions for critical tasks. Make transparency a core requirement for any system you build or buy. Treat it as a fundamental feature, not an optional extra.
- Audit for Bias as Standard Practice. Proactively analyse your training data before it ever touches a model. Make fairness assessments a non-negotiable stage in your development lifecycle.
- Design for Maintenance. Treat your AI models as living systems. Assign clear ownership and allocate resources for their ongoing monitoring, retraining, and governance from day one.
- Embrace Regulation as a Guide. View upcoming legislation not as a threat, but as a valuable roadmap. Use its principles to structure your governance and build a framework for best practice ahead of the curve.
Putting these principles into action is the next critical step, and several excellent frameworks can guide the way. Resources like the NIST AI Risk Management Framework and the international standard ISO/IEC 42001 provide detailed, practical guidance, while the text of the EU AI Act offers a clear preview of what legal obligations are on the horizon. The key is to treat these not as prescriptive checklists, but as toolkits of best practice to help you build a governance programme that is right for your organisation.
So, who is responsible for your AI’s decisions? Tackling the points we’ve discussed doesn’t slow down innovation; it gives it direction and integrity. The answer isn’t a single person but the organisation’s commitment to a robust framework of diligence: a framework that ensures decisions can be explained, bias is actively addressed, models are constantly maintained, and clear human oversight is always in place. By embedding these practices into your process, you ensure that your investment in AI leads to progress that is sound, responsible, and sustainable. You move from simply implementing new technology to leading with trustworthy innovation.