The headlines around AI in defence tend to focus on the dramatic: drones, autonomous weapons, or robot soldiers patrolling future battlefields. But the most important challenges posed by AI in the UK’s defence landscape are less visible and far more fundamental.
They speak to sovereignty, ethics, and long-term security: not just of our systems, but of our strategic autonomy as a nation.
In the latest Strategic Defence Review (SDR), AI rightly features as a game-changing capability.
Yet while there’s plenty of emphasis on AI’s potential, from quicker decision-making and better logistics to improved threat detection, the conversation often skims over more uncomfortable questions.
Who owns the technology underpinning these tools? Can we trust it? And are we truly in control?
For AI to strengthen the UK’s national security, it must be more than powerful. It must be sovereign, ethical, and secure.
Sovereignty in the digital age
Defence sovereignty isn’t a new idea. It’s been central to our thinking for decades, particularly when it comes to the nuclear deterrent, warships, and aircraft.
But as AI systems become embedded in the heart of military decision-making, we need to reimagine what sovereignty means in the digital age.
Right now, much of the world’s leading AI capability, especially in terms of foundational models and computing infrastructure, is in the hands of large multinational firms, many of them based in the US or China.
For routine commercial applications, that’s often a manageable risk. But in defence? It’s a different story.
Imagine a mission-critical logistics tool powered by an AI model built abroad, trained on data we can’t inspect and hosted on a server outside our legal jurisdiction.
That’s not just a data issue; it’s a sovereignty problem. When we don’t fully understand or control the systems underpinning our defence decisions, we open the door to risk, dependency, and even potential compromise.
This doesn’t mean the UK needs to start building every AI tool from scratch. But it does mean we need to be smart and strategic: investing in our own compute capabilities, supporting homegrown AI innovation, and ensuring we maintain access, transparency, and control over any system we deploy in a defence setting.
Ethics: Not just a nice-to-have
The ethical use of AI in defence isn’t just about upholding our values; it’s also about operational integrity and legitimacy. In warfare, decisions carry immense weight.
Whether it’s selecting a target, prioritising resources, or identifying a potential threat, military leaders must be confident that their systems are reliable and explainable.
This is where “black box” AI becomes a real problem. If an algorithm makes a recommendation that can’t be explained – or worse, one that turns out to be wrong – this can erode trust in the system and in the leadership that relies on it.
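As a thought experiment, the sketch below (plain Python, with entirely invented factor names and weights, not drawn from any real system) shows what “explainable by construction” can look like: instead of returning a bare score, the tool returns the per-factor contributions behind its recommendation, so an operator can see the “why” before deciding whether to act.

```python
# Illustrative only: a recommendation that ships with its own explanation.
# Factor names and weights are hypothetical, chosen purely for the example.
from dataclasses import dataclass

WEIGHTS = {
    "proximity_km": -0.4,     # closer contacts score higher (negative weight on distance)
    "speed_kts": 0.3,
    "iff_unknown": 2.0,       # no identification friend-or-foe response
    "emissions_match": 1.5,   # signature matches a known threat profile
}

@dataclass
class Recommendation:
    score: float
    contributions: dict  # factor -> weighted contribution to the score

def recommend(features: dict) -> Recommendation:
    # Keep the per-factor breakdown rather than discarding it after scoring.
    contributions = {k: WEIGHTS[k] * features[k] for k in WEIGHTS}
    return Recommendation(score=sum(contributions.values()), contributions=contributions)

rec = recommend({"proximity_km": 12.0, "speed_kts": 0.9, "iff_unknown": 1.0, "emissions_match": 0.0})
print(f"priority score: {rec.score:.2f}")
for factor, value in sorted(rec.contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {factor}: {value:+.2f}")
```

A commander reviewing this output sees not just a number but which factors drove it, and can challenge any of them. That interrogability is exactly what a black-box model withholds.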
The Ministry of Defence has begun to address these concerns, setting out principles around the lawful and proportionate use of AI. But these principles need to be backed by robust oversight mechanisms, built-in accountability, and the ability to interrogate AI decisions when lives are on the line.
We also need to be honest about the pace of change. AI development is fast and getting faster. We can’t afford to bolt on ethics after the fact. Ethical considerations must be part of the design process from the very beginning and embedded in procurement decisions, testing, deployment, and training.
This is an area where the UK could take a real leadership role. A publicly stated Defence AI Ethics Charter, developed with partners across academia, industry, and civil society, could send a powerful signal about our intent to use AI in a responsible, human-centric way.
Security: The less glamorous but critical challenge
We don’t talk enough about the cybersecurity risks that come with AI. As we integrate machine learning into more of our operational systems, we expose new attack surfaces, whether it’s data poisoning, adversarial attacks, or software supply chain threats.
AI is only as good as the data it’s trained on. If that data is compromised, the outputs will be too. Worse still, many AI tools today are trained on open datasets or third-party repositories that could introduce vulnerabilities without anyone noticing until it’s too late.
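One practical mitigation is to treat training data like any other controlled supply chain. The sketch below assumes a pinned manifest of SHA-256 digests recorded when a dataset was approved for use; the file paths and truncated hashes are placeholders, not real artefacts. A training run that fails the check stops loudly instead of quietly learning from tampered data.

```python
# A minimal sketch, assuming every approved training file has its SHA-256
# digest recorded in a manifest at approval time. Paths and digests below
# are placeholders for illustration.
import hashlib
from pathlib import Path

APPROVED = {
    "data/tracks_2024.csv": "9f2c...",   # truncated placeholder digest
    "data/labels_2024.csv": "41ab...",   # truncated placeholder digest
}

def verify_dataset(manifest: dict[str, str]) -> list[str]:
    """Return the paths whose current hash no longer matches the manifest."""
    tampered = []
    for path, expected in manifest.items():
        digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        if digest != expected:
            tampered.append(path)
    return tampered

if bad := verify_dataset(APPROVED):
    raise RuntimeError(f"refusing to train: integrity check failed for {bad}")
```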
And it’s not just about malicious threats. There’s also the risk of over-reliance. In future contested environments – say, a GPS-denied battlefield or disrupted comms zone – our digital infrastructure could come under huge pressure. AI tools that are normally reliable might falter. Do we have contingency plans? Can we operate effectively if those systems fail?
Building resilience into our AI-enabled systems is as important as making them cutting-edge. That means planning for failure, designing in redundancy, and ensuring human operators can take back control when needed.
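In code terms, that principle can be as simple as a routing rule: act automatically only when model confidence and connectivity are both healthy, and hand everything else back to a human with the machine’s view attached as advice. The threshold, signals, and names in this sketch are assumptions for illustration, not any fielded system’s design.

```python
# Illustrative fallback pattern: automation is conditional, and the default
# on degradation is human control. All names and values here are invented.
CONFIDENCE_FLOOR = 0.85  # assumed threshold below which a human must decide

def route_decision(model_confidence: float, link_healthy: bool, recommendation: str) -> str:
    if link_healthy and model_confidence >= CONFIDENCE_FLOOR:
        return f"AUTO: {recommendation}"
    # Degraded comms or low confidence: revert to the operator, presenting
    # the model's suggestion as advice rather than action.
    return (f"HUMAN REVIEW: model suggested '{recommendation}' "
            f"(confidence {model_confidence:.2f}, link healthy: {link_healthy})")

print(route_decision(0.92, True, "re-route convoy via grid 41N"))
print(route_decision(0.60, False, "re-route convoy via grid 41N"))
```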
Where do we go from here?
The UK is well placed to shape the future of AI in defence – but only if we act decisively and thoughtfully. Here are five priorities I believe we must focus on:
- Secure Sovereign Capability: Invest in sovereign compute infrastructure and UK-developed AI models for defence use. Independence doesn’t mean isolation; it means strategic control.
- Ethics at the Core: Make ethical design and oversight an integrated part of defence AI procurement and deployment. Don’t bolt it on after the fact.
- Demand Explainability: Ensure that AI tools can justify their outputs in ways commanders can understand and act upon. Trust requires clarity.
- Design for Resilience: Build systems that can operate in degraded environments and include clear, tested fallback protocols for when things go wrong.
- Support Talent and Innovation: Grow the domestic AI talent pipeline – not just engineers, but also ethicists, legal thinkers, and digital strategists with defence literacy.
Artificial intelligence is not a silver bullet for defence, but it is a powerful tool that will shape the UK’s strategic edge for decades to come. Used wisely, AI can help us make faster, smarter decisions and better protect our interests at home and abroad.
But to realise that promise, we need more than ambition. We need a clear-eyed view of what it means to build AI that we can trust: AI that reflects our values, safeguards our sovereignty, and strengthens our national security.
The UK has the opportunity to lead by example. Let’s make sure we take it.