The use of AI in defence has vast potential, but adoption faces critical challenges. Key barriers include the need for problem-driven development, robust data foundations, practical governance, and informed decision-making.
AI presents enormous opportunities for innovation in defence, whether in network security, decision support, or intelligence analysis, but success is not guaranteed.
Policy research at The Alan Turing Institute has repeatedly surfaced challenges and bottlenecks that could prevent high-potential applications from reaching adoption.
Below are some of the most pressing of these challenges, along with recurring recommendations that are emerging in response.
Start Any AI Project With the Problem
Before commencing any ambitious programme of work on AI, the first questions senior decision-makers must ask are what problem they are trying to solve and whether AI will be additive.
Rather than beginning with a fantasy capability and insisting that vendors build it exactly as specified, defence should take inspiration from technology firms: identify a persistent problem in defence that is suited to AI, then find a way to access talented personnel whose task is to explore how an expanded use of AI might solve that problem.
Imposing stringent deliverable requirements too early in the process could inhibit innovation and preclude better-than-envisaged solutions.
The Most Common Bottleneck Is the Data Foundation
Across several use cases of AI in defence, data challenges are a recurring theme. Ultimately, AI is not a panacea: it will not magically fix the limitations of existing human-led defence processes if developers and data scientists do not have access to high-quality datasets.
Before seeking to integrate AI into capabilities, defence should prioritise problems that can be supported by an AI-enhanced, data-driven process. There will also be uncomfortable discussions about whether defence is willing to open its datasets to vendors to build AI-based capabilities. These questions need to be settled upfront, before enormous investments are made, to prevent inefficient use of resources.
For any given AI-based capability, most of the data the system would ideally be trained on does not exist in a form that is sufficiently labelled and parameterised for AI. Defence needs to tackle head-on how to build the underpinning dataset, developing a roadmap and methodology for constructing it so that the AI-based capability becomes possible.

Elevate the Voices of Users When Building Defence AI Governance Frameworks
To build and nurture responsible AI in defence, it is good practice to understand AI implementation and governance challenges from the perspective of day-to-day users of AI-based capabilities in the Front Line Commands.
Existing academic research on AI governance focuses on commercial and frontier AI models, rarely considering near-term defence applications or the experience already accumulated in the field.
Defence practitioners in operational roles report that emerging guidance on AI in defence (e.g. JSP 936) is overly burdensome, and that the criteria for 100 per cent compliance may prevent necessary capabilities from reaching users or grind operators' work to a halt.
Defence assurance and procurement experts also have less experience with AI software than with traditional defence capabilities, and may lack the skills to test, assure, and validate AI in defence systems.
Furthermore, the timelines assumed in AI assurance frameworks are incompatible with the operational tempo in some use cases of AI in defence, meaning there is a need for more pragmatic approaches that enable AI deployment in days and weeks, not months. Insight from users would be indispensable in helping to foresee recurring bottlenecks.
Empower Commanders to Understand the Risks and Opportunities of AI in Defence Scenarios
To help senior decision-makers calibrate their trust in AI-based capabilities in defence, there is a need to better understand how to equip commanders with what they must know before procuring or deploying AI-based capabilities for uses such as decision support, military planning, and defence logistics.
In particular, there is a need to prevent a false sense of security in these systems. Tackling the interpretability problem, especially for non-technical users, will be critical in cases where an AI-based system generates outputs or recommendations that human operators disagree with.
Both under-trust and over-trust are pertinent concerns, so the focus should be on helping senior decision-makers understand what they do not know and which risks are intolerable.
Final Thought
Teams in Defence and National Security that are already successfully leveraging responsible AI have learned the lessons above through trial and error: long, painful periods of testing, funding, and reconceptualising major projects that use AI in defence roles.
However, the opportunity cost of not adopting AI in defence could have far-reaching consequences for the strategic landscape for decades to come. As AI projects come to fruition and promising practices emerge, sharing these evidence-based lessons in a regular and systematic way would be invaluable to the broader defence community.