
Natasa Mihajlovic
Chair, UKAI Life Sciences Working Group

Following their submission to the National Commission into the Regulation of AI's call for evidence, Natasa Mihajlovic, Chair of UKAI's Life Sciences Working Group, says AI in healthcare does not suffer from a lack of regulation so much as a lack of shared understanding about how existing rules apply once systems are live, learning, and embedded in clinical practice.
On 2 February 2026, the UKAI Life Sciences Working Group submitted a response to the Medicines and Healthcare products Regulatory Agency (MHRA) National Commission into the Regulation of AI in Healthcare Call for Evidence. That submission was informed by a roundtable held days earlier, co-chaired with Curia, bringing together voices from across the NHS, regulators, industry, technology providers, and professional and policy communities.
The starting insight was a simple one. Artificial intelligence (AI) in healthcare is already regulated. Medical device regulation, pharmacovigilance, clinical governance, and professional accountability frameworks are all in force and actively applied. The real question is how well these mechanisms cope with software-driven systems that evolve over time, behave differently across settings, and blur traditional lines of responsibility.
When software behaves unlike hardware
Much of the regulatory system is built around relatively static products. AI-enabled systems are different. They may update regularly, learn from new data, or be deployed in ways not fully anticipated at the point of approval.
Roundtable participants, including Dr Mani Hussain of the MHRA, explored how existing frameworks are applied in practice to software-based and AI-enabled medical devices. Particular attention was given to post-deployment behaviour, version control, and how changes are assessed once a system is already in use.

These are not abstract concerns. In real clinical environments, performance variation across trusts, populations, or workflows can have material consequences for safety, confidence, and adoption.
Risk proportionality matters more than novelty
A consistent theme was the importance of risk-based proportionality. Not all AI is created equal, and not all uses carry the same implications.
Systems supporting administrative tasks or low-risk operational decisions raise very different regulatory questions from those that influence diagnosis, treatment, or clinical decision making. Treating them as equivalent risks slowing innovation where it is least dangerous, while failing to focus attention where it matters most.
Participants emphasised that proportionality already exists within regulatory frameworks, but is not always applied consistently or clearly in the context of AI.
Oversight does not end at deployment
Another area of focus was what happens after an AI system goes live. Monitoring performance over time, managing updates, and identifying emerging issues are all essential, yet often unevenly handled.
Existing vigilance concepts offer a starting point. Reporting mechanisms, incident review processes, and governance structures are already familiar to healthcare organisations. The challenge is adapting them to systems that may change incrementally rather than fail dramatically.
Without credible post-deployment oversight, confidence in AI use will remain fragile, regardless of how rigorous pre-market assessment may be.
Clarity of responsibility builds confidence
Responsibility across the AI lifecycle was a recurring concern. Manufacturers, healthcare organisations, and clinicians all have defined roles, but those roles can become blurred when systems are adaptive or embedded deeply into workflows.
Clear allocation of responsibility, linked to function and context of use, is essential not only for accountability but for confidence. Clinicians need to know what they are responsible for. Organisations need clarity on governance obligations. Developers need predictable expectations.
Ambiguity benefits no one.
Regulation as an ongoing conversation
The submission to the MHRA and the accompanying roundtable were practical assessments of how AI is currently regulated and used in healthcare, and where greater clarity or consistency may be needed.
For the UKAI Life Sciences Working Group, this work is a point of continuity rather than conclusion. It anchors ongoing collaboration with Curia, the MHRA, and industry around a shared objective: making AI governance workable within the realities of healthcare delivery.
The future of AI in the NHS will not be decided by whether regulation exists, but by whether it is applied in ways that reflect how healthcare actually functions, and how technology is actually used.