Six months after the launch of the Government’s AI Opportunities Action Plan, the artificial intelligence trade body UKAI hosted a review event to assess the Plan’s progress.
The review examined delivery of all 50 of the Action Plan’s recommendations, each of which was accepted by the Government in its official response to the Plan.
The third in the series of panels, titled “What Are the Opportunities Today?”, was chaired by Baroness Thangam Debbonaire. Joining her on the panel were Indra Joshi (Director of Strategic Engagement at OptumUK), Alex Ktorides of Bristows, Alex Kirkhope (Partner at Shoosmiths), Ed de Minckwitz (Director of Public Policy at ServiceNow UK) and Elizabeth Seger (Associate Director of Digital Policy at Demos).
Where are we now – and where should we be?
Baroness Thangam Debbonaire opened the session by referencing earlier discussions on competitiveness, regulation, and whether government is aligned with public expectations. She asked the panel to introduce themselves and answer: where do you think we’ll be in six months or two years, and where would you like us to be?
Elizabeth Seger of Demos noted that while policy is slow, AI is fast and is already hitting the ground. She said: “Where will we be in six months? Policy is slow. AI is fast. But AI is hitting the ground, people are starting to experience it in daily life: in transportation, government services, even politics.”
Ed de Minckwitz, Director of Public Policy at ServiceNow, said he hoped to see the ambitions of the AI Action Plan put into practice. He described the gap between frontier innovation and public sector adoption, and said applying private sector use cases to political pain points would make a visible difference to citizens.
After 15 years in government, I’ve seen the gap widening between frontier innovation and public sector adoption. But in the private sector, I see rapid transformation.

My hope, in six months or maybe two years, is that we see the ambitions of the AI Action Plan put into practice: improving public services, building trust, and boosting the economy.

If we applied private sector use cases to political pain points – immigration, GP backlogs, courts – people would feel the difference. That’s what we need.

Ed de Minckwitz
How do we raise political literacy?

Next, Baroness Debbonaire asked another question: how do we get politicians to understand AI better – and bring the public with them?
Elizabeth Seger described a recent Demos effort: a six-month scheme to educate MPs about AI. “We just ran a six-month AI parliamentary scheme to educate MPs about how AI works, what data it uses, what the risks are. No policy agenda, just education. And yes, there were ‘no stupid questions’ moments.”
Following Seger’s comments, Dr Indra Joshi explained that policymakers often fall prey to common misconceptions about AI: “It’s about education. Policymakers hear ‘AI in healthcare’ and think of Google or ChatGPT. But that’s a narrow slice of what’s possible. We’ve also overhyped AI. It’s not going to magically solve everything. And some of the things the public want, like joined-up services, aren’t even about AI. They’re about better data integration, APIs, and governance.”
Ed de Minckwitz argued that there is significant misalignment between industry and policymakers. Industry, he explained, does what it does and asks Government to figure out how that can fit into the wider picture. Government, he argued, wants solutions to its own specific problems – problems that, more often than not, are completely different to industry’s priorities.
How do you pitch AI to the public?
The conversation then turned to how to get members of the public “excited” about AI, with Alex Ktorides highlighting how AI can empower people by boosting skills and opening up new work opportunities.
Baroness Debbonaire then pressed the panel for concrete examples, saying: “My mum wouldn’t be convinced. She wants concrete examples,” to which Elizabeth Seger replied: “Then tell her: AI is already messing with her life.
It’s deciding what she sees online. It shapes her search results. It can introduce bias, manipulate, deceive. And there’s no clear liability framework.”
Seger argued for distributed accountability, “because product liability laws don’t work when anyone can download and run AI”. She went on to urge: “The UK has a shot at becoming a hub for AI assurance. Let’s leverage that.”
What do you want to see in two years’ time?
Baroness Debbonaire closed the session by asking the panel what one thing they want us to be doing differently in two years, and how the room could help.
Ed de Minckwitz said the Action Plan recognises how hard this is, and called for radical change to procurement and new partnerships between government and industry.
Alex Ktorides said liability is key. He described AI systems as layered and complex, and said that if the UK can lead on trustworthy legal frameworks, it could become a global hub.
Elizabeth Seger gave three points: solve liability, stop fighting the AI safety agenda and monetise it, and build AI sovereignty through collaboration. She said the UK can’t compete with the US alone and needs shared data, public AI infrastructure, and open-source partnerships.
Dr Indra Joshi said that in healthcare, liability falls on clinicians regardless of the tech, and that this needs to change. She added that responsibility can’t be pushed onto frontline workers just because systems haven’t evolved.
Baroness Debbonaire ended the panel by saying: “Let’s figure out where we still need humans. Jobs aren’t just going away. Creativity is something we can all enjoy more, if we build regulation that protects creators. And I’m speciesist. I want humans to have a future with AI. I believe it’s possible, and it’s people in this room who can help us get there.”