
Lord Tim Clement-Jones CBE
Liberal Democrat Lords Spokesperson for Science, Innovation and Technology

AI regulation is not the enemy of innovation – it is the condition that gives businesses, public services and citizens the confidence to deploy AI safely, fairly and at scale, writes Lib Dem Lords Spokesperson Lord Clement-Jones.

Lord Clement-Jones CBE is Liberal Democrat Lords Spokesperson for Science, Innovation and Technology, former Chair of the House of Lords Select Committee on AI, and author of Living with the Algorithm: Servant or Master? – AI Governance and Policy for the Future.
There is a persistent myth that regulation and innovation exist in opposition. The evidence suggests otherwise. What truly stifles AI adoption is not regulation itself, but the absence of clear regulatory frameworks. Small- and medium-sized enterprises (SMEs) face paralysis when they cannot assess compliance costs or liability exposure. Healthcare organisations with life-saving AI capabilities delay implementation because accountability structures remain undefined. This hesitancy is not timidity; it is rational business judgment in a landscape of regulatory ambiguity.
When developers and adopters lack clarity on algorithmic accountability, data governance requirements, or liability frameworks, the economically sensible response is inaction. We are not witnessing a failure of technology, but a failure of governance to provide the certainty that unlocks investment and deployment.
History offers instructive parallels. Victorian railway expansion accelerated once Parliament established safety regulations and liability frameworks. Financial services innovation thrives within clear regulatory perimeters. Similarly, comprehensive AI regulation – properly designed – creates the conditions for confident investment and rapid scaling. Businesses need to know the rules of the game before they will commit resources to playing it.
The Government’s enthusiasm for AI-driven efficiency is understandable. With “Humphrey” tools being developed and major departments investing heavily, the ambition is clear. However, ambition without accountability is reckless. Recent failures are instructive. The Department for Work and Pensions’ (DWP’s) algorithmic error affected 200,000 housing benefit claimants: families subjected to intrusive investigations based on faulty automated judgments. The Netherlands’ childcare benefits catastrophe destroyed lives and brought down a government. These were not minor administrative glitches; they were systemic failures of governance, exposing how automated decision-making, unchecked, can perpetuate discrimination and institutional bias.
The Horizon scandal’s lessons remain unlearned. Citizens facing automated decisions in benefits, immigration, or justice deserve meaningful routes to challenge and redress. The Algorithmic Transparency Recording Standard is inadequate. My Private Member’s Bill addressed this gap by proposing mandatory Algorithmic Impact Assessments and effective dispute resolution for public sector AI. When algorithms make consequential decisions about people’s lives, transparency and contestability are not optional – they are democratic necessities.
AI regulation and governance
Parliament must help drive AI governance, not merely respond to government proposals. Four priorities demand immediate action:
- Legislative urgency. Voluntary commitments from technology companies are insufficient. We need comprehensive, risk-proportionate legislation establishing mandatory standards for high-risk AI systems. Impact assessments, bias testing, and accountability structures must be legally enforceable, not aspirational. The current Government’s cautious approach allows regulatory gaps to persist while AI deployment accelerates unchecked.
- Global standards leadership. The EU’s AI Act demonstrates how regulatory clarity, despite complexity, enables business confidence. We should be actively shaping international standards convergence through ISO, OECD, and UNESCO frameworks. Regulatory alignment is not harmonisation – we can maintain our pro-innovation approach while ensuring our standards are internationally recognised and compatible.
- Creative sector protection. The Data Use and Access Act debates revealed deep tensions around intellectual property. Ongoing litigation – creators versus developers such as OpenAI and Anthropic – highlights unresolved questions about training data. Parliament must establish clear principles: creators deserve compensation when their work trains commercial AI systems; transparency about training datasets should be mandatory; mechanisms for tracking and attributing AI-generated content mimicking human creativity are essential. Without resolution, we risk undermining the very creative industries where Britain excels.
- Digital capability building. Technical literacy cannot remain the preserve of specialists. Citizens need practical understanding of how AI systems affect their lives, how their data is used, and what rights they possess. This is not about creating a nation of programmers; it is about enabling informed participation in an AI-shaped society. Digital citizenship should be as fundamental as civic education. Without it, public scepticism will constrain beneficial AI adoption, regardless of how well we regulate.
The path forward requires us to demand specific government action: legislation establishing clear frameworks distinguishing high-risk from lower-risk AI applications; robust accountability mechanisms for public sector deployment, including mandatory impact assessments and effective redress; active participation in international standards development; and corporate governance requirements ensuring businesses embed ethical AI principles in operational practice.

Adopted earlier this year, the Government’s AI Opportunities Action Plan acknowledged that “well-designed regulation can fuel fast, wide, and safe AI development”. This recognition is welcome. Now we need delivery. Smart regulation does not constrain technological potential; it creates the certainty, trust, and accountability that transform potential into reality.
Parliamentary leadership on AI means championing frameworks that protect fundamental rights while enabling transformative innovation. It means demanding transparency and accountability from both government and industry. It means ensuring the benefits and risks of AI are distributed equitably, not concentrated among the already powerful.
This is not about choosing between innovation and safety, growth and ethics, technology and humanity. It is about recognising that sustainable AI development requires all these elements working in concert.
That, for me, is what responsible parliamentary leadership on AI truly means.