The justice system plays a vital role in people’s lives and our democracy – it determines who cares for children, how crimes are punished, how financial and workplace disputes are resolved, and much more. Yet it is beset by crises: court delays run to years, many people with legal problems cannot access the legal advice they need, and prisons are overcrowded.
The Government intends to use AI to ‘revolutionise’ public services, and AI is already shaping the justice system, for example through police surveillance, legal research, and advice bots.
Yet AI is not a cure-all, and it carries significant risks. Cases like the Post Office Horizon scandal and the Dutch child benefits scandal (where thousands were falsely accused of fraud due to a discriminatory algorithm) show the serious harms technology can enable. The UK justice system also has more data gaps than any other public service, creating extra challenges for responsible AI use.
While most other approaches to AI focus on ethics, this report sets out a rights-based approach that draws on concrete, well-understood, and enforceable legal rights. It proposes two clear requirements for those looking to use AI in the justice system:
Drawing on international examples and in-depth research, the report argues that this framework can help safely harness AI’s potential across the justice sector. The framework is also readily transferable to other public services.