Artificial Intelligence (AI) has the potential to fundamentally change society, from how we access information to how businesses, state agents, and others make important decisions about us.
The justice system is no exception. Police already use AI to boost their surveillance capacity and inform their decisions. Lawyers use AI to advise clients on legal risks, and people ask AI tools for legal advice. Some judges abroad use AI to assess the risk of reoffending, while in the UK judges have already started using AI to write their judgments.
The potential benefits are huge: major gains in efficiency and in the availability of legal advice, along with more consistent decision-making, among other things.
But the risks are huge too. Overuse of AI by judges risks eroding judicial independence and legitimacy. AI tools giving bad legal advice could scar lives and worsen the divide between those who can afford a lawyer and those who cannot.
Finally, the use of big data and AI by police and other state bodies risks entrenching biases which already exist in society and worsening inequalities – along the lines of race, for example – hurting communities already badly underserved by our justice system and corroding public trust.
- Influence policy and practice to ensure the use of data and AI in the justice system improves access to justice, advances human rights, and strengthens the rule of law.
- Focus on those who are, or who are at risk of being, underserved by the use of data and AI in the justice system, including those who are ignored by beneficial technological developments and those who are already over-exposed to harmful technologies.
JUSTICE has a reputation as a leading voice in the modernisation of the justice system. We have reported on the risks and opportunities of digitising the courts and tribunals, as well as the role of data in upholding the rule of law.
During COVID-19 we staged what we believe was the world’s first entirely virtual mock jury trial, partnering with academics to analyse its impact on accessibility and fairness.
A common thread in all this work has been JUSTICE’s ability to maintain a pragmatic and principled approach to technology. We embrace innovative technology but expect it to enhance the protection of human rights, access to justice and the rule of law.
We will take this pragmatic and principled approach to the new workstream on AI, human rights and the law.
JUSTICE Parliamentary briefing on automated decision-making and proposed amendments to data protection safeguards in the Data Protection and Digital Information Bill.
AI in Our Justice System – A Rights-Based Framework: This report proposes the first rights-based framework to guide AI use across the UK justice system, to help us harness AI's power while guarding against its risks.
Lessons learned: Deploying AI in criminal justice
Though research exists on what AI use in the criminal law context should look like, far less work has been done on how it is already being used on the ground – in under-resourced courts and over-stretched police forces, for example – and the difficulties arising. We will compile and analyse lessons from around the world on this topic to help policymakers learn fast and avoid repeating harmful mistakes.
Access to justice for unmet legal need (focus: civil and administrative law)
One of AI’s biggest potential benefits is its capacity to help people who cannot afford legal representation access accurate and timely legal information and advice. We will make recommendations for how that can be achieved while preventing or mitigating harms deriving from inaccurate or unsafe advice.
Effective AI regulation
There is no clear roadmap for AI regulation in the legal sector or wider justice system – a task complicated by a patchwork landscape of public and private regulators. We will engage with regulators, users and the ultimate beneficiaries of regulation to establish a common understanding of what accountability and effective redress for unfair or unsafe AI look like. We will then work with regulators and other actors to identify where there is insufficient accountability, transparency or contestability, and how this can be improved.
Please direct queries to Ellen Lefley, Senior Lawyer.