White paper: A pro-innovation approach to AI regulation

In June 2023, JUSTICE responded to the Government’s White Paper, “A pro-innovation approach to AI regulation” (the “Consultation”), presented by the Department for Science, Innovation & Technology.

In its response, JUSTICE recognised that Artificial Intelligence (“AI”) has the potential to radically change our society and the way we live our lives. JUSTICE therefore encourages embracing the opportunities of AI, while recognising that a range of risks must be anticipated and mitigated alongside them.

In the context of law and justice, this involves asking some important questions: what does a human rights and rule of law approach to AI look like? What is the role of the courts in ensuring accountability in AI? What are the opportunities and challenges of using AI in law enforcement, the courts and legal services? How can we ensure AI enhances, and does not undermine, access to justice and the rule of law?

JUSTICE has recently established a new workstream dedicated to “AI and the Law” to consider these questions.

JUSTICE’s response to the Consultation set out five key messages:

  1. AI can provide many benefits. However, it can also pose risks to human rights, democracy and the rule of law: risk-based governance is required to effectively protect individuals and society from irresponsible AI design, development and deployment. We suggest that human rights, democracy and the rule of law should be explicitly embedded as principles in that governance.
  2. Clear regulation supports innovation: establishing guardrails and legal duties will improve legal certainty and inspire confidence and trust among private and public AI actors, users and those subject to AI decision-making processes.
  3. A statutory duty is necessary: it would oblige AI actors to act responsibly throughout the AI life cycle, whereas optional ethical guidelines do not provide reliable, consistent, or enforceable standards. Meanwhile, the existing legal landscape is a patchwork of different private and public rights and liabilities, which does not sufficiently secure accountability for the proposed AI principles.
  4. Transparency is a “gateway” principle: opacity undermines the ability of civil society and directly impacted individuals to assess the safety and fairness of AI systems, and in turn undermines the accountability and contestability of AI.
  5. If the regulation of AI is going to be primarily through existing sectoral regulators, a central body is critical. Its functions should include identifying gaps in governance, and it could additionally provide a central public engagement function.

Read our response here.