Legal AI tool

Introduction

Branding image that shows the SmartAssist brand surrounded by various illustrations of people who are confused or stressed

Using AI to assist vulnerable people in accessing legal help, reducing friction and improving service delivery.

Sector

Not-for-profit, Social Impact, Legal Tech, Artificial Intelligence (AI)

Challenge

Design and implement the first trial integration of the custom-built AI model to validate its real-world effectiveness.

My Role

User Experience, Design Strategy, Information Architecture, User Research

Our Solution

Screenshot of the updated AI results, matched categories page

A/B testing

An A/B integration that introduced a new “AI-first” pathway into the Intake Tool, allowing direct comparison between experiences, continuous iteration, and gradual rollout. Successive improvements led to a 48% reduction in abandonments and validated the model’s value, enabling its use in other Justice Connect tools.

How we got there

Background

The Intake Tool is an award-winning platform that helps people apply for legal services online. It reduces admin load and improves service matching by collecting key information upfront.

The full list of options on the legal categories page of the Program Sorter

Hard to Categorise

Despite its success, 41% of users reported difficulty categorising their legal issues. Users had to pick from up to 25 legal categories, many of them legalistic or unclear, which led to confusion and misclassification.

Introducing the AI

Justice Connect developed an AI model that could interpret user-submitted text and suggest relevant legal categories from 32 options. My role began once the model was ready to be tested in a live setting.
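
To make the integration concrete, here is a minimal sketch of how an intake form might request suggestions from a model like this. The endpoint, payload shape, and confidence threshold are illustrative assumptions, not Justice Connect’s actual API.

```ts
// Hypothetical sketch: requesting category suggestions from the model.
// Endpoint, payload shape and threshold are assumptions for illustration.

interface CategorySuggestion {
  category: string;   // e.g. "Tenancy", "Employment"
  confidence: number; // model score between 0 and 1
}

async function suggestCategories(
  userText: string,
  minConfidence = 0.5,
): Promise<CategorySuggestion[]> {
  const response = await fetch("/api/ai/classify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: userText }),
  });
  if (!response.ok) {
    // Fall back to the manual category list if the model is unavailable.
    return [];
  }
  const suggestions: CategorySuggestion[] = await response.json();
  // Keep only suggestions the model is reasonably confident about,
  // strongest match first.
  return suggestions
    .filter((s) => s.confidence >= minConfidence)
    .sort((a, b) => b.confidence - a.confidence);
}
```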

Part of the flowchart showing where the AI tool fits into the Intake Tool process.

Strategic Integration Point

The AI was introduced at the point where users select their legal issue—an ideal moment for A/B testing, with minimal disruption to the rest of the tool.

Screenshot of the old legal categories list page showing where the user can opt-in to use the AI

Opt-In Didn’t Work

Initially, the AI was optional, but only 6% of users tried it. We then defaulted 25% of users into the AI pathway, with an option to switch back. Surprisingly, 60% of them opted out of the AI, suggesting deeper hesitation or usability issues.
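
As a rough illustration of this kind of rollout, the sketch below assigns a fixed share of sessions to the AI pathway while respecting an explicit switch. The hashing scheme, storage key and pathway names are assumptions, not the Intake Tool’s actual implementation.

```ts
// Illustrative rollout sketch: 25% of sessions default to the AI pathway,
// and a stored override records when a user switches pathways.

type Pathway = "ai-first" | "category-list";

// Deterministic hash so a returning session always lands in the same bucket.
function bucketOf(sessionId: string, buckets = 100): number {
  let hash = 0;
  for (const char of sessionId) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0;
  }
  return hash % buckets;
}

function assignPathway(sessionId: string, rolloutPercent = 25): Pathway {
  // Respect an explicit choice if the user has already switched pathways.
  const override = localStorage.getItem("pathwayOverride") as Pathway | null;
  if (override) return override;
  return bucketOf(sessionId) < rolloutPercent ? "ai-first" : "category-list";
}

// Called when the user chooses to switch out of (or into) the AI pathway.
function switchPathway(choice: Pathway): void {
  localStorage.setItem("pathwayOverride", choice);
}
```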

Forced Approach and Insights

We trialled a forced AI group (no option to opt out), and within three months we saw:

  • 8% lower abandonment
  • 12% lower likelihood of selecting “something else”, which we treat as a rough measure of confidence
  • Extreme AI results: some users received 10+ suggested categories, while others saw no matched categories

Screenshot of the categories list available to users of the AI. Each category is grouped, and each group can be expanded or collapsed

Rethinking Categories

We conducted a deep qualitative analysis of user-submitted text to understand gaps in language and matching, and made several changes:

  • Refined eligibility logic and result filtering (see the sketch after this list)
  • Created new user-centred categories like “Divorce matter” and “Neighbourhood dispute”
  • Expanded to 39 AI-supported categories (from 25), without overloading users thanks to personalised suggestions
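
As a rough sketch of the result-filtering idea above, not the production logic, the snippet below caps the number of suggestions shown and falls back to the full grouped category list when nothing matches. The cap and the names are assumptions.

```ts
// Assumed result-filtering sketch: avoid the two extremes seen in the trial,
// screens with 10+ suggestions and screens with no matched categories.

interface CategorySuggestion {
  category: string;   // e.g. "Divorce matter", "Neighbourhood dispute"
  confidence: number; // model score between 0 and 1
}

const MAX_SUGGESTIONS = 5; // assumed cap on how many suggestions are shown

type ResultView =
  | { mode: "suggestions"; categories: string[] }
  | { mode: "full-list" }; // fall back to the grouped category list

function filterResults(suggestions: CategorySuggestion[]): ResultView {
  const top = [...suggestions]
    .sort((a, b) => b.confidence - a.confidence)
    .slice(0, MAX_SUGGESTIONS);

  if (top.length === 0) {
    // No matched categories: show the full grouped list rather than an empty page.
    return { mode: "full-list" };
  }
  return { mode: "suggestions", categories: top.map((s) => s.category) };
}
```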

Outcomes and learnings

This work successfully demonstrated that the AI had value for users, staff and the sector. The AI has continued to drive iteration and has since been integrated into other Justice Connect tools.

Improvements to Justice Connect online intake

  • 48% reduction in abandonments
  • 10% increase in service conversion
  • 20% reduction in “Something else” selections

Independent analysis

An independent analysis showed that those who used the AI in the Intake Tool were:

  • Less likely to quit when asked to select their relevant area of law
  • Able to identify the relevant area of law with specificity rather than selecting multiple options
  • Able to complete an application more quickly