Over the past year, many law firms have moved from talking about AI to actively experimenting with it. Tools like Harvey, Microsoft Copilot, Claude, and Perplexity are increasingly appearing inside law firm technology stacks, often beginning with small trials or internal experiments.
The challenge is that many of these pilots start informally: a few attorneys testing a tool, a short trial from a vendor, or an internal working group exploring possibilities. While this can be a useful first step, unstructured experimentation rarely produces clear answers.
A well-designed AI pilot should do more than simply test whether a tool “works.” It should help a firm determine where AI can create real operational value, what risks must be managed, and how the technology can integrate into existing workflows.
Below is a practical framework we use with firms when designing an AI pilot.
The most successful pilots begin with clearly defined workflows, not broad experimentation.
Rather than asking “How can we use AI?”, firms should identify specific tasks that are repetitive, time-consuming, or information-heavy.
Common early use cases vary by practice area. For example, a litigation team might test AI on brief summarization and issue spotting, while a regulatory group may focus on analyzing agency guidance.
The goal is to measure whether AI can meaningfully reduce time spent on routine work while maintaining quality.
Pilots should involve a focused group of attorneys and staff, not the entire firm.
A typical pilot group brings together participants with different roles and levels of seniority.
Including both experienced and junior attorneys helps surface different perspectives. Senior lawyers often evaluate accuracy and judgment, while junior lawyers focus on speed and workflow improvements.
The pilot group should also include IT or knowledge management staff, who can monitor usage patterns and technical integration issues.
Many firms initially evaluate only a single AI product. In practice, it is often helpful to compare multiple models and tools, as their strengths differ significantly.
Running several tools side by side allows the firm to identify which platform performs best for particular tasks.
In many cases, the long-term solution may involve multiple AI tools rather than a single platform.
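As an illustration of a side-by-side comparison, attorney feedback can be tallied per tool and per task with a short script. The tool names, tasks, and ratings below are entirely hypothetical; a real pilot would populate them from participant feedback forms.

```python
# Hypothetical side-by-side scoring of pilot tools on specific tasks.
# Tool names, tasks, and ratings are illustrative placeholders.
from collections import defaultdict
from statistics import mean

# Each record: (tool, task, attorney rating on a 1-5 scale)
ratings = [
    ("Tool A", "brief summarization", 4),
    ("Tool A", "brief summarization", 5),
    ("Tool B", "brief summarization", 3),
    ("Tool A", "guidance analysis", 3),
    ("Tool B", "guidance analysis", 5),
    ("Tool B", "guidance analysis", 4),
]

# Group ratings by (tool, task) and average them
scores = defaultdict(list)
for tool, task, rating in ratings:
    scores[(tool, task)].append(rating)
averages = {key: mean(vals) for key, vals in scores.items()}

# Pick the top-rated tool for each task
tasks = {task for _, task in averages}
best = {
    task: max((key for key in averages if key[1] == task),
              key=averages.get)[0]
    for task in tasks
}
# best maps each task to its top-rated tool,
# e.g. "brief summarization" -> "Tool A" for this sample data
```

Even a simple tally like this makes it easier to justify keeping different tools for different tasks rather than forcing a single-platform decision.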
AI pilots should not begin without basic governance in place.
Firms should establish ground rules before testing begins, particularly around what client or firm data may be used with each tool and how outputs will be reviewed.
Technical guardrails may also be implemented through systems such as Microsoft Purview, which can help monitor data usage and enforce information protection policies.
The objective is not to slow experimentation, but to ensure that testing occurs within a controlled environment.
At the end of a pilot, firms often ask participants whether they “liked the tool.” While user feedback is valuable, the most useful pilots collect measurable data.
Metrics may include time saved on specific tasks, the accuracy of outputs, and how much attorney review is still required.
For example, a pilot might reveal that AI reduces first-pass document review time by 30–40%, while still requiring attorney oversight.
These insights allow firms to determine where AI creates real efficiency gains and where human review remains essential.
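A time-savings figure like the one above can be computed directly from pilot logs. The minutes below are hypothetical; the sketch assumes the firm records how long comparable review tasks take with and without AI assistance.

```python
# Minimal sketch of computing time savings from pilot logs.
# All numbers are hypothetical sample data.
from statistics import mean

# Minutes spent on first-pass document review for comparable matters
baseline_minutes = [120, 95, 140, 110]      # without AI assistance
ai_assisted_minutes = [75, 60, 90, 70]      # with AI assistance

# Fractional reduction in average task time
reduction = 1 - mean(ai_assisted_minutes) / mean(baseline_minutes)
print(f"Average time reduction: {reduction:.0%}")  # prints "Average time reduction: 37%"
```

Tracking the same tasks across participants keeps the comparison honest and gives firm leadership a concrete number rather than anecdotes.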
Even if the pilot is successful, AI tools cannot operate in isolation.
Firms should evaluate how AI will integrate with the systems and workflows attorneys already rely on.
Without integration, AI often becomes another standalone tool rather than a workflow accelerator.
Planning for integration early helps ensure that successful pilots can transition into long-term operational use.
An AI pilot should not be viewed as a one-time test.
Instead, it should serve as the foundation for a broader AI strategy, informing decisions about tool selection, governance, and workflow integration.
Firms that approach pilots in this structured way tend to move beyond experimentation much more quickly.
AI is already beginning to reshape how legal work is performed, but meaningful adoption requires more than simply turning on new software.
A well-designed pilot allows firms to test AI in a controlled environment, measure real outcomes, and identify the workflows where the technology can create the most value.
Firms that approach AI thoughtfully today will be far better positioned to adapt as the technology continues to evolve.