The Regular AI Meeting Your In‑House Legal Team Should Already Be Running
Intro: It looks like everyone is using AI. Most legal teams aren’t.
With only a few scrolls on LinkedIn it can feel as though every in‑house legal team is already running sophisticated AI‑enabled workflows.
That pressure is real. Many General Counsels are no longer asking whether AI is relevant, but whether they risk falling behind by not doing something now.
The reality inside most organisations is far less advanced than the conversation around it. In our recent discussion with former GC Amber Foster, she shared that meaningful adoption across in‑house teams is still rare. Most activity sits at the level of individual productivity, with small pilots, quiet experiments, and what she described as pockets of excellence. It remains unusual to see a legal function scaling AI consistently across its workflow.
If AI adoption isn’t meaningful to the team and the wider business, it is worth asking whether there is any real value in using it at all. When lawyers lack confidence and spend their time second‑guessing outputs, adoption slows and enthusiasm fades. This is where structure starts to matter.
Sustainable AI use in a legal context rarely starts with technology. It starts with judgement: deciding what work should be supported, where human oversight must remain involved, and how outputs will be reviewed before they move forward.
For many teams, what’s missing is not curiosity or willingness to experiment, but a recurring way to make sense of that experimentation together.
This is where an AI meeting becomes crucial.
Before You Book the Meeting: A Small Amount of Discovery Matters
Discussions about AI tend to stall when teams move straight to tools without understanding where support would actually help.
Amber noted that legal teams often begin by asking which platform they should be using, whether that is a general tool such as ChatGPT or Copilot, or a legal‑specific product entering the market.
A more useful starting point is discovery:
- Where is legal slowing the business down?
- Which requests reach the team repeatedly?
- What work is escalated that does not require legal judgement?
- What arrives poorly structured before it reaches legal review?
This does not need to become a large‑scale transformation exercise.
Reviewing inboxes for recurring queries and speaking with sales or operations teams about where contract turnaround creates friction can quickly surface early workflow gaps. Contracts returned because information is missing, repeated policy questions, or intake requests that arrive incomplete are often signs of process issues rather than legal ones.
These are the types of workflows that can later be supported safely by AI through structured intake, internal assistants grounded in company policy, or triage processes that reduce unnecessary escalation.
Discovery gives the meeting something concrete to examine.
Struggling to turn discovery into clear AI use cases?
We work with in‑house legal teams to surface the workflows where AI is genuinely a good fit, starting with how your team actually works.
How to Run a Regular AI Meeting in Your Legal Team
Moving from individual experimentation to operational use requires shared habits.
Rather than sending people away to experiment in isolation and report back informally, a meeting creates a space for the team to align on how AI fits into legal work in practice.
Below is a format that can be repeated at least monthly, and in some cases weekly.
1. Bring One Real Piece of Work
Avoid theoretical examples. Each participant should bring:
- a contract,
- a policy document,
- a compliance request, or
- another recent piece of live work.
The aim is to test whether AI can support something the team already does, such as summarising key obligations, redrafting into plain language, extracting deadlines, preparing internal guidance from dense policy text, or triaging intake requests before legal review.
Working with real materials makes both value and risk immediately visible.
2. Experiment Together
Use the session to explore how AI handles that material as a group.
Discuss what it gets right, where assumptions appear, what information is missing, and how outputs would need to be reviewed before they could be relied upon.
Leaders should be comfortable sharing unsuccessful attempts as well as effective ones. That openness helps normalise experimentation and prevents confidence sitting with only a small number of early adopters.
Over time, working through real examples together helps the team build judgement about where support is appropriate.
3. Run a Judgement Mapping Exercise
These meetings are also an opportunity to revisit what work should remain firmly human‑led.
As a team, map tasks into two categories:
- work that requires legal judgement and accountability,
- work that can be accelerated or structured safely with support.
Summarising or preparing first‑pass drafts may sit in the second category. Final legal advice, risk decisions, or approval to proceed would typically remain in the first.
Making this distinction explicit reduces the risk of inconsistent delegation later. Oversight should happen throughout the process, not only once an output has been produced.
4. Discuss Value, Not Prompts
Adoption often fragments when discussions focus on which prompts people used.
A more useful question is simpler: where did AI unlock value this week or month?
That value might show up as reduced contract turnaround time, clearer internal guidance, fewer incomplete intake requests, or time redirected toward higher‑risk work.
Sustainable adoption depends on understanding business impact, not just tool usage. Tracking measures such as speed to contract, clarity of outputs, or time saved alongside maintained quality provides a more reliable signal of progress.
5. Design Review Into the Workflow
No AI system is fully accurate. Hallucinations remain a practical risk.
The session should therefore include how outputs are verified, challenged, and reviewed before they are relied upon. Review processes may vary depending on risk level, but agreeing these norms as a team helps prevent informal or inconsistent use across individuals.
The Importance of the AI Champion
Leadership is what turns AI adoption from individual experimentation into a shared practice.
Amber’s advice is direct. If you are leading a legal team, you need to lead this change. That does not require deep technical expertise. It requires visibility, a willingness to experiment, and openness about what works and what does not.
Confidence grows fastest when leaders use meetings to experiment alongside their teams, working through real examples and discussing outcomes honestly. Successes matter, but so do failures.
Many organisations misunderstand the role of an AI champion. Simply naming one or giving people access to tools is not enough. The role only works when value becomes a standing part of the conversation. Amber’s suggestion was practical and repeatable: at least monthly, ask how AI has unlocked value, and capture what changed.
For most in‑house teams, the General Counsel is best placed to play this role. When the GC leads by example, AI stops being a personal productivity experiment and becomes a shared way of working.
Why This Matters
Legal teams rarely see lasting benefit from isolated experimentation.
Without a shared forum to examine real work, agree where delegation is appropriate, and connect AI use to business impact, AI remains a personal productivity aid rather than an operational support.
Used consistently, these meetings create space to refine judgement, strengthen oversight, and integrate AI responsibly into existing workflows. Over time, the technology becomes less of a novelty and more a dependable part of how the legal team works.
Key takeaway
AI adoption inside legal teams rarely arrives as a single project or implementation milestone. More often, it develops through repeated decisions about where support adds value, where judgement must remain involved, and how outputs are reviewed before work moves forward.
A regular AI meeting gives legal teams a way to make those decisions deliberately. It turns scattered experimentation into a sustainable practice the team can rely on, without stepping away from its responsibility for judgement, risk, and accountability.
If you’d like help identifying where AI support makes sense for your team, you can use the form below to start a discovery conversation.