You open an AI tool at 9:47 p.m., paste one “harmless” client email, and feel the relief arrive before the risk does.
AI tools for solos can save hours, smooth rough drafts, and make a one-person practice feel less like a candlelit siege. But they can also expose client information, weaken privilege arguments, create false citations, and quietly turn a practical shortcut into a professional-responsibility problem. Today, in about 15 minutes, you will get a plain-English, solo-friendly framework for using AI without treating client secrets like confetti.
Safety / Disclaimer: Treat AI Like a Locked File Cabinet, Not a Magic Intern
This article is general educational information for U.S.-focused solo lawyers and small practices. It is not legal advice, ethics advice, or a substitute for your jurisdiction’s rules, court orders, engagement letters, malpractice carrier guidance, or professional judgment.
The American Bar Association’s Formal Opinion 512 explains that lawyers using generative AI must consider duties such as competence, confidentiality, communication, supervision, candor to tribunals, and reasonable fees. That list is not decoration. It is the skeleton of safe use.
Here is the practical translation: AI may help with drafting, summarizing, organizing, and brainstorming, but you remain the lawyer. The tool does not owe your client loyalty. It does not carry malpractice insurance. It does not get a nervous stomach before a sanctions hearing.
- Classify the information before using a tool.
- Redact client-identifying facts unless safeguards support disclosure.
- Verify every legal claim before relying on output.
Apply in 60 seconds: Write “No raw client facts in public AI tools” on your AI-use checklist.
When to Seek Help Before Using AI on Client Work
Pause before using AI when a matter involves criminal defense, immigration status, family violence, medical records, minors, trade secrets, settlement strategy, employee complaints, privileged advice, or protected discovery. These are not “paste and pray” situations.
Also slow down when a court rule, judge’s order, protective order, or client instruction mentions AI, confidentiality, verification, or disclosure. In solo practice, a five-minute check can prevent a five-month headache.
- Ask an ethics hotline if your state bar offers one.
- Review your professional-conduct rules and comments.
- Check local court rules on AI-generated filings.
- Consult a cybersecurity professional for sensitive workflows.
- Contact your malpractice carrier if AI use may affect coverage expectations.
Who This Is For, and Who It Is Not For
This guide is for solo attorneys who want to use AI without turning their practice into a liability piñata. It is especially useful for lawyers handling intake, client emails, demand letters, discovery summaries, legal research triage, marketing content, template drafting, billing descriptions, and administrative cleanup.
It is also for the lawyer who has 27 browser tabs open, one motion due tomorrow, and a coffee that has passed from beverage into evidence. I have seen smart professionals make risky technology choices not because they are reckless, but because they are tired. Fatigue is a terrible security officer.
For Solos Who Want Speed Without Turning Client Secrets Into Vapor
AI can be genuinely useful for solo practice. It can convert rough notes into structure, turn dense explanations into clearer client-facing language, create checklists, summarize public materials, and help you compare options. Used well, it can give a solo lawyer a little breathing room.
But useful does not mean harmless. The difference between a safe prompt and a dangerous one is often one client name, one docket number, one direct quote, or one oddly specific fact pattern.
Not for “AI Will Replace My Judgment” Workflows
This is not for lawyers hoping AI will make legal calls, invent research, diagnose strategy, or handle clients without review. A solo practice can be lean. It cannot be hollow.
Simple boundary: AI may assist the work. It should not become the lawyer, the witness, the client, or the record keeper of last resort.
The Solo-Specific Problem: No IT Department Is Coming
Large firms may have procurement teams, security reviews, contract specialists, and internal AI policies. Solos often have a laptop, a calendar, a password manager, and a heroic number of sticky notes. That does not make safe AI impossible. It means the rules must be simple enough to use on a Wednesday.
For a broader small-firm security lens, it also helps to compare AI controls with cybersecurity tools for small law firms, because confidentiality is rarely protected by one tool alone.
Eligibility Checklist: Is This AI Task Ready?
- Yes / No: Can I do this without identifiable client information? If yes, proceed with caution.
- Yes / No: Have I checked whether the tool stores, trains on, or reviews inputs? If no, stop.
- Yes / No: Will I independently verify legal claims and citations? If no, do not use the output.
- Yes / No: Would I be comfortable explaining this workflow to the client? If no, redesign it.
Neutral action: Use this checklist before adding any new AI tool to client work.
The Privilege Trap: AI May Feel Private Until It Isn’t
The prompt box feels intimate. It sits on your screen like a private notepad. You type, it responds, and no one appears to be in the room. That feeling is exactly why the risk can sneak past careful lawyers.
Attorney-client privilege and confidentiality are related, but they are not twins. Privilege is usually an evidentiary protection for certain communications. Confidentiality is broader and often covers information relating to representation, even if it could be found elsewhere.
Privilege Is Not the Same as Confidentiality
A lawyer can create confidentiality risk before anyone litigates privilege. For example, entering a client’s negotiation posture into a poorly reviewed AI tool may never become a formal privilege-waiver fight. It can still violate duties, client expectations, protective orders, or contractual obligations.
That is the shoe pebble solos need to feel early. The question is not only, “Will privilege be waived?” The first question is, “Should this information leave my controlled environment at all?”
The Third-Party Problem Hiding in the Prompt Box
If a tool provider can store, review, reuse, disclose, or train on prompts, then the prompt may not be as private as the interface suggests. Different tools have different terms. Enterprise plans may provide stronger protections than consumer plans. Legal-specific platforms may offer additional contractual promises. None of that should be guessed.
I once reviewed a workflow where the lawyer had carefully redacted the client name but left a rare business fact, exact city, exact transaction date, and a direct quote from the opposing party. It was “anonymous” in the same way a giraffe in sunglasses is undercover.
One Quiet Question Before Every Prompt
Ask this before you paste anything: Would I send this exact text to a third-party vendor without a confidentiality agreement, security review, or client consent?
If the answer is no, the prompt is not ready. Rewrite it. Generalize it. Use placeholders. Or do the work without AI.
Show me the nerdy details
For sensitive legal work, risk often turns on more than one factor: the type of information, the tool’s terms, data retention, human review, model training, access controls, encryption, client consent, protective orders, and applicable professional rules. A safer review asks how data enters, where it goes, who can access it, how long it stays, and whether it can be deleted.
Confidentiality Checklist: What to Remove Before You Prompt
Redaction is not just blacking out names. Good redaction removes the path back to the client. In a solo practice, that path may be short: one town, one business, one lawsuit, one unusual fact, and suddenly the “hypothetical” is wearing a name tag.
The State Bar of California’s practical guidance warns lawyers not to input confidential client information into generative AI tools that lack adequate confidentiality and security protections. That is a useful bright-line instinct even outside California: do not treat sensitive client facts as casual typing material.
Strip the Client’s Identity First
Start with obvious identifiers. Remove client names, company names, employee names, opposing parties, family members, addresses, phone numbers, email addresses, account numbers, Social Security numbers, medical record numbers, docket numbers, and file names.
Then remove identifiers that hide in plain sight: exact job titles, exact dates, exact locations, rare diagnoses, transaction values, property descriptions, and internal project names.
Remove the Legal Fingerprint, Too
Some fact patterns identify themselves. A “small manufacturer in western Pennsylvania that terminated its CFO after a March 2026 whistleblower email” may not need a name to become recognizable.
Use buckets instead of specifics:
- Replace exact names with “Client,” “Employer,” or “Opposing Party.”
- Replace exact dates with “early 2026” or “after the complaint.”
- Replace exact dollar values with ranges if the number is not essential.
- Replace direct quotes with neutral summaries.
- Replace rare facts with broader categories.
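As an illustration only, the bucket substitutions above can be sketched as a tiny helper script. Every name, pattern, and placeholder below is hypothetical, and no script should be trusted to redact on its own: a human still reviews every result before text leaves your controlled environment.

```python
import re

# Hypothetical example substitutions; these patterns and placeholders are
# illustrative only. A lawyer must still review the output by hand.
BUCKETS = [
    (r"\bJane Smith\b", "Client"),                 # exact names -> role labels
    (r"\bMercy Valley Hospital\b", "Employer"),    # business names -> roles
    (r"\bon March 3, 2026\b", "in early 2026"),    # exact dates -> ranges
    (r"\$1,250,000\b", "a seven-figure amount"),   # exact values -> buckets
]

def bucketize(text: str) -> str:
    """Apply coarse placeholder substitutions to a draft prompt."""
    for pattern, placeholder in BUCKETS:
        text = re.sub(pattern, placeholder, text)
    return text

draft = "Jane Smith was terminated by Mercy Valley Hospital on March 3, 2026."
print(bucketize(draft))
# -> Client was terminated by Employer in early 2026.
```

The point of the sketch is the habit, not the tool: keeping your substitutions in one reviewable list makes it obvious which specifics you have replaced and which are still leaking through.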
Use the “Newspaper Test” Before You Submit
Before hitting enter, imagine the prompt appearing in a local newspaper. Would a client, opposing counsel, employer, neighbor, or judge recognize the matter? Would the disclosure embarrass, harm, or disadvantage the client?
If yes, redact more or avoid the tool. A good prompt should carry the legal shape of the question without dragging the client’s furniture into the street.
Infographic: The 4-Step Solo AI Confidentiality Gate
1. Classify: Is this generic, confidential, privileged, court-facing, or client-facing?
2. Redact: Remove names, numbers, rare facts, direct quotes, and legal fingerprints.
3. Review Tool: Check training, storage, retention, security, access, and deletion terms.
4. Verify: Human-check law, facts, citations, tone, deadlines, and client fit.
Tool Tiers: Public, Enterprise, Legal-Specific, and Local AI
Not all AI tools carry the same risk. The practical question is not “Is AI safe?” That is too broad. The better question is, “Safe for what task, with what information, under what terms, reviewed by whom?”
Think of tools in tiers. This does not make the decision perfect, but it gives the solo lawyer a map instead of a fog bank.
Public AI Tools: Useful, But Treat Them as Hot Surfaces
Public AI tools can be helpful for generic brainstorming, formatting, plain-language explanations, topic outlines, non-client-specific checklists, and marketing drafts. They are not where raw client facts should wander unattended.
Good use: “Create a general checklist for preparing a small-business contract review.” Risky use: “Here is my client’s full contract and negotiation history. Tell me what to do.” One is a sandbox. The other is a suitcase left open at the bus station.
Enterprise AI: Better Controls, Still Not a Free Pass
Enterprise plans may offer stronger privacy terms, access controls, admin settings, data-retention options, and audit features. They can be worth evaluating, especially for solos who routinely handle sensitive documents.
But a paid plan is not a magic cloak. Read the terms. Review whether prompts train models. Check retention settings. Confirm whether human review occurs. Know who can access the workspace.
Legal-Specific AI: Safer Does Not Mean Self-Verifying
Legal-specific tools may be built around research, drafting, contract review, litigation, or practice management. Examples in the broader legal technology market include Westlaw Precision, Lexis+ AI, vLex, CoCounsel, and document automation tools integrated into practice platforms.
These products may offer better legal workflows, but the lawyer still verifies the output. The tool can help find the staircase. You still decide whether the stairs go where your client needs to go.
If your practice also touches legal-tech product design, the same caution shows up in AI-based risk scoring for law firm workflows, where speed, governance, and explainability have to sit at the same table.
Local AI: The Locked-Room Option With Its Own Maintenance Burden
Running an AI model locally may reduce third-party disclosure risk because data stays on your device or local server. But local does not automatically mean secure. If the laptop has weak passwords, no encryption, poor patching, or sloppy backups, the “private” model is sitting in a cardboard vault.
Decision Card: Which Tool Tier Fits the Task?
| Use Case | Better Fit | Risk Note |
|---|---|---|
| Generic checklist | Public or enterprise AI | Avoid client-specific facts. |
| Client document summary | Reviewed enterprise or legal-specific tool | Check terms, consent, and retention. |
| Highly sensitive strategy | No AI, local secure workflow, or vetted legal platform | Use extra caution and document review. |
Neutral action: Match the tool to the sensitivity of the task before comparing features or price.
Client Consent: When Silence Is Not Enough
Client consent is not a decorative paragraph in an engagement letter. It is a real conversation when AI use may involve confidential information, materially affect the representation, or create risks the client should understand.
Some AI use may not require a dramatic disclosure. A generic blog outline for your law firm website is different from uploading a client’s confidential settlement memo. The hard part is knowing when you have crossed the line from office tool into representation risk.
Informed Consent Needs More Than Boilerplate
Informed consent should be understandable. A client should know what kind of AI tool may be used, what information may be shared, why the tool is being used, what safeguards exist, what alternatives are available, and what risks remain.
A vague clause saying “we use technology” may not be enough for meaningful AI use. That phrase is a lampshade, not a window.
What a Client Should Understand
A useful disclosure can be short. It does not need to sound like a software license got trapped in a law-school basement.
- What task the AI tool may help with
- Whether confidential information will be entered
- Whether the tool has reviewed privacy and security controls
- Whether the lawyer will verify all output
- Whether the client can object or request a non-AI workflow
Engagement Letter Language Should Not Become a Fog Machine
For low-risk administrative use, a general technology paragraph may be enough in some practices. For sensitive uses, consider a more specific clause or matter-level consent. Keep it plain. Keep it honest. Keep a record.
I like consent language that a tired client can understand on a phone screen. If it requires three rereads and a tiny ceremonial candle, simplify it.
For firms that build or review consent-heavy workflows, digital consent tracking tools offer a useful parallel: consent is strongest when it is specific, stored, retrievable, and tied to the actual action taken.
- Name the kind of AI use.
- Explain the information involved.
- Offer a practical alternative when needed.
Apply in 60 seconds: Add one plain-language AI-use sentence to your engagement-letter review list.
Common Mistakes: Where Solo Lawyers Accidentally Create Risk
Most AI mistakes in solo practice are not cinematic. There is no thunderclap. No villain monologue. Just a lawyer moving fast, trusting a polished answer, and forgetting that convenience can wear very good shoes.
Meaningful Mistake 1: Pasting the Whole Client Email Thread
Full threads are dangerous because they carry more than the question. They include names, strategy, emotions, attachments, privileged comments, metadata-like clues, and sometimes the client’s worst sentence written at their worst moment.
Instead, summarize the issue in neutral terms. Remove names. Remove direct quotes. Keep only what the tool needs.
Meaningful Mistake 2: Asking AI to “Find the Law” Without Verification
AI can sound confident while inventing citations, blending jurisdictions, or using outdated standards. This is not merely embarrassing. In court-facing work, it can become sanctions territory.
Use AI for issue spotting or structure if appropriate, but verify the law through trusted legal research platforms, court websites, statutes, rules, and primary authority.
Meaningful Mistake 3: Treating AI Output Like a Junior Associate You Supervised
A junior associate can explain their reasoning, receive training, understand firm policy, and be disciplined. AI does not do any of that. It produces text. Sometimes useful text. Sometimes nonsense in a clean shirt.
The solo lawyer owns the result. That means review is not optional. It is the product.
Meaningful Mistake 4: Forgetting Former and Prospective Clients
Confidentiality does not begin only when the retainer clears. Intake notes, consultation summaries, rejected matters, conflict-check details, and former-client facts need care too.
Meaningful Mistake 5: Billing the Old Way for AI-Assisted Work
If AI helps reduce time, billing should reflect the actual work, judgment, review, and value provided. Reasonable fees matter. Charging as if you manually spent three hours on a task completed and reviewed in forty minutes can create trouble.
The same billing-and-documentation discipline matters in other automated legal products too, including ethics opinion letter automation, where the final professional judgment cannot be replaced by a template or model output.
Fee / Rate Table: AI-Assisted Work and Billing Notes
| Billing Situation | Risk Level | Practical Note |
|---|---|---|
| AI drafts, lawyer revises and verifies | Moderate | Bill for actual time, judgment, and review. |
| Flat-fee matter with AI efficiency | Moderate | Confirm fee remains reasonable for scope and value. |
| Charging manual time not spent | High | Avoid billing fiction. It ages poorly. |
Neutral action: Add AI-assisted work to your billing review habits before invoices go out.
The Solo AI Use Policy: A One-Page Rulebook That Actually Gets Used
A solo AI policy should be short enough to read and strict enough to matter. If it takes 19 pages, you will ignore it during a deadline. If it is only “be careful,” it will not save you when the inbox starts throwing chairs.
Build the policy around three zones: green, yellow, and red. You can tape it near your desk, save it in your practice management system, or keep it as a pinned note.
The Green Zone: Usually Safer Uses
Green-zone tasks are generic, non-confidential, and non-client-identifying. They help with structure, clarity, and operations.
- Generic blog outlines
- Plain-English explanations of public legal concepts
- Office checklists
- Non-client-specific email templates
- Marketing drafts reviewed for ethics compliance
The Yellow Zone: Use Only With Redaction and Review
Yellow-zone tasks may involve legal work but can often be made safer through redaction, tool review, and human verification.
- Drafting a non-final client letter from redacted facts
- Summarizing a neutral, anonymized timeline
- Creating deposition-prep checklists
- Organizing issue lists
- Simplifying already-verified research
The Red Zone: Do Not Use Without Strong Safeguards
Red-zone information includes raw client documents, privileged strategy, settlement posture, medical records, immigration details, criminal-defense facts, family-law safety details, trade secrets, login credentials, protected discovery, and anything a client has prohibited.
Blunt rule: If disclosure would make your stomach tighten, do not paste it into a tool you have not fully reviewed.
- Green tasks are generic.
- Yellow tasks need redaction and verification.
- Red tasks require stronger safeguards or no AI at all.
Apply in 60 seconds: Create a three-column note labeled Green, Yellow, and Red.
Prompt Hygiene: How to Ask Without Spilling the Case
Prompt hygiene is the art of asking for help without donating your client’s life story to a tool. It is not glamorous. Neither is washing your hands, and civilization seems fond of that.
Use Role-Free Facts, Not Client Facts
Instead of “My client, Jane Smith, a nurse at Mercy Valley Hospital, was fired on March 3,” try “A hypothetical healthcare employee was terminated after raising a scheduling concern.” You preserve the legal structure without packing the client into the prompt.
When jurisdiction matters, include it carefully. “Under New York law, what issues might a lawyer research in this hypothetical employment scenario?” is safer than a full factual confession.
Replace Specifics With Buckets
Use ranges and categories. Exact facts are sometimes necessary, but many prompts do not need them.
- “Seven-figure contract” instead of the exact number
- “Mid-sized employer” instead of the company name
- “Early 2026” instead of the exact date
- “A regulated industry” instead of the niche business
- “A written complaint” instead of quoted client language
Keep a Prompt Log for Sensitive Workflows
For higher-risk AI use, keep a simple log: tool used, date, task, input type, redaction method, output reviewed, legal authority checked, and final use. This is not bureaucracy for sport. It is a memory rail.
Months later, when someone asks how you verified a document, you do not want to answer with vibes and a calendar migraine.
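A minimal sketch of what such a log might look like, assuming a plain CSV file on an encrypted machine. The field names here are assumptions, not a standard; the point is that each sensitive AI use leaves a specific, retrievable record.

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical schema; adapt the fields to your own practice. What matters
# is that every sensitive AI use produces one retrievable row.
FIELDS = ["date", "tool", "task", "input_type", "redaction_method",
          "output_reviewed", "authority_checked", "final_use"]

def log_ai_use(path: Path, entry: dict) -> None:
    """Append one AI-use record, writing a header row on first use."""
    new_file = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(entry)

log_ai_use(Path("ai_use_log.csv"), {
    "date": date.today().isoformat(),
    "tool": "enterprise chat tool",
    "task": "organize issue list",
    "input_type": "redacted summary",
    "redaction_method": "buckets and placeholder names",
    "output_reviewed": "yes",
    "authority_checked": "yes",
    "final_use": "internal outline only",
})
```

A spreadsheet or a note in your practice management system works just as well; the format is irrelevant as long as the record exists before anyone asks for it.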
Mini Calculator: Prompt Sensitivity Score
Give yourself 1 point for each “yes.”
- Does the prompt identify or strongly imply a client?
- Does it include legal strategy, settlement posture, or privileged advice?
- Does it include medical, financial, criminal, immigration, employment, or family safety facts?
Output: 0 points = usually lower risk. 1 point = redact and review. 2–3 points = use stronger safeguards or do not use AI.
Neutral action: Score the prompt before you submit it, not after.
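The three-question calculator above is simple enough to run in your head, but writing it out makes the thresholds explicit. This is a direct transcription of the scoring rule in this section, nothing more.

```python
def prompt_sensitivity_score(identifies_client: bool,
                             includes_strategy: bool,
                             includes_protected_facts: bool) -> str:
    """One point per 'yes', per the three checklist questions above."""
    score = sum([identifies_client, includes_strategy, includes_protected_facts])
    if score == 0:
        return "usually lower risk"
    if score == 1:
        return "redact and review"
    return "use stronger safeguards or do not use AI"

# A prompt that reveals strategy and protected facts scores 2:
print(prompt_sensitivity_score(False, True, True))
# -> use stronger safeguards or do not use AI
```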
Accuracy Risk: The Hallucination Problem Is an Ethics Problem
The most dangerous AI output is not the weird one. Weird is easy to catch. The dangerous output is polished, plausible, and wrong. It arrives wearing a suit and carrying a fake case citation.
Lawyers have already faced public consequences for submitting AI-generated legal materials with false or unverified authorities. Courts do not usually admire “the chatbot seemed confident” as an argument.
False Citations Are Not Just Embarrassing
False citations can waste court time, harm client interests, damage credibility, and trigger sanctions. Even when the court does not impose discipline, the reputational bruise can linger.
For solos, credibility is inventory. Protect it with almost unreasonable care.
Verify Like Your Bar Card Has Ears
Check every case, statute, regulation, rule, quote, deadline, filing requirement, and jurisdictional statement. Use primary sources where possible. Use trusted legal research tools. Read the authority yourself.
Do not let AI be the final source for law. It can be a sketchpad. It should not be the courthouse foundation.
Here’s What No One Tells You: AI Can Be Wrong in a Useful Voice
The voice is the trap. AI may organize the wrong answer beautifully. It may give you headings, transitions, and a sense of command. Bad law in elegant prose remains bad law. It just causes trouble with better posture.
Operator rule: Use AI to accelerate thinking, not to outsource verification.
Client Communication: What to Say Without Scaring People
Clients do not need a lecture on model architecture. They need honest language. Tell them what you use, why you use it, what you do not put into it, and how you protect their information.
A good explanation builds trust. A vague explanation breeds suspicion. A theatrical explanation makes everyone want to leave the room.
Use Plain Language, Not Robot Theater
Try this: “I may use secure technology tools to help organize drafts or administrative materials, but I do not put your confidential information into public AI tools without appropriate safeguards and, when required, your informed consent.”
That is clear. It is not perfect for every practice, but it is the right kind of sentence: specific, calm, and reviewable.
Give Clients a Real Choice
If AI use requires consent, do not corner the client. Explain the non-AI alternative, even if it costs more time. A meaningful choice is part of trust.
I have seen clients relax when lawyers say, “We can do this without AI.” The option itself lowers the temperature.
Use a Short AI Disclosure Script
Keep scripts for three situations: intake, engagement letter, and matter-specific consent. Do not improvise every time. Improvisation is lovely in jazz and risky in compliance.
- Intake: “Please do not send sensitive documents until we confirm representation and secure handling.”
- Engagement: “Our office uses technology tools under professional confidentiality safeguards.”
- Matter-specific: “For this task, I recommend using a secure AI-assisted review tool. Here is what it will and will not receive.”
Workflow Checklist: Before, During, and After AI Use
The safest AI workflow has three doors: before, during, and after. Most lawyers focus on the middle door: the prompt. The real protection comes from the whole hallway.
Before: Classify the Task
Ask whether the task is generic, redacted, confidential, privileged, court-facing, client-facing, advice-producing, or administrative. A generic checklist and a confidential strategy memo do not belong in the same bucket.
During: Limit the Input
Use the minimum facts necessary. Do not upload documents merely because it is easy. Do not include attachments unless the tool is approved for that information. Keep prompts narrow.
After: Human Review Is the Product
Review for accuracy, tone, legal authority, factual fit, jurisdiction, deadlines, privilege, confidentiality, bias, and unintended admissions. If output goes to a court, client, opposing counsel, agency, or public website, review twice.
Let’s Be Honest: The Fastest Prompt Is Often the Most Expensive One
The dangerous prompt often appears at the end of a long day. It says, “Summarize this entire file.” It feels efficient. It may be reckless.
- Classify before prompting.
- Minimize during prompting.
- Verify after output.
Apply in 60 seconds: Add “Classify, Minimize, Verify” to your AI-use checklist.
Short Story: The Deadline, the Draft, and the Almost-Paste
A solo lawyer I knew had a motion due the next morning and a client email thread that looked like it had survived a small kitchen fire. The temptation was obvious: paste the thread into AI and ask for a clean timeline. Instead, she stopped for four minutes. She created a neutral timeline herself, replaced the client’s name with “Client,” removed the employer, changed exact dates to sequence markers, and stripped direct quotes. The AI helped organize the issues, but it never saw the raw thread. The final draft still required research and judgment, but the tool saved time without swallowing the client’s private facts. The lesson was not “never use AI.” The lesson was better: slow down just long enough to avoid doing the irreversible thing quickly.
The same “almost-paste” discipline applies outside law practice too, especially when a business is responding to incidents where data breach response steps can determine whether a bad afternoon becomes a public disaster.
Vendor Review for Solos: Questions to Ask Before You Trust the Tool
Vendor review sounds like something that happens in a glass conference room with three binders and a person named Brent. Solos do not need theater. They need a short, serious review that answers the questions most likely to matter.
NIST’s AI Risk Management Framework is designed to help organizations manage AI risks and think about trustworthiness, governance, measurement, and risk controls. A solo lawyer does not need to become a standards engineer, but the mindset helps: identify the risk, measure what you can, manage it, and document the choice.
Data Use Questions
- Does the vendor train models on user prompts or uploaded documents?
- Can vendor staff review prompts, outputs, or files?
- Are inputs shared with subprocessors?
- Can data be deleted on request?
- What is the retention period?
Security Questions
- Is encryption used in transit and at rest?
- Is multi-factor authentication available?
- Can access be limited by user or matter?
- Are audit logs available?
- Does the vendor provide security documentation, such as SOC 2 reporting or comparable controls?
Contract Questions
- Does the vendor promise confidentiality?
- Do the terms address professional-services use?
- What happens when the account is closed?
- Are limitations of liability extreme?
- Does the vendor’s marketing match the contract?
For solos evaluating platforms that touch signatures, approvals, or client records, it is worth thinking about how real-time e-signature compliance monitoring handles auditability, because AI tools need the same kind of paper trail energy.
Quote-Prep List: What to Gather Before Comparing AI Tools
- Your top 3 intended use cases
- The most sensitive information you might process
- Your required retention and deletion preferences
- Your budget range for monthly or annual plans
- Your state ethics guidance and court-rule concerns
Neutral action: Compare vendors against your actual matters, not their prettiest demo.
FAQ
Can a solo lawyer use AI tools for client work?
Yes, but the use must be controlled. Generic drafting, administrative support, and redacted issue organization may be reasonable in many settings. Raw client facts, privileged strategy, protected documents, and court-facing legal claims require much stronger caution, verification, and sometimes client consent.
Can using AI waive attorney-client privilege?
It depends on the facts, the tool, the disclosure, the jurisdiction, and the applicable rules. The safer approach is to avoid entering privileged communications or identifiable strategy into tools unless you have reviewed confidentiality protections and the use is appropriate for the matter.
Is anonymizing client information enough?
Not always. A prompt can identify a client through rare facts, exact dates, specific locations, unusual injuries, transaction amounts, direct quotes, or niche industry details. Good anonymization removes the identity trail, not just the client’s name.
Do I need client consent before using AI?
Sometimes. Consent may be needed when AI use involves confidential client information, materially affects representation, or creates risks the client should understand. Check your jurisdiction’s rules, the tool’s terms, the client agreement, and the sensitivity of the matter.
Can I use AI to summarize discovery?
Only with careful safeguards. Discovery may include privileged, confidential, personal, proprietary, or protective-order material. Use secure tools, check court orders, limit access, document the process, and verify the output before relying on it.
Can AI draft legal briefs?
AI may help with structure or brainstorming, but every legal assertion, citation, quote, and procedural rule must be independently verified by the lawyer. Submitting unverified AI-generated authority is one of the clearest danger zones.
Can I bill clients for AI-assisted work?
You can generally bill for your time, legal judgment, review, and responsibility, subject to fee reasonableness and your agreement. Do not bill for time you did not actually spend, as if AI had not reduced the work.
What is the safest first AI use for a solo lawyer?
Start with generic, non-client-specific tasks: checklists, blog outlines, intake form structure, plain-language explanations of public concepts, internal workflow templates, and formatting help. Build confidence where confidentiality risk is low.
Next Step: Build a 15-Minute AI Risk Gate Today
The goal is not to become anti-AI. The goal is to stop making irreversible information decisions casually. A solo lawyer can use AI well, but only if the front door has a lock.
Create One Rule Before You Create One Prompt
Write this sentence and place it wherever you make technology decisions:
No identifiable client information goes into an AI tool unless I have reviewed the tool, confirmed safeguards, considered consent, and verified the output.
Then Make Three Lists
- Approved tools: Which AI tools may be used in your practice?
- Approved use cases: What tasks are allowed?
- Prohibited information: What must never be entered?
Save One Redaction Template
Create a reusable template with placeholders for client, opposing party, jurisdiction, date range, event sequence, dollar range, document type, and legal issue. This turns redaction from a heroic act into a habit.
- One rule prevents impulsive prompts.
- Three lists define your boundaries.
- One template makes safer use faster.
Apply in 60 seconds: Copy the quoted rule into your notes app now.
Conclusion: Make AI Boring, Useful, and Documented
Remember the opening scene: the tired solo lawyer, the late-night prompt box, the client email that looked harmless until it was not. The fix is not panic. It is design.
AI tools for solos can be valuable when they help you organize thought, improve clarity, draft faster, and reduce administrative drag. But the work has to pass through a gate: classify the task, redact the input, review the tool, verify the output, communicate when needed, and document the choice.
The highest-risk AI habit is not experimentation. It is unexamined convenience. The safest habit is not fear. It is a short checklist used every time, even when the day is loud and the deadline is breathing on your neck.
Within the next 15 minutes, create your three lists: approved tools, approved use cases, and prohibited information. Then write one redaction template. That small act gives your AI workflow a spine.
And if your broader practice includes AI writing, client-facing content, or automation, keep the same discipline when reviewing AI writing tools: the best tool is not the loudest one, but the one you can safely explain, verify, and control.
Last reviewed: 2026-04.