AI offers powerful capabilities for analyzing large volumes of data and automating routine legal tasks. However, it can also produce inaccurate or biased results, making careful oversight essential. Reducing bias depends heavily on the quality and diversity of the training data. Ethical use of AI in legal practice requires strict protection of client confidentiality and clear communication about the technology’s role. AI should enhance rather than replace human judgment, with legal professionals receiving ongoing training and conducting regular reviews. Choosing reliable, law-specific AI tools and routinely auditing their performance helps ensure accuracy, uphold ethical standards, and drive continuous improvement.
Artificial Intelligence Fact Sheet [2025]
AI Terms for Legal Professionals, Lexis Insights, 3/20/23
One Useful Thing Prompt Library (prompts for education & general use)
How Should I Be Using A.I. Right Now? The Ezra Klein Show, 4/2/24 (“Give your A.I. a personality, spend 10 hours experimenting, and other practical tips from Ethan Mollick”)
Key AI technologies used in law include:
Natural Language Processing (NLP): Enables machines to understand and generate human language, useful in legal research and contract analysis (see the brief sketch after this list).
Machine Learning (ML): Allows systems to learn from legal data and improve predictions, such as in litigation analytics.
Large Language Models (LLMs): Models such as ChatGPT or Claude that can generate, summarize, and explain legal texts with varying degrees of accuracy.
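To make the NLP and LLM entries above concrete, here is a minimal sketch of summarizing a contract clause with an off-the-shelf model. It assumes the open-source Hugging Face transformers library is installed; the clause text and length settings are illustrative only, not a recommended workflow.

    # Minimal sketch: condensing a contract clause with a general-purpose
    # summarization model via the Hugging Face "transformers" library.
    # The clause text below is illustrative only.
    from transformers import pipeline

    summarizer = pipeline("summarization")  # loads a default summarization model (downloaded on first use)

    clause = (
        "The Receiving Party shall hold and maintain the Confidential Information "
        "in strictest confidence for the sole and exclusive benefit of the "
        "Disclosing Party and shall not disclose it to any third party without "
        "the Disclosing Party's prior written consent."
    )

    # max_length / min_length bound the length of the generated summary
    result = summarizer(clause, max_length=40, min_length=10)
    print(result[0]["summary_text"])

As with any generative output, the summary would need to be checked against the source clause before anyone relies on it.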
Prompt Engineering Basics (Concise Guide)
While prompt patterns can help shape better outputs from large language models (LLMs), they’re not always required. Effective prompt engineering is about crafting clear, goal-oriented prompts through trial and refinement—there’s no one-size-fits-all approach. A short example prompt appears after the lists below.
Define your goal: Be clear about what you're asking for (e.g., draft a motion, write a memo).
Be specific: Use precise language and include detailed instructions to avoid ambiguity.
Use legal terminology: Specify jurisdiction, party names, key facts, and legal issues to give the LLM full context.
Indicate format: If you want a specific output format (bullets, table, paragraphs), state it clearly.
Include relevant facts: Provide all material facts necessary for accurate, context-aware responses.
Iterate: Refine results by following up or adjusting your prompt as needed.
Clarity – Make prompts clear and concise.
Specificity – Eliminate vagueness; detail improves results.
Context – Provide background to help the AI understand complex tasks.
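Putting the checklist together, the sketch below assembles a structured prompt from these elements. The task, jurisdiction, facts, and format values are hypothetical placeholders and should be adapted to the matter and the tool being used.

    # Illustrative sketch: building a structured legal prompt from the
    # elements above (goal, specificity, jurisdiction, facts, format).
    # All placeholder values are hypothetical.
    def build_prompt(task, jurisdiction, facts, output_format):
        return (
            f"Task: {task}\n"
            f"Jurisdiction: {jurisdiction}\n"
            f"Key facts: {facts}\n"
            f"Output format: {output_format}\n"
            "Cite only authorities you are certain exist, and flag any uncertainty."
        )

    prompt = build_prompt(
        task="Draft a short memo on the enforceability of a two-year non-compete clause",
        jurisdiction="New York",
        facts="Employee signed the clause at hiring; the employer operates only in-state.",
        output_format="Three short paragraphs ending with a one-sentence conclusion",
    )
    print(prompt)

Iterating then means adjusting one of these fields and resubmitting, rather than rewriting the whole prompt from scratch.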
When using AI tools for legal research, it’s important to stay vigilant for hallucinations—confidently stated but false or misleading information. Here are some of the most common types, with examples; a simple first-pass citation check is sketched after them:
AI may invent case names, citations, or rulings that sound plausible but don’t exist.
Example: “In Smith v. Johnson, 542 U.S. 304 (2015), the Supreme Court held that AI-generated contracts are enforceable.”
No such case exists.
The tool might cite a statute that doesn’t exist or has been repealed, or misstate what a real statute actually says.
Example: “Under the Federal Technology and Law Act of 2017, all AI tools must be licensed by the DOJ.” There is no such federal act.
Sometimes the AI borrows rules or standards from the wrong jurisdiction or context.
Example: Applying California’s privacy law analysis to a federal constitutional law issue without clarification. Jurisdictional nuances matter—this can mislead a reader or court.
AI might attribute compelling legal language to a court or judge, but those words were never actually written.
Example: “Justice Ginsburg once wrote, ‘Artificial intelligence must serve justice, not replace it.’” A powerful quote—completely fabricated.
AI may reference decisions allegedly from real courts that never made such rulings.
Example: “The Second Circuit recently struck down biometric surveillance laws in Jones v. U.S. Surveillance Board.” This case does not exist in any docket.
AI may conflate details from different cases with similar titles, leading to incorrect facts or outcomes.
Example: Confusing Miranda v. Arizona (1966) (about custodial interrogation) with an unrelated case involving a party named Miranda. Misleading even though both cases are real.
The AI may assert conclusions without supporting authority or on the basis of flawed interpretations.
Example: “Using AI in legal writing is legally mandatory for firms practicing federal law.” There is no legal mandate requiring AI use in legal practice.
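Because hallucinated authorities often look superficially well-formed, one practical habit is to pull every citation out of AI output and verify each one by hand in a trusted source. The sketch below is an assumed, rough first pass using a simple text pattern; it is not a verification tool, and the sample output text reuses the fabricated citation from the first example above.

    # Rough first-pass sketch: flag case-style citations in AI output for
    # manual verification. The pattern is deliberately loose and may capture
    # extra leading words; every hit still needs to be checked by hand.
    import re

    ai_output = (
        "In Smith v. Johnson, 542 U.S. 304 (2015), the Supreme Court held that "
        "AI-generated contracts are enforceable."
    )

    # Approximate shape: "<Party> v. <Party>, <volume> <reporter> <page> (<year>)"
    citation_pattern = re.compile(
        r"[A-Z][\w.]*(?:\s[A-Z][\w.]*)*\s+v\.\s+[A-Z][\w.]*(?:\s[A-Z][\w.]*)*,"
        r"\s+\d+\s+[A-Za-z.\s]+\d+\s+\(\d{4}\)"
    )

    for citation in citation_pattern.findall(ai_output):
        print("Verify manually:", citation)

In practice, each flagged citation would be checked in a citator or official reporter before it appears in any filing.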
Hallucinating Law: Legal Mistakes with Large Language Models are Pervasive [Stanford, January 2024]
A new study finds disturbing and pervasive errors among three popular models on a wide range of legal tasks.