Are Lawyers Still Worth Their Billable Hours When AI Can Do It Faster?
As AI rewrites legal workflows, billing models and ethical standards are under pressure to catch up.
Artificial intelligence has entered the legal world not as a novelty but as a new kind of colleague. Machines now draft contracts, summarize case law, and assemble discovery notes before the first client call of the morning. Many firms view the shift as progress: faster research, lower costs, and cleaner documents produced in record time. Yet every advance comes with a hidden cost that the invoice does not show.
Hallucinated citations, phantom rulings, biased algorithms, and privacy breaches have already surfaced in real courtrooms. Judges are responding with sanctions, fee reductions, and new ethical boundaries around billing for machine-assisted work. The technology that promised to streamline law practice is now forcing lawyers to explain, line by line, what counts as labor and what belongs to the algorithm.
Ethical Ground Rules: The ABA Draws Its Lines
In July 2024, the American Bar Association released Formal Opinion 512, its first guidance on AI use in law. It reminds lawyers that under Model Rule 1.5, all fees must be “reasonable.” Billing for time spent using or reviewing AI output is allowed, but billing for time spent learning the tool is not.
The Opinion also warns that efficiency gains must benefit the client, not the firm’s profit margin. If AI cuts a 10-hour task to one, the bill must reflect that. The ABA allows firms to pass along AI subscription costs only with prior client consent, reinforcing transparency as a professional duty. So far, however, AI cost savings are rarely being passed on to clients.
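The reasonableness math behind that rule is simple enough to sketch. A hypothetical illustration, with all rates and hours invented for this example, of how a bill should shrink when AI compresses the work while verified human review still bills at full rate:

```python
# Hypothetical illustration of "reasonable fee" math under Model Rule 1.5.
# All rates, hours, and cost figures are invented for this sketch.

HOURLY_RATE = 400.0  # attorney's standard rate (hypothetical)

def reasonable_fee(human_hours: float,
                   ai_review_hours: float,
                   ai_subscription_cost: float = 0.0,
                   client_approved_passthrough: bool = False) -> float:
    """Bill only actual human time; pass through AI costs only with consent."""
    fee = (human_hours + ai_review_hours) * HOURLY_RATE
    if client_approved_passthrough:
        # ABA Opinion 512: subscription costs require prior client consent.
        fee += ai_subscription_cost
    return fee

# A task that once took 10 human hours now takes 1 hour of AI-output review:
before = reasonable_fee(human_hours=10, ai_review_hours=0)
after = reasonable_fee(human_hours=0, ai_review_hours=1)
print(before, after)  # 4000.0 400.0
```

The point of the sketch is the asymmetry: the fee tracks verified human time, not the value of the output, and any AI cost pass-through is gated on consent rather than folded silently into the rate.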
Several state bars echoed these points. The D.C. Bar’s Ethics Opinion 388 (April 2024) noted that lawyers cannot charge a second client full rate for recycled AI-generated research. The Florida Bar’s Opinion 24-1 went further, emphasizing confidentiality and banning “double billing” when AI delivers measurable time savings.
Courts Begin Cutting Bills That Lean on AI
The warning turned real in February 2024, when the Southern District of New York issued a sharp rebuke in Cuddy Law Firm v. New York City Department of Education. The firm cited ChatGPT in a fee petition to justify its hourly rates, but Judge Paul Engelmayer called the argument “utterly and unusually unpersuasive” and cut the request nearly in half (Reuters | ABA Journal | Greenberg Traurig Law Alert | Law360).
The ruling was more than symbolic. Fee petitions often set market norms, and this one signaled that AI cannot validate its own worth. Lawyers still need verifiable data, market surveys, affidavits, or expert declarations to justify rates. The case made clear that convenience is not a defense when billing reaches the bench.
Sanctions and Fee Shifting: When AI Gets Expensive
Courts have shown even less patience when AI goes rogue. In Mata v. Avianca (2023), two New York attorneys filed a brief with six nonexistent cases generated by ChatGPT. They were fined $5,000 and publicly reprimanded for relying on an unchecked machine (Reuters | CBS News | SSRN | Washington Post).
By 2025, sanctions had become creative and costly. In Puerto Rico, two lawyers in a FIFA-related dispute were fined $24,400 for submitting more than 50 false citations and saw their hourly rates cut before opposing counsel’s fees were awarded (Reuters | Bloomberg Law | Law360). In Nevada, a judge gave attorneys the choice of paying a fine or publishing an article about AI’s risks (Reuters).
Even modest penalties can wipe out client revenue. Courts increasingly order fee-shifting, forcing sanctioned lawyers to cover their opponents’ costs. What began as an efficiency tool has become, for some, the most expensive shortcut in the legal profession.
When Mistakes Cost Firms Real Money
The ripple effect is not limited to law. In October 2025, Deloitte Australia refunded part of a government contract after an AI-generated report included fabricated quotes and flawed data. The incident had nothing to do with litigation, but everything to do with billing integrity. Clients are making it clear that “AI error” is not a free pass.
The Deloitte case set a commercial precedent. Professionals cannot charge for work built on false information, even if the machine did the lying. That message resonates for lawyers, accountants, and consultants alike. In industries built on trust, automation does not dilute accountability; it concentrates it.
Global Ripples: The UK, Canada, and Beyond
Other countries are moving faster than the United States to impose consequences. In June 2025, a British High Court judge warned that lawyers who cite fake AI-generated authorities could face contempt charges or criminal prosecution for perverting justice (Reuters). British courts now require lawyers to certify that all citations have been verified by a human.
In Canada, bar associations are preparing pre-emptive guidance. The Law Society of British Columbia’s Technology Task Force has begun studying how AI affects professional responsibility and billing fairness. The common thread across jurisdictions is unmistakable. Lawyers remain liable for every word they submit, no matter who or what drafted it.
Billing Models Under Pressure
AI’s rise is undermining the logic of the billable hour. If a task that once took 10 hours now takes two, clients will demand new pricing models. Some firms are experimenting with flat fees or hybrid rates, charging lower amounts for AI-assisted drafting but maintaining full rates for human review and judgment. Analysts at the Thomson Reuters Legal Blog and the University of Washington Law Review have noted that automation is forcing firms to reimagine time-based billing, with many clients now expecting value-based pricing.
Others see an opportunity to rebuild client trust. By separating “AI time” from “attorney time,” firms can demonstrate transparency and efficiency in a single step. It also creates a defensible paper trail if a court later reviews how a bill was calculated. A recent SSRN study argues that the billable hour can survive only if lawyers clearly define and document human oversight in AI workflows. The ABA Law Practice Magazine reports that firms adopting AI oversight logs and alternative fee arrangements are already seeing stronger client retention.
Checklist: AI-Safe Billing Practices
• Disclose AI use in engagement letters and client communications.
• Track verification time: what was checked, by whom, and when.
• Do not bill clients for learning or experimenting with tools.
• Pass along AI subscription costs only with prior approval.
• Verify every citation before filing; never trust a raw output.
• Keep human oversight at every step.
• Revisit rates and fee structures regularly to reflect time saved.
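The tracking items above lend themselves to a simple structured record. A minimal sketch of an AI-oversight verification log, where the field names and structure are my own invention rather than any bar-mandated schema:

```python
# Minimal sketch of an AI-oversight billing log.
# Field names and structure are hypothetical, not a bar-mandated format.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class VerificationEntry:
    citation: str          # what was checked
    reviewer: str          # by whom
    verified: bool         # result of the human check
    checked_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))  # when

@dataclass
class MatterLog:
    matter_id: str
    ai_tool_disclosed: bool                  # disclosed in the engagement letter
    entries: list = field(default_factory=list)

    def record(self, citation: str, reviewer: str, verified: bool) -> None:
        self.entries.append(VerificationEntry(citation, reviewer, verified))

    def unverified(self) -> list:
        """Citations that should not reach a filing until a human clears them."""
        return [e.citation for e in self.entries if not e.verified]

# Usage with invented matter and case names:
log = MatterLog(matter_id="2025-CV-0001", ai_tool_disclosed=True)
log.record("Smith v. Jones, 123 F.3d 456", reviewer="A. Associate", verified=True)
log.record("Doe v. Roe, 999 F.4th 1", reviewer="A. Associate", verified=False)
print(log.unverified())  # ['Doe v. Roe, 999 F.4th 1']
```

A log like this is exactly the kind of defensible paper trail courts and clients are starting to expect: it records disclosure, names the human reviewer, timestamps each check, and flags anything that has not yet survived human verification.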
Ethical billing is not just compliance; it is credibility. The firms that survive this transition will be those that treat AI as an assistant, not a scapegoat.
Key Cases and Rulings to Watch
Cuddy Law Firm v. New York City Department of Education (S.D.N.Y., Feb. 22, 2024) Judge Paul Engelmayer cut the firm’s requested fees by half after it cited ChatGPT to justify its rates. He called the argument “utterly and unusually unpersuasive.” (Reuters | ABA Journal)
Mata v. Avianca (S.D.N.Y., June 2023) Two lawyers were fined $5,000 for filing briefs with six nonexistent cases fabricated by ChatGPT, the first major U.S. sanction tied to AI misuse. (Reuters | CBS News)
FIFA Licensing Dispute (D.P.R., Sept. 2025) A Puerto Rico judge fined two attorneys $24,400 for using more than 50 fake citations and lowered their hourly rates before awarding fees to the opposing side. (Reuters)
Butler Snow Disqualification (N.D. Ala., July 2025) Three lawyers were removed from a case after citing AI-generated material they had not verified. The court referred the matter to the state bar. (AP News)
UK High Court Warning (June 2025) A senior British judge warned that citing fake AI-generated authorities could amount to contempt or criminal conduct. The statement marked the toughest stance yet on AI in legal work. (Reuters)
Morgan & Morgan Sanctions (2024) Three attorneys from Morgan & Morgan were sanctioned by a federal court after submitting filings containing AI-generated “hallucinated” case citations. The court found the lawyers failed to meet their duty of reasonable inquiry and verification before submitting documents to the court. (National Law Review)
$6,000 Sanction for AI-Generated Fake Citations (2025) A federal court sanctioned a lawyer $6,000 for submitting AI-generated citations that did not exist, reaffirming that reliance on unverified output constitutes misconduct. (Bloomberg Law)
Colorado Attorneys Sanctioned for 30 False Citations (2025) Two Colorado attorneys were sanctioned under Rule 11 for submitting a brief containing 30 AI-generated false citations, a warning that technology ignorance is no defense. (Barnes & Thornburg LLP)
Utah Court of Appeals Sanctions (May 2025) A Utah lawyer was reprimanded after filing a brief containing fake ChatGPT precedent. The state appellate court said the submission “undermines the integrity of judicial proceedings.” (Salt Lake Tribune)
Appellate Duty-to-Detect Case (Sept. 2025) An appellate ruling imposed a $10,000 sanction for two AI-tainted briefs and raised a new issue: whether attorneys have an affirmative duty to detect opponents’ AI-generated errors. (LawNext)
The Broader Question: What Is Expertise Worth?
As AI becomes standard, clients are asking a fundamental question: what are they paying for? If a program can draft a motion in 30 seconds, why does the lawyer’s bill still reflect three hours?
The answer lies in judgment, the uniquely human skill of knowing which cases matter, which do not, and when to push back against the machine. A Reuters report on Fennemore LLP’s AI-driven pricing strategy notes that firms integrating human oversight metrics into billing are finding clients more receptive to premium rates for verified accuracy.
Courts already recognize that distinction. The lawyer’s role is no longer about producing text but about ensuring truth. In the age of AI, billing is becoming a test of integrity as much as efficiency. Firms that disclose, verify, and adapt will thrive. Those that hide behind algorithms will eventually pay for it, literally and professionally. As one arXiv study observed, “the credibility of an invoice may soon depend as much on data verification as on professional judgment.”
My Take
Billing will evolve with AI; in fact, it already is. The hourly model will not disappear, especially for complex matters whose scope cannot be predicted. This includes most litigation, with an exception in the criminal realm, where the process for many cases, such as DUIs, is straightforward.
However, AI will have a considerable impact on more routine transactional legal services. Lawyers already compete with websites that churn out contracts, wills, separation agreements, and the like. I suspect AI will cut transactional law costs many times over in the next few years, so much so that flat-rate billing or productized legal services will proliferate.
Billing models are slow to change because change is risky for law firms. The hourly model is a sure thing: rates are set so the firm earns a profit. In a free-market system, however, transactional law firms that fail to evolve may one day find themselves crushed by more progressive competitors embracing flat-rate fees.
What do you think? Leave a comment below.
Sources
ABA Journal | ABA Law Practice Magazine | ABA Litigation News | AP News | Barnes & Thornburg LLP | BBC | Bloomberg Law | Canadian Lawyer | CBS News | D.C. Bar | Financial Times | Law360 | LawNext | Law Society of British Columbia | Legaltech News | National Law Review | Reuters | Salt Lake Tribune | SSRN | Thomson Reuters | University of Washington Law Review | The Guardian | The Recorder | The Florida Bar | The Verge | Washington Post
Disclosure
This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, sanctions, and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.