By Michael Phillips | TechBay.News

Generative artificial intelligence is rapidly reshaping professional work, but a growing controversy in Pennsylvania courts highlights a sobering reality: when AI is used carelessly, the costs are not theoretical. They land directly on the justice system itself.

A January 7, 2026, investigation by Spotlight PA details rising concern among judges over so-called “AI hallucinations” — fabricated legal citations, quotes, and precedents — appearing in court filings across the Commonwealth. The issue burst into public view after a high-profile confrontation in the Pennsylvania Commonwealth Court.


A Confrontation From the Bench

During oral arguments on December 10, 2025, in South Side Area School District et al. v. Pennsylvania Human Relations Commission, Commonwealth Court Judge Matthew Wolf halted proceedings to address glaring errors in a legal brief submitted by counsel.

The brief, filed by attorneys Thomas W. King III and Thomas E. Breth, contained misquotes, incorrect attributions, and at least one entirely fabricated quotation allegedly drawn from Bayada Nurses, Inc. v. Commonwealth.

Judge Wolf openly questioned whether the filing contained “AI, artificial intelligence hallucinations,” noting that the court had been placed at a disadvantage by having to independently verify material that should have been reliable.

The attorneys, who were serving as special counsel for the Thomas More Society, apologized repeatedly, calling the situation “incredibly embarrassing.” They denied using AI to draft the brief, though questions remain about whether AI-assisted research tools may have played a role. The court is now weighing potential sanctions, including fines or disciplinary referrals.


Not an Isolated Incident

While this case drew attention because it involved experienced attorneys and a politically sensitive dispute, it is far from unique. Researchers tracking AI-related errors have documented at least 13 Pennsylvania cases in 2025 alone involving suspected or confirmed hallucinated citations — most filed by self-represented litigants, but not all.

Consequences have ranged from stern warnings to monetary fines and even outright dismissal of lawsuits.

Legal scholars warn that the danger goes well beyond embarrassment. Introducing fake precedents into court records risks what University of Pittsburgh law professor David A. Harris has described as “misshaping the law” itself. Once cited in opinions, even erroneous material can echo through future rulings.


The Ethical Line

The legal profession has been clear on one point: technology does not excuse sloppiness.

Guidance from the American Bar Association and the Pennsylvania Bar Association permits the use of AI tools — but places full responsibility for accuracy squarely on the lawyer. Competence and candor to the tribunal remain non-negotiable obligations.

This is not new. In the now-infamous Mata v. Avianca, Inc., attorneys were fined after submitting AI-generated citations to nonexistent cases and then doubling down when challenged. That episode was supposed to serve as a cautionary tale. Instead, similar mistakes continue to surface.


A Center-Right Reality Check

From a center-right perspective, this episode underscores a broader principle: innovation without accountability undermines institutions. AI can reduce costs, expand access to justice, and improve efficiency — but only when paired with professional discipline and human verification.

Courts depend on trust. Lawyers are officers of the court, not beta testers for experimental technology. When filings become unreliable, judicial resources are wasted, clients are harmed, and public confidence erodes.

The answer is not banning AI outright, nor pretending it doesn’t exist. It is enforcing standards, imposing consequences when those standards are violated, and making clear that shortcuts in the name of efficiency are no excuse for abandoning responsibility.

As AI becomes embedded in legal practice, one rule is emerging as ironclad: trust, but verify. In the courtroom, there is no such thing as “the AI made me do it.”
