
Piercing Tax Attorney Confidentiality Privilege by Using AI



Introduction


The integration of Large Language Models (LLMs) into tax compliance and advisory workflows has accelerated rapidly, raising fundamental questions regarding accuracy, regulatory alignment, and professional responsibility. At the same time, a significant federal ruling, United States v. Heppner (S.D.N.Y., Feb. 10, 2026), brings privilege and confidentiality risks to the forefront of AI-assisted legal advice: documents generated through a consumer version of Anthropic's Claude AI were held not to be protected by the attorney-client privilege or the work product doctrine under the circumstances presented. The decision reinforces the importance of using only properly secured AI tools with confidential or privileged information, and of leaving decisions about AI use in the privileged context to those who best appreciate the risks involved: lawyers.




Summary of the Facts


In United States v. Heppner, a criminal defendant used a consumer, non-enterprise version of Anthropic’s Claude AI to research legal issues after receiving a grand jury subpoena. Without direction from counsel, he input information learned from his attorneys and generated reports outlining defense strategy. These AI-generated materials were later shared with counsel. Judge Jed Rakoff held that neither attorney-client privilege nor the work-product doctrine applied. The court grounded its reasoning in traditional privilege principles: disclosure to a third party under circumstances undermining confidentiality results in waiver, and an AI tool is not an attorney. The platform’s terms allowed the provider to access and use user data, defeating any reasonable expectation of confidentiality. The court further held that because counsel did not direct the AI use, work-product protection did not attach.



The Risks


As Angela Yip and Jason Fong point out, a major risk in the use of LLMs is the "fluency–accuracy" distortion effect: users frequently infer correctness from stylistic confidence. While artificial intelligence can imitate legal discourse with apparent confidence, that fluency does not make the output correct or precise. The problem is not only hallucinations that create fake data, but the models' limited capacity to understand the overall circumstances surrounding legal norms and the intrinsic role those norms play in our society. In addition, in my opinion, overreliance may stem from the fact that these systems can be biased toward providing the answer the user most wants to hear.


The Heppner ruling extends these findings into the evidentiary sphere. The court determined that use of a consumer-grade AI tool (and it is, after all, a tool) under terms permitting provider access to user data constitutes disclosure to a third party, eliminating confidentiality. It emphasized that privilege turns on communication with a lawyer, not a software system. Subsequent sharing of AI-generated outputs with counsel does not retroactively create privilege. The court suggested, without deciding, that a different analysis might apply if AI use were directed by counsel under a Kovel-type arrangement, but left that question open.


Strategic Insight


The convergence of empirical performance limitations and privilege doctrine materially alters the risk profile of AI-assisted tax advisory.


  1. First, substantive accuracy risk and privilege risk now intersect. An AI-generated memorandum may be both technically flawed and discoverable. This dual vulnerability is particularly acute in tax controversy, cross-border restructuring, and criminal exposure contexts.

  2. Second, as the court suggested, the confidentiality risk may be mitigated through a Kovel-type arrangement expressly structured to address AI use.



How MCORE Can Help


MCORE advises clients at the intersection of tax law, regulatory compliance, and emerging technology. We assist in reviewing AI governance frameworks, drafting and updating Kovel agreements, structuring attorney-directed AI workflows, and aligning enterprise AI deployments with privilege-preservation standards.



References


  1. Angela Yip & Jason Fong, Responsible AI in Tax Filing: Legal and Ethical Challenges of LLM-Based Assistants, Frontiers in Artificial Intelligence Research, Vol. 2, Issue 1 (2025).

  2. United States v. Heppner, No. ___ (S.D.N.Y. Feb. 10, 2026) (Rakoff, J.).

  3. Margaret A. Dale et al., “Recent Federal Privilege Ruling Related to AI Tools Has Implications for Routine Tax Advisor Arrangements,” Feb. 20, 2026.

 
 
 
