Your Chat History is the New Witness: Why US Lawyers are Warning About Your AI Prompts
April 16 (Reuters) – As people increasingly turn to artificial intelligence for advice, some U.S. lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line.
These warnings became more urgent after a federal judge in New York ruled this year that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities fraud charges against him.
In the wake of the ruling, attorneys have been advising that conversations with chatbots like Anthropic’s Claude and OpenAI’s ChatGPT could be demanded by prosecutors in criminal cases or by litigation adversaries in civil cases.
“We are telling our clients: You should proceed with caution here,” said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim.
People’s discussions with their lawyers are almost always deemed confidential under U.S. law. But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private.
In emails to clients and advisories posted on their websites, more than a dozen major U.S. law firms have outlined advice for people and companies to decrease the chances of AI chats winding up in court.
Similar warnings are also appearing in hiring agreements by some firms with their clients. For instance, New York-based firm Sher Tremonte stated in a recent client contract that sharing a lawyer’s advice or communications with a chatbot could erase the legal protection known as attorney-client privilege that usually shields communications between lawyers and their clients.
A JUDICIAL RULING
The case that helped set off the alarm bells involved Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficient (BENF.O). Heppner was charged by federal prosecutors last November with securities and wire fraud, and pleaded not guilty.
WHAT THE RULING MEANS
The concerns follow a ruling by U.S. District Judge Jed Rakoff in New York, who ordered Heppner to hand over documents generated by Anthropic’s Claude chatbot. Heppner had used Claude to prepare materials for his legal defense, but prosecutors argued that those exchanges were not protected.
Rakoff agreed, stating that no attorney-client relationship exists “or could exist, between an AI user and a platform such as Claude.” The court also noted that users may not expect privacy in chatbot interactions. Lawyers said this creates risks for clients who rely on AI tools: unlike conversations with lawyers, which are generally confidential, sharing legal details with chatbots could weaken those protections.
However, not all courts have taken the same position. In a separate case, U.S. Magistrate Judge Anthony Patti ruled that a litigant did not need to turn over her ChatGPT conversations, treating them as personal work product. “ChatGPT and other generative AI programs are tools, not persons,” the judge wrote.