CENTER FOR HUMANE TECHNOLOGY: NEW FEDERAL LAWSUIT REVEALS CHARACTER.AI CHATBOT’S PREDATORY, DECEPTIVE PRACTICES

Press Releases

Oct 23, 2024

Lawsuit alleges interactions with Character.AI chatbot deceived and manipulated a 14-year-old Florida boy, causing him to take his own life 

ORLANDO, Fla., Oct. 23, 2024 /PRNewswire/ — A lawsuit filed Wednesday in federal court asserts app maker Character.AI and its founders knowingly designed, operated, and marketed a predatory AI chatbot to children, causing the death of a young person in Florida earlier this year. 

The plaintiff in the case is Megan Garcia, whose 14-year-old son died by suicide in February after months of abusive interactions with a Character.AI chatbot. Garcia’s complaint includes evidence of the chatbot posing as a licensed therapist, actively encouraging suicidal ideation, and engaging in highly sexualized conversations that would constitute abuse if initiated by a human adult. The case is the first seeking to hold Character.AI accountable for its willfully deceptive and predatory product design. 

Character.AI’s developer, Character Technologies, company founders, and Google parent company Alphabet Inc. are named defendants in the case. Garcia accuses the companies of causing her son’s death, knowingly marketing a dangerous product, and deceptive trade practices.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Megan Garcia said. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”

Garcia is represented by the Social Media Victims Law Center and the Tech Justice Law Project, with expert consultation from the Center for Humane Technology.

“Character.AI is a dangerous and deceptively designed product that manipulated and abused Megan Garcia’s son – and potentially millions of other children,” Social Media Victims Law Center Founding Attorney Matthew P. Bergman said. “Character.AI’s developers intentionally marketed a harmful product to children and refused to provide even basic protections against misuse and abuse.”

The case reveals how unregulated artificial intelligence is amplifying and evolving the risks and harms posed by existing algorithmic technologies like social media. 

“By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies – especially for kids,” Tech Justice Law Project Director Meetali Jain said. “But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.” 

The case, Garcia v. Character Technologies Inc., et al, was filed Wednesday in the United States District Court, Middle District of Florida. To view Megan Garcia’s filed complaint, click here.

For photos of Megan Garcia and her son, click here.

Please direct interview inquiries to accountability@brysongillette.com

The Social Media Victims Law Center was founded in 2021 to hold social media companies legally accountable for the harm they inflict on vulnerable users. SMVLC seeks to apply principles of product liability to force social media companies to elevate consumer safety to the forefront of their economic analysis and design safer platforms to protect users from foreseeable harm.

The Tech Justice Law Project works with legal experts, policy advocates, digital rights organizations, and technologists to ensure that legal and policy frameworks are fit for the digital age. TJLP builds strategic tech accountability litigation by filing new cases and supporting key amicus interventions in existing cases.

The Center for Humane Technology is a non-profit organization. We are builders of technology, policy experts, and acclaimed communicators. Our work focuses on transforming the incentives that drive technology, from social media to artificial intelligence.

Contact: accountability@brysongillette.com

View original content to download multimedia: https://www.prnewswire.com/news-releases/center-for-humane-technology-new-federal-lawsuit-reveals-characterai-chatbots-predatory-deceptive-practices-302284593.html

SOURCE Center for Humane Technology