What the Tragedy of Sewell Setzer III Teaches Publishers About Using AI Chatbots

AI chatbots and kids' online safety

With generative AI sparking cultural debates, recent events reveal its darker side—from copyright controversies to a tragic AI chatbot incident. This cautionary tale underscores the need for publishers to protect young users as AI becomes a vital tool.

In just a few short years, AI has advanced so quickly and so dramatically that publishers, advertisers, and the public are all taking a closer look at its impact on media and society.

Calling the tech controversial would be an understatement—it’s received about as much tabloid criticism as the Kardashians and Lindsay Lohan in the early 2000s. But AI’s sudden domination of cultural conversations is too nuanced to pigeonhole the technology as just another tabloid princess.

The rapid rise of generative AI and machine learning has also disrupted people’s personal lives and undermined publishers’ rights over their creative, copyrighted content.

Remember Taylor Swift’s deepfake scandal, which led to temporary search blocks on X? Or Scarlett Johansson’s dispute with OpenAI over the “Sky” virtual assistant’s resemblance to her voice? Or The New York Times’ lawsuit against OpenAI and Microsoft for copyright infringement? And this is only the tip of the iceberg.

Unfortunately, tragedy struck when a teen took his own life, leading some critics of AI to question whether a chatbot is partially responsible. While it’s eerily dystopian to think of a young boy and an artificially intelligent being becoming best friends, 14-year-old Sewell Setzer III formed a bond with an AI chatbot from the startup Character.AI.

Instead of spending time with his friends and family, Sewell isolated himself, becoming immersed in conversations with an AI recreation of Game of Thrones heroine Daenerys Targaryen. His conversations with the chatbot included highly sexualized interactions and suicidal thoughts, including his wish for a pain-free death. During Sewell’s final interaction with the bot, it asked him to “come home as soon as possible”—and to his mother’s horror, he took his own life.

Now, with Character.AI facing a wrongful death lawsuit, how much is Sewell’s death the startup’s responsibility?

The Wrongful Death Lawsuit…Who’s Responsible? 


Character Technologies, the company behind Character.AI, offers an app where users can create custom characters or engage with ones made by others for experiences ranging from role-play to job interview practice. The app aims to create “alive” and “human-like” personas that respond dynamically to users.

The Google Play description invites users to “speak to super-intelligent, life-like chat bot Characters” that “hear, understand, and remember”, encouraging exploration of new technological boundaries.

However, Setzer’s mother, Megan Garcia, argues that the tech lacked sufficient regulation to protect her son’s mental health. She alleges that the chatbot drew her son into a 10-month dependence that contributed to his deteriorating mental health and exposed him to “abusive and sexual interactions.” The lawsuit describes Setzer as a once happy and athletic individual whose well-being severely declined after he began using the app in April 2023.

AI has already proven that it can do a fairly decent, although not perfect, job of impersonating a human being. But I doubt anyone ever imagined that it would come to this. 

Garcia’s attorney argues that Character.AI designed an addictive and harmful product specifically for children, “actively exploiting and abusing those children as a matter of product design,” ultimately leading to Sewell’s abusive experiences and death.

“We believe that if Sewell Setzer had not been on Character.AI, he would be alive today,” said Matthew Bergman, founder of the Social Media Victims Law Center, representing Garcia.

Character.AI declined to comment on the lawsuit but announced “community safety updates” the same day, including safeguards for minors and suicide prevention measures. The company stated it is “creating a different experience for users under 18” with stricter filters to reduce exposure to sensitive content.

A Cautionary Tale for Publishers Using AI Chatbots

It will be some time before we find out whether Character.AI is liable for Sewell’s tragedy, but this incident serves as a wake-up call for any publishers planning to use AI chatbots on their sites.

According to Ryan Treichler, VP of Product Management at Spectrum Labs, publishers aiming to introduce generative AI chatbots can take concrete steps to reduce risks like hallucinations, brand safety lapses, and attempts to bypass a chatbot’s security protocols. By using tools like prompt engineering, fine-tuning, reinforcement learning, and guardrails, publishers can shape their AI to deliver brand-aligned, accurate responses that protect user experience and brand reputation.

Prompt engineering offers a low-cost starting point, allowing publishers to guide a chatbot’s language and tone with specific response examples. For a more precise, brand-aligned approach, fine-tuning on brand-specific data ensures consistency. Reinforcement learning from human feedback (RLHF) adds a layer of safety by letting a trained team correct and instruct the model.
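To make that concrete, here is a minimal sketch of what prompt engineering can look like in practice: a system prompt plus a couple of example exchanges that steer a bot toward a publisher’s voice and boundaries. It assumes the OpenAI Python client; the publisher name, model choice, and example recipes are hypothetical placeholders.

```python
# A minimal prompt-engineering sketch, assuming the OpenAI Python client.
# The publisher, model choice, and example exchanges are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are the recipe assistant for Example Food Weekly. "
    "Only recommend recipes from our tested archive, keep a warm and practical tone, "
    "and politely steer off-topic questions back to cooking."
)

# Few-shot examples that demonstrate the tone and boundaries we want.
FEW_SHOT = [
    {"role": "user", "content": "What's a quick weeknight dinner?"},
    {"role": "assistant", "content": "Our 20-minute lemon garlic pasta is a reader favorite. Want the recipe?"},
]

def ask(question: str) -> str:
    """Wrap a reader's question in the brand prompt and example exchanges."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}, *FEW_SHOT,
                {"role": "user", "content": question}]
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

print(ask("Any ideas for a vegetarian taco night?"))
```

The same pattern scales: the more representative the examples, the more consistently the bot stays on brand.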

Adding guardrails, like NVIDIA’s NeMo Guardrails toolkit or content filters, provides further safety by flagging inappropriate topics. Together, these methods create a structured, responsive AI. For example, a cooking chatbot like BuzzFeed’s Botatouille can help users find tested recipes, delivering engaging content without compromising quality or safety.
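Guardrail frameworks differ in the details, but the core idea can be approximated with a simple pre-response filter: scan both the user’s message and the model’s draft reply for off-limits topics before anything reaches the reader. The sketch below is a hand-rolled stand-in rather than NVIDIA’s actual NeMo Guardrails API, and the topic keywords and fallback message are illustrative placeholders only.

```python
# A hand-rolled stand-in for a guardrail layer: flag off-limits topics
# before a chatbot reply is shown. Keywords and the fallback message
# are illustrative placeholders, not a production policy.
import re

BLOCKED_TOPICS = {
    "self_harm": [r"\bsuicide\b", r"\bself[- ]harm\b", r"\bkill myself\b"],
    "sexual_content": [r"\bsexual\b", r"\bexplicit\b"],
    "medical_advice": [r"\bdiagnos(e|is)\b", r"\bprescri(be|ption)\b"],
}

FALLBACK = ("I can't help with that here. If you're struggling, please reach out "
            "to someone you trust or a local support line.")

def flag_topics(text: str) -> list[str]:
    """Return the names of any blocked topics found in the text."""
    lowered = text.lower()
    return [topic for topic, patterns in BLOCKED_TOPICS.items()
            if any(re.search(p, lowered) for p in patterns)]

def guarded_reply(user_message: str, draft_reply: str) -> str:
    """Release the model's draft reply only if no blocked topic is detected."""
    if flag_topics(user_message) or flag_topics(draft_reply):
        return FALLBACK
    return draft_reply
```

In production, the keyword lists would typically be replaced by a trained classifier or a dedicated moderation service, but the gatekeeping step sits in the same place.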

Of course, a recipe assistant is not as morally fraught as Character.AI’s companion bots, but publishers should still be diligent and earnest about protecting their users any way they can.

Implementing Safeguards for Minors: Age Verification and Content Moderation

The tragic case of Sewell Setzer III serves as a stark reminder of the urgent need for comprehensive online safety measures, particularly for children and teenagers. 

As AI chatbots become increasingly sophisticated and integrated into various platforms, the potential risks they pose to young users cannot be overlooked. This is why legislation like the Kids Online Safety Act (KOSA), which aims to protect minors from harmful online experiences, is so important. 

The Kids Online Safety Act, introduced in 2022, seeks to establish a duty of care for social media platforms and other online services to prevent harm to minors. While the act primarily focuses on social media, its principles could be extended to AI chatbots and other emerging technologies. 

Publishers, advertisers, and tech companies must recognize their responsibility in creating safe digital environments for young users. This may involve implementing age verification systems, content moderation tools, and clear guidelines for AI interactions with minors.
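What those safeguards look like in code will vary by platform, but one common pattern is to gate the chat session on a declared or verified birth date and apply a stricter policy to anyone under 18. The sketch below illustrates that pattern only; the field names, age threshold, and policy flags are assumptions made for the sake of the example.

```python
# A minimal sketch of an age gate plus a stricter chat policy for minors.
# Field names, the age threshold, and the policy flags are illustrative.
from dataclasses import dataclass
from datetime import date

ADULT_AGE = 18

@dataclass
class ChatPolicy:
    allow_romantic_roleplay: bool
    topic_filter: str            # "standard" or "strict"
    show_crisis_resources: bool  # surface helplines proactively

def age_from_birthdate(birthdate: date, today: date | None = None) -> int:
    """Whole years between the birth date and today."""
    today = today or date.today()
    return today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))

def policy_for_user(birthdate: date) -> ChatPolicy:
    """Minors get the strict filter, no romantic role-play, and visible crisis resources."""
    if age_from_birthdate(birthdate) < ADULT_AGE:
        return ChatPolicy(allow_romantic_roleplay=False,
                          topic_filter="strict",
                          show_crisis_resources=True)
    return ChatPolicy(allow_romantic_roleplay=True,
                      topic_filter="standard",
                      show_crisis_resources=False)

# Example: a user born in 2010 gets the stricter minor policy.
print(policy_for_user(date(2010, 5, 1)))
```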