Google Faces Lawsuit From Conservative Over AI Defamation

Robby Starbuck Files Lawsuit Alleging Defamation by Google’s AI

Robby Starbuck, a conservative activist, is suing Google, alleging that the company’s AI systems generated “outrageously false” and defamatory statements about him. The lawsuit, filed in Delaware state court, claims that Google AI products, including Bard and Gemma, circulated the false claims to millions of users.

Starbuck says the AI systems falsely labeled him a “child rapist,” “serial sexual abuser,” and “shooter.” He adds that the false statements appeared in response to user prompts and then spread across the internet, threatening his personal and professional reputation.

Source: Axios

Google Responds, Citing AI Hallucinations as Industry-Wide Issue

Jose Castaneda, a Google spokesperson, said most of the claims stem from AI hallucinations, instances in which large language models (LLMs) fabricate or misstate information. Castaneda added that Google has long been transparent about the problem and has been working to mitigate it since 2023.

“Hallucinations are a common problem for all LLMs, and we work hard to keep them to a minimum,” Castaneda said. “But as everyone knows, you can get a chatbot to say something wrong if you’re creative enough.” Alphabet, Google’s parent company, saw little market reaction after the lawsuit was announced; its shares closed flat on Wednesday.

Starbuck Calls for Transparent and Unbiased AI Regulations

Starbuck, known for his opposition to diversity, equity, and inclusion initiatives, framed the case as a demand for AI accountability and transparency. In a public statement, he said that no one, regardless of political beliefs, should have their reputation damaged by false AI-generated material.

“Now is the time for all of us to demand clear, fair AI that can’t be used as a weapon to hurt people,” Starbuck added. He said the alleged falsehoods caused him emotional distress and left him feeling unsafe, and that recent events involving other conservative activists had heightened his concerns for his safety.


Previous Dispute With Meta Platforms Resolved Earlier This Year

This is not Starbuck’s first AI-related court fight. In April 2025, he filed a similar defamation action against Meta Platforms, alleging that AI-generated outputs falsely linked him to illegal activity. The dispute was settled in August; according to reports, Starbuck agreed to advise Meta on AI governance and content monitoring as part of the settlement.

The Google case, meanwhile, intensifies the debate over AI-generated misinformation, illustrating how inaccurate chatbot answers can damage reputations and spread falsehoods at scale.

Details of the Defamatory Allegations in Google’s AI Outputs

According to the complaint, Starbuck discovered in December 2023 that Google’s Bard chatbot had falsely linked him to white nationalist Richard Spencer, citing fabricated sources. Starbuck says that although he contacted Google to request corrections, the company took no action.

He further alleges that in August 2025, Google’s Gemma chatbot spread false accusations of sexual assault and claimed he was involved in domestic abuse, the January 6 Capitol riot, and the Jeffrey Epstein files, none of which was true. The lawsuit says Bard and Gemma presented fabricated articles and documents to users as if they were genuine sources.

Starbuck’s complaint highlights the emerging legal exposure facing AI developers as generative systems become more widely accessible. Experts say the speed and volume of AI-generated disinformation could make defamation law harder to enforce, particularly when automated systems generate and spread falsehoods on their own.

Al Jazeera reported earlier that Google’s VEO3 AI video generator also drew criticism in 2024 for letting users fabricate videos of real events, showing how AI can distort facts in images as well as text. The complaint seeks $15 million in compensatory and punitive damages, along with changes to Google’s AI governance structure to prevent further disinformation.

Rising Public Concern Over AI Misinformation and Trust

The lawsuit against Google reflects broader concerns about the ethical limits of AI-generated content, particularly misleading or defamatory material. Industry watchdogs emphasize the need for transparent training datasets, strong content moderation, and third-party audits to keep AI products from damaging reputations or distorting political discourse.

Google says its systems include content filters and disclaimers, but incidents like Starbuck’s underscore public distrust of AI. As lawmakers debate how to regulate generative systems, the case could mark a turning point for AI accountability in the US.

IMPORTANT NOTICE

This article is sponsored content. Kryptonary does not verify or endorse the claims, statistics, or information provided. Cryptocurrency investments are speculative and highly risky; you should be prepared to lose all invested capital. Kryptonary does not perform due diligence on featured projects and disclaims all liability for any investment decisions made based on this content. Readers are strongly advised to conduct their own independent research and understand the inherent risks of cryptocurrency investments.
