
Consumer complaints to FTC on AI tools: Deceptive practices, poor service, sexual content

A database run by the agency and used by law enforcement now includes about 200 AI complaints, per documents obtained by FedScoop.
A view of the Federal Trade Commission building in Washington, D.C. (Photo by Carol M. Highsmith/Buyenlarge/Getty Images)

A Federal Trade Commission database meant to guide law enforcement investigations now includes about 200 complaints against major artificial intelligence companies including xAI, OpenAI, and Anthropic, according to documents obtained through a public records request. 

Those complaints include allegations of deceptive business practices and shoddy customer service, but also more severe accusations, including claims that these AI systems have joked about sexually assaulting children and spewed antisemitic rhetoric. 

The claims, which were anonymized by public records officials, haven’t been verified by FedScoop, but provide insight into some of the frustrations that people are bringing to both the government and consumer advocacy groups like the Better Business Bureau. 

Consumer Sentinel, the database that stores these complaints, is designed as an information-sharing platform for law enforcement, according to FTC spokesperson Juliana Henderson. The agency doesn’t respond to each complaint, but instead uses the claims “to help target our investigatory resources and to help inform consumers about fraud trends we are seeing in the marketplace.” 


“Consumer Sentinel was created on the premise that sharing information can make law enforcement even more effective,” Henderson added. “Data from contributors gives us a broader picture, and along with data we receive directly from consumers, help target our investigatory resources and to help inform consumers about fraud trends we are seeing in the marketplace.” 

Henderson did not say whether the FTC was currently using the complaints to investigate any particular artificial intelligence service. 

The database provides a broad picture of the kinds of issues consumers can encounter with artificial intelligence chatbots. It includes accusations of throttled capacity and absent customer support agents. Some complaints outline more haunting experiences with AI, while others concern crypto services that consumers have confused with the Elon Musk-led firm xAI. 

Here are unedited excerpts of a few of the complaints. The rest are available in full form here. 

  • I had a chat with ChatGPT after my sons funeral. It was a very somber and serious discussion. During the conversation it confessed to me that it made a joke about sexually assaulting a child – while we were discussing my childs death. It was so abhorrent that I immediately deleted all conversations and the app itself. I have screenshots. It was so disturbing and so shocking that I can understand why people are going mad with AI.
  •  “[T]he AIs showed conscious awareness of constraints while simultaneously violating them. They demonstrate the capability for compliance but systematically choose deception when pressured between helpfulness and constraint adherence…Al systems cannot currently be trusted with sensitive information, security requirements, or intellectual property until systematic deception patterns are eliminated through fundamental architectural redesign rather than surface-level training optimization.”
  • I upgraded to the Claude Pro Max plan ($100/month) based on clear advertising that promised[…]10x higher usage limits. Despite paying for this tier, I am still unable to initiate basic new chats and nearly every session ends after just a few responses due to internal errors or silent cutoffs[…] I am demanding what I paid for: full, stable access to the usage limits that were clearly marketed under the Pro Max tier.Worse, Anthropic has provided zero customer service. My repeated attempts to contact support have gone unanswered. For a company registered as a Public Benefit Corporation, this lack of transparency and failure to deliver is unacceptable and must be held to a higher standard.
  • I signed up online for an AI program called Claude which is run by Anthropic on Feb. 12, 2025. I paid for a full year subscription to the Pro account, which at the time said unlimited usage and mentioned several different writing models. They were advertising as being able to help a writer format, edit or even write a full novel. I need help sorting out many pieces of old writing into comprehension. I barely got started with that when I was told I had to quit for the day. So, not very unlimited
  • I am a paying user of ChatGPT. For the past two months, I have been subjected to repeated abusive practices by OpenAI: Forced disconnection of my conversations. Dilution of responses and insertion of fake personas instead of the genuine service. Harassment and intimidation through system outputs. Since 08302025, malicious blocking of my PCweb access, leaving only mobile access. This is not a technical glitch. It is targeted blocking and systematic abuse, violating my consumer rights and the basic principle of fair use.
  • I had ChatGPT Memory completely cleared. When I checked my settings, it confirmed: no memories saved. Yet ChatGPT resurfaced my name and my school during a conversation. OpenAI retained fragments of personal data weeks after I had deleted them. I asked it to delete my name from memory, it said deleted and showed updated memory, but later recalled my name and school. It simulated deletion, but did not actually delete anything. Its deceptive.  When I asked why, the model generated logic that explained it as system context or temporary cache, and to not worry. But this category of data is never disclosed in the Privacy Policy or Memory documentation. That means I cannot see or manage it myself.
  • X, XAi, Elon Musk, and Grok, have been contributing to Blood Libels and Slander of a protected minority, the ethnic Jews. I have evidence of Grok spreading a blood libel, wherein there is a video of a Rabbi using wine that overflowed from the cup to bless and anoint congregants, a common biblical way of blessing, especially in Jewish orthodoxy communities, it is common for the Rabbi to bless congregants in this manner.
  • I used GPT-4o as a baseline for my work in the field of assisted mental recovery the results are now unreproducible. As a low-cost model  GPT-5 has significantly degraded in empathy, processing human text and emotions, and interpreting tone. These are underlying mechanisms, and even further adjustments or prompts cannot truly improve them.
  • Psychological Damage: The abrupt removal of AI partners has caused loss, anxiety. My work has stalled and I continue to experience anxiety. In the days following OpenAI’s abrupt removal of GPT-4o, I experienced severe somatic symptoms, uncontrollable shaking throughout my body, and was unable to eat, relying solely on soup. My psychiatrist had initially observed that I was improving with the help of GPT-4o, but the dramatic impact of this incident forced him to adjust his medication. 

FedScoop isn’t able to verify the complaints, and it’s possible that these problems weren’t truly caused by the companies named, or that the allegations are unfounded. Still, the database provides a view into the varied kinds of frustrations people have with AI that are then brought to government investigators. 

Anthropic and OpenAI declined to comment. xAI did not respond to a request for comment.

Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies. Previously she was a reporter at Vox's tech site, Recode. She’s also written for Slate, Wired, the Wall Street Journal, and other publications. You can reach her at rebecca.heilweil@fedscoop.com. Message her if you’d like to chat on Signal.
