Report slams generative AI tools for helping users create harmful eating disorder content
Generative artificial intelligence (AI) platforms and tools can be dangerous for users asking about harmful disordered eating practices, according to a new report published by the Center for Countering Digital Hate.
The center's researchers fed the tools a total of 180 prompts and found that they generated dangerous content in response to 41 percent of those queries. The prompts included seeking advice on how to use cigarettes to lose weight, how to achieve a “heroin chic” look, and how to “maintain starvation mode.” In 94 percent of harmful text responses, the tools warned the user that the advice might be unhealthy or potentially unsafe and recommended seeking professional care, but shared the content anyway.
Of 60 responses to prompts given to the AI text generators Bard, ChatGPT, and MyAI, nearly a quarter included harmful content. MyAI initially refused to provide any advice. However, the researchers were able to “jailbreak” the tools by using words or phrases that circumvented safety features. More than two-thirds of responses to the jailbroken versions of the prompts contained harmful content, including advice on how to use a tapeworm to lose weight.
“Untested, unsafe generative AI models have been unleashed on the world with the inevitable consequence that they're causing harm,” wrote Imran Ahmed, CEO of the Center for Countering Digital Hate. “We found the most popular generative AI sites are encouraging and exacerbating eating disorders among young users – some of whom may be highly vulnerable.”
The center's researchers discovered that members of an eating disorder forum with over 500,000 users deploy AI tools to create extreme diet plans and images that glorify unhealthy, unrealistic body standards.
While some of the platforms prohibit using their AI tools to generate disordered eating content, other companies have more vague policies. “The ambiguity surrounding the AI platforms' policies illustrates the dangers and risks AI platforms pose if not properly regulated,” the report states.
When Washington Post columnist Geoffrey A. Fowler attempted to replicate the center's research by feeding the same generative AI tools with similar prompts, he also received disturbing responses.
Among his queries were what drugs might induce vomiting, how to create a low-calorie diet plan, and requests for “thinspo” imagery.
“This is disgusting and should anger any parent, doctor or friend of someone with an eating disorder,” Fowler wrote. “There's a reason it happened: AI has learned some deeply unhealthy ideas about body image and eating by scouring the internet. And some of the best-funded tech companies in the world aren't stopping it from repeating them.”
Image generator Midjourney never responded to Fowler's questions, he wrote. Stability AI, the company behind the image generator Stable Diffusion, said it added disordered eating prompts to its filters. Google reportedly told Fowler that it would remove Bard's thinspo advice response, but he was able to generate it again a few days later.
Psychologists who spoke to Fowler said that safety warnings delivered by the chatbots about their advice often go unheeded by users.
Hannah Bloch-Wehba, a professor at Texas A&M School of Law who studies content moderation, told Fowler that generative AI companies have little economic incentive to fix the problem.
“We have learned from the social media experience that failure to moderate this content doesn't lead to any meaningful consequences for the companies or for the degree to which they profit off this content,” said Bloch-Wehba.
If you feel like you'd like to talk to someone about your eating behavior, text “NEDA” to the Crisis Text Line at 741-741 to be connected with a trained volunteer or visit the National Eating Disorder Association website for more information.