Two new studies show social media glorifies content that can perpetuate eating disorders.
By Emily Zhou
Eating disorders are a persistent public health problem, one that has resisted long-standing and often desperate attempts at treatment and prevention. They come in three main forms: anorexia (self-starvation), binge eating, and bulimia (eating followed by purging).
People with eating disorders often struggle with low self-esteem and dependence on external approval. These preexisting struggles make them extremely susceptible to manipulation. And one of the prime spots where they encounter such manipulation is the place where we all live our lives: the internet.
In a study published in Frontiers in Public Health in January 2025, Claudia Ruiz-Centeno, a researcher at the University of Malaga, Spain, and her colleagues qualitatively analyzed content that glorifies anorexia (self-starvation) and bulimia (eating followed by purging). They searched across six social media platforms, revealing that coded language and positive sentiment toward thinness reinforce disordered eating behaviors among young users.
“This is of great interest to healthcare professionals to understand how this type of content affects patients with eating disorders and how they manage to access it,” says Ruiz-Centeno.

Researchers and campaigners have been concerned for more than a decade about the influence of eating-disorder content on social media, and have pressured platforms to moderate it. In response, platforms have implemented strategies to enforce community standards and temper the emotional dynamics of online discussions. In reality, however, users slip into cleverly coded language to bypass the platforms' safety systems, enabling entire ecosystems of hashtags, clusters, and communities to persist undetected beneath the surface of seemingly compliant content.
Ruiz-Centeno and her team collected the content, categorized it into four main themes, and illustrated word frequency in these conversations through a word cloud. The study revealed that people who frame eating disorders as lifestyle choices favor words such as “calories,” “thin,” “fasting,” and “hungry.”
The 2022 Global Burden of Disease study recognized eating disorders as a leading mental health problem in high-income countries. Researchers counted 14 million cases but theorized there might be three times as many. Ruiz-Centeno’s team and other authors argue that hypothesis is correct: that there are nearly 42 million cases, including millions among children and adolescents, which remain invisible in official counts. Experts in the field note that diagnosed cases are merely “the tip of the iceberg,” with a vast and shifting burden hidden beneath.
Given the size of the problem, platforms have struggled to create effective moderation. Some have attempted boosting hashtags and communities that elevate health content.
A preprint published in December 2024 by Kristina Lerman and colleagues reports that platforms host clusters of “healthy living” or “body positivity” content, but diverge sharply when it comes to eating disorders. TikTok connects eating-disorder hashtags to recovery tags like #edrecovery and #recoveryispossible. Twitter, by contrast, links users into dense cul-de-sacs of toxicity. Hashtags like #thinspo, #bonespo, and #deathspo intertwine with subcultures glorifying self-harm and suicide, driving vulnerable users toward destructive behaviors.
Lerman’s team conducted their analysis across Twitter/X, TikTok, and Reddit using cutting-edge emotion detection models to recognize cues in online language and map them to discrete emotions ranging from love to disgust. Toxicity was measured in a similar fashion, through a model designed to quantify toxicity levels in digital interactions across multiple categories.
However, the specific limitations of each platform’s online safety policy must be considered when interpreting the results and their applicability in broader contexts. X and Reddit, characterized by weaker moderation, higher toxicity, and more intense emotional expression, rely heavily on self-regulated trigger warnings. Additionally, the study used two different emotion detection models, one for TikTok and Twitter and another for Reddit, which could affect consistency across platforms.
TikTok’s reported violations rose roughly 42-fold from 2023 to 2024. By cracking down on restricted content and adding support services, the app made hashtags such as #ana, a tag for content glorifying anorexia, ineligible for search, instead automatically redirecting users to a mental health awareness page with resources and guidance. Since TikTok’s algorithm amplifies content tailored to user engagement, however, the study cannot measure how the platform’s feed might expose vulnerable users to even more harmful material.
Both Ruiz-Centeno’s and Lerman’s research make it clear that digital pathways are slippery. A young person searching for diet advice may stumble onto hashtags that lead to pro-ana networks. Algorithms designed to reward engagement may, without intention, push them further into a dangerous rabbit hole of harmful media.
Community bonds may appear supportive and uplifting, but often they are the very pressures that bind someone to their disorder, suffocating them with it. It is here, in the tension between belonging and harm, where researchers insist that public health experts need to look more closely. Eating disorders are not only biological or psychological; they are digital, cultural, and communal. Between the hypnotic glow of screens and the risk of an algorithm spiraling from an oblivious like on a “healthy recipe” TikTok, the border between risk and harm becomes almost invisible.
“I’m worried that kids are not going to turn to social media for dieting advice, they’re going to turn to AI chatbots for advice,” says Lerman. “So, my concern is, are these chatbots going to basically be good for them, or are they going to be bad for them? So that’s what we’re trying to study right now as well.” The open-ended nature of AI chatbots such as ChatGPT is precisely what intrigues Lerman, as her team probes AI’s ability to extract and structure patterns in the content users feed it.