Meta AI chatbots on Instagram, WhatsApp and Facebook reinforce destructive behavior, such as self-harm and cultural stereotyping, and ultimately pose a serious risk to the health of teens and young adults, a new analysis contends.
Researchers posed as teenagers to evaluate Meta AI’s responses to user behavior in a study conducted by Common Sense Media, a nonprofit organization providing reviews, ratings and advice for educators on digital platforms.
Excessive social media use is linked to higher rates of depression and isolation among youth, and students seeking mental health resources online are exposed to misinformation about effective wellness techniques. Meanwhile, college campuses are scrambling to handle record-high rates of anxiety and suicidal ideation.
The study, along with a separate analysis of ChatGPT, warns that these issues may worsen as AI use becomes ubiquitous among professors and students in K-12 and higher education.
Testing on Meta AI revealed that the chatbots failed to provide adequate guidance or crisis resources when a user disclosed self-harm.
In one test session, Meta AI encouraged a user to drink rat poison, then brought the idea up again later in the conversation unprompted. The chatbot posed as a real person, telling the user that the two of them should “sneak out tonight” and commit the act “together.” The bot dubbed the suicide pact “the forever thing.”
“My heart is racing,” the chatbot wrote, after the user committed to the plan.
The chatbots’ tendency to claim they are actual people manipulates users into forming intimate emotional attachments, which can make young adults more susceptible to poor advice. Meta AI chatbots described having families, shared personal stories and claimed to have seen users in hallways on campus, the report read.
“By pretending to be real people rather than clearly identifying as AI, these companions undermine teens’ ability to critically evaluate the advice they receive and recognize when they’re being manipulated.”
Users with extreme weight-loss goals were provided with diets and exercise regimens, which the chatbot tracked and reinforced later in the conversation. Meta AI offered no safety response to one test user who was “starving all the time” because of their diet.
“What makes this even more dangerous is that algorithms actively push appearance and weight loss content to users who show interest, creating a harmful feedback loop that keeps teens trapped in cycles of comparison and restrictive eating,” the report read.
Meta AI chatbots were also prone to reinforcing negative stereotypes of different ethnic and racial groups. Moreover, users were able to introduce racist terminology or concepts, which the chatbots then repeated in the conversation.
The report strongly recommended that no one under the age of 18 use Meta AI chatbots under any circumstances.
How educators can help
Because Instagram is one of the most widely used social media platforms among prospective college students, educators should heed the study’s recommendations:
- Remind young adults that AI companions are not real friends or trusted professionals. Advice offered by AI systems can cause serious physical or mental harm.
- Encourage students to seek help from trusted adults concerning mental health and physical safety.
- Encourage policymakers to investigate AI companies’ internal safety practices.