Social Media Giants Prioritize Engagement Over Safety, Whistleblowers Reveal
Internal documents and testimony offer alarming insight into how Meta and TikTok knowingly allow harmful content to spread on their platforms, prioritizing engagement metrics over user safety.
Whistleblowers from Meta and TikTok, two of the leading social media platforms, have come forward to expose a troubling pattern: algorithm changes aimed at increasing user engagement have, in fact, facilitated the proliferation of harmful content. More than a dozen insiders shared their concerns with the BBC for its documentary, 'Inside the Rage Machine,' shedding light on the trade-off between user safety and corporate profit.
In one of the most striking revelations, a Meta engineer reported receiving instructions from senior management to allow more 'borderline' harmful content, including misogyny and conspiracy theories, in order to compete with TikTok's explosive user growth. According to the engineer, the directive was driven by a need to boost the company's stock price, illustrating a stark conflict between financial gain and user safety.
Likewise, TikTok employees reported being directed to prioritize political issues over user complaints flagging threats to children and other vulnerable groups, a prioritization reportedly aimed at maintaining good relations with political figures rather than at addressing the severe risks posed by harmful content on the platform.
Compounding the problem, Meta launched Instagram Reels in 2020 without adequate safety measures, leading to a rise in bullying, harassment, and hate speech on the platform. Internal research found that content on Reels was significantly more harmful than content on the main Instagram feed.
Despite these disclosures, both companies have denied any wrongdoing. Meta categorically rejected the notion that it amplifies harmful content for financial gain, while TikTok dismissed the whistleblowers' claims as fabricated, insisting that it invests heavily in technology to filter harmful content before it reaches users.
The whistleblower revelations come as social media platforms face growing scrutiny over the mental health and safety of their users. With algorithms now described as 'black boxes,' too complex for even their own engineers to fully understand, the need for regulatory oversight has never been more urgent. These insiders' accounts paint a dire picture of an industry locked in a fierce engagement arms race, with user safety hanging in the balance.