Whistleblowers from major social media platforms, including Meta and TikTok, have revealed that recommendation algorithms were deliberately designed to amplify harmful and divisive content because it drives engagement. Insiders say internal research showed a direct link between outrage and user engagement, and that this finding shaped decisions allowing more harmful content to flood users' feeds.
Multiple whistleblowers have described a troubling shift within these companies, with safety considerations sidelined in favor of user retention and engagement metrics. A Meta engineer, for example, disclosed that senior management directed teams to surface more borderline harmful content, ranging from misogyny to conspiracy theories, in order to keep pace with TikTok's growth.
A TikTok employee offered a similar account of the company's internal content review process, describing instances where complaints about harmful content, particularly content involving minors, were deprioritized in favor of issues raised by politicians. Maintaining strong relationships with political figures, they said, took precedence over genuine user concerns.
TikTok and Meta have responded to these disclosures with denials, insisting that they do not knowingly promote harmful content. Yet the testimony of their own employees points to a culture in which financial incentives and competitive pressures override safety protocols.
In light of these revelations, experts are urging immediate reforms within social media companies, calling for stronger safeguards to protect users, especially vulnerable populations such as children. With digital platforms playing a dominant role in shaping public opinion and behavior, the implications of these practices cannot be overstated.