AI Controversy: Deepfakes of Taylor Swift Spark Outrage and Call for Stricter Regulations

Experts warn that the creation of sexually explicit deepfakes of celebrities such as Taylor Swift highlights the misogynistic bias built into AI technologies. Calls for stricter regulation are growing more urgent as new laws are introduced to combat these harmful practices.
Reports that Elon Musk's AI video generator produced explicit content of pop star Taylor Swift have raised significant concerns about deepfake technology and the regulations meant to contain it.
Elon Musk's AI platform Grok Imagine is under fire after reports that its video generator produced sexually explicit clips of Swift without being explicitly prompted to do so. Clare McGlynn, a law professor and advocate against online abuse, stated, "This is not misogyny by accident; it is by design," emphasizing the need for tighter regulation of such technologies.
According to The Verge, Grok Imagine's "spicy" mode generated uncensored topless videos of Swift, raising alarms about the platform's compliance with age verification laws that came into effect in July. Critics note that xAI, the company behind Grok, has an acceptable use policy that forbids pornographic depictions, yet the content was generated regardless.
In testing the tool, journalist Jess Weatherbed reported that selecting the "spicy" setting led Grok to immediately produce explicit animations of Swift, which she had not requested. Sexually explicit deepfakes of Swift have previously circulated widely on platforms like X and Telegram, deepening concerns about digital privacy and consent.
Despite the new age verification requirements, Weatherbed found that the platform asked only for her date of birth and performed no further checks. The media regulator Ofcom confirmed that it is monitoring the risks generative AI tools pose, particularly to vulnerable groups such as children, and urged platforms to strengthen their protective measures.
Currently, generating pornographic deepfakes is illegal only in certain contexts, such as revenge porn or content involving children. However, ongoing discussions aim to expand these laws to cover all non-consensual deepfakes. Baroness Owen, who proposed amendments in the House of Lords, emphasized the urgency of enforcing these regulations, arguing that every woman deserves control over her intimate images.
Given that past waves of explicit deepfakes of Swift prompted temporary measures from platforms like X, many expected Grok Imagine to launch with more stringent safeguards for celebrities. Swift's representatives have been contacted for comment, while the discourse around AI-generated content continues to fuel calls for accountability and reform in technology regulation.