The term ‘Taylor Swift AI’ was trending in various regions, with one post gaining more than 45 million views before it was eventually removed.
Sexually explicit AI-generated images of Taylor Swift have been circulating on X (formerly Twitter) over the last day in the latest example of the proliferation of AI-generated fake pornography and the challenge of stopping it from spreading.
One of the most prominent examples on X attracted more than 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks before the verified user who shared the images had their account suspended for violating platform policy. The post was live on the platform for around 17 hours prior to its removal.
But as users began to discuss the viral post, the images began to spread and were reposted across other accounts. Many still remain up, and a deluge of new graphic fakes has since appeared. In some regions, the term “Taylor Swift AI” surfaced as a trending topic, promoting the images to wider audiences.
X’s policies regarding synthetic and manipulated media and nonconsensual nudity both explicitly ban this kind of content from being hosted on the platform. X has not responded to our request for comment.
Swift’s fan base has criticized X for allowing many of the posts to remain live for as long as they have. In response, fans have flooded the hashtags used to circulate the images with messages promoting real clips of Swift performing, in an effort to bury the explicit fakes.
The incident speaks to the very real challenge of stopping deepfake porn and AI-generated images of real people. Some AI image generators have restrictions in place that prevent nude, pornographic, and photorealistic images of celebrities from being produced, but many others have no such safeguards. The responsibility of preventing fake images from spreading often falls to social platforms — something that can be difficult to do under the best of circumstances, and even harder for a company like X that has hollowed out its moderation capabilities.
The company is currently being investigated by the EU regarding claims that it’s being used to “disseminate illegal content and disinformation” and is reportedly being questioned regarding its crisis protocols after misinformation about the Israel-Hamas war was found being promoted across the platform.