(The Verge) In the spring of 2022, Twitter considered making a radical change to the platform. After years of quietly allowing adult content on the service, the company would monetize it. The proposal: give adult content creators the ability to begin selling OnlyFans-style paid subscriptions, with Twitter keeping a share of the revenue.
Had the project been approved, Twitter would have risked a massive backlash from advertisers, who generate the vast majority of the company’s revenues. But the service could have generated more than enough to compensate for losses. OnlyFans, the most popular by far of the adult creator sites, is projecting $2.5 billion in revenue this year — about half of Twitter’s 2021 revenue — and is already a profitable company.
Some executives thought Twitter could easily begin capturing a share of that money since the service is already the primary marketing channel for most OnlyFans creators. And so resources were pushed to a new project called ACM: Adult Content Monetization.
Before the final go-ahead to launch, though, Twitter convened 84 employees to form what it called a “Red Team.” The goal was “to pressure-test the decision to allow adult creators to monetize on the platform, by specifically focusing on what it would look like for Twitter to do this safely and responsibly,” according to documents obtained by The Verge and interviews with current and former Twitter employees.
What the Red Team discovered derailed the project: Twitter could not safely allow adult creators to sell subscriptions because the company was not — and still is not — effectively policing harmful sexual content on the platform.
“Twitter cannot accurately detect child sexual exploitation and non-consensual nudity at scale,” the Red Team concluded in April 2022. The company also lacked tools to verify that creators and consumers of adult content were of legal age, the team found. As a result, in May — weeks after Elon Musk agreed to purchase the company for $44 billion — the company delayed the project indefinitely. If Twitter couldn’t consistently remove sexually exploitative content involving children from the platform today, how would it even begin to monetize porn?
Launching ACM would worsen the problem, the team found. Allowing creators to begin putting their content behind a paywall would mean that even more illegal material would make its way to Twitter — and more of it would slip out of view. Twitter had few effective tools available to find it.
Taking the Red Team report seriously, leadership decided it would not launch Adult Content Monetization until Twitter put more health and safety measures in place.
The Red Team report “was part of a discussion, which ultimately led us to pause the workstream for the right reasons,” said Twitter spokeswoman Katie Rosborough.
But that did little to change the problem at hand — one that employees from across the company have been warning about for over a year. According to interviews with current and former staffers, as well as 58 pages of internal documents obtained by The Verge, Twitter still has a problem with content that sexually exploits children. Executives are apparently well-informed about the issue, and the company is doing little to fix it.
“Twitter has zero tolerance for child sexual exploitation,” Twitter’s Rosborough said. “We aggressively fight online child sexual abuse and have invested significantly in technology and tools to enforce our policy. Our dedicated teams work to stay ahead of bad-faith actors and to help ensure we’re protecting minors from harm — both on and offline.”
While the Red Team’s work succeeded in delaying the Adult Content Monetization project, nothing the team discovered should have come as a surprise to Twitter’s executives. Fifteen months earlier, researchers working on the team tasked with making Twitter more civil and safe sounded the alarm about the weak state of Twitter’s tools for detecting child sexual exploitation (CSE) and implored executives to add more resources to fix it.
“While the amount of CSE online has grown exponentially, Twitter’s investment in technologies to detect and manage the growth has not,” begins a February 2021 report from the company’s Health team. “Teams are managing the workload using legacy tools with known broken windows. In short (and outlined at length below), [content moderators] are keeping the ship afloat with limited-to-no-support from Health.”
Employees we spoke to reiterated that despite executives knowing about the company’s CSE problems, Twitter has not committed sufficient resources to detect, remove, and prevent harmful content from the platform.
Part of the problem is scale. Every platform struggles to manage the illegal materials users upload to the site, and in that regard, Twitter is no different. The platform, a critical medium for global communication with 229 million daily users, has the content moderation challenges that come with operating any large space on the internet and the added struggle of outsized scrutiny from politicians and the media.
But unlike larger peers, including Google and Facebook, Twitter has suffered from a history of mismanagement and a generally weak business that has failed to turn a profit for eight of the past 10 years. As a result, the company has invested far less in content moderation and user safety than its rivals. In 2019, Mark Zuckerberg boasted that the amount Facebook spends on safety features exceeds Twitter’s entire annual revenue.
Meanwhile, the system that Twitter heavily relied on to discover CSE had begun to break.
For years, tech platforms have collaborated to find known CSE material by matching images against a widely deployed database called PhotoDNA. Microsoft created the service in 2009, and though it is accurate in identifying CSE, PhotoDNA can only flag known images. By law, platforms that search for CSE are required to report what they find to the National Center for Missing and Exploited Children (NCMEC), a government-funded nonprofit that tracks the problem and shares information with law enforcement. An NCMEC analysis cited by Twitter’s working group found that of the 1 million reports submitted each month, 84 percent contain newly discovered CSE — none of which would be flagged by PhotoDNA. In practice, this means Twitter is likely failing to detect a significant amount of illegal content on the platform.
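The limitation described above follows from how hash matching works: uploaded media is compared against a database of fingerprints of previously reported material, so an image no one has reported before produces no match. PhotoDNA itself is a proprietary perceptual hash that tolerates re-encoding and resizing; the sketch below uses an ordinary cryptographic hash as an illustrative stand-in, and the "database" is hypothetical:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Illustrative stand-in for a perceptual hash. PhotoDNA is
    # proprietary and matches near-duplicates; SHA-256 only matches
    # byte-identical files, but the lookup logic is the same.
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of fingerprints of previously reported material.
known_fingerprints = {fingerprint(b"previously-reported-image-bytes")}

def is_known_material(data: bytes) -> bool:
    # Flags only content whose fingerprint is already in the database.
    return fingerprint(data) in known_fingerprints

# A previously reported image matches; newly created material does not,
# which is why hash matching alone cannot catch the 84 percent of
# reports involving new content.
print(is_known_material(b"previously-reported-image-bytes"))  # True
print(is_known_material(b"never-seen-before-image-bytes"))    # False
```

Catching new material instead requires classifiers trained to recognize it directly — the kind of automated systems the report says Twitter's larger rivals deploy.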
The 2021 report found that the processes Twitter uses to identify and remove CSE are woefully inadequate — largely manual at a time when larger companies have increasingly turned to automated systems that can catch material that isn’t flagged by PhotoDNA. Twitter’s primary enforcement software is “a legacy, unsupported tool” called RedPanda, according to the report. “RedPanda is by far one of the most fragile, inefficient, and under-supported tools we have on offer,” one engineer quoted in the report said.
Twitter devised a manual system to submit reports to NCMEC. But the February report found that the system was so labor-intensive it created a backlog of cases to review, delaying many instances of CSE from being reported to law enforcement.
The machine learning tools Twitter does have are mostly unable to identify new instances of CSE in tweets or live video, the report found. Until February 2022, there was no way for users to flag content as anything more specific than “sensitive media” — a broad category that meant some of the worst material on the platform often wasn’t prioritized for moderation. In one case, an illegal video was viewable on the platform for more than 23 hours, even after it had been widely reported as abusive.
“These gaps also put Twitter at legal and reputation risk,” Twitter’s working group wrote in its report.
Rosborough said that since February 2021, the company has increased its investment in CSE detection significantly. She noted that it currently has four open positions for child safety roles at a time when Twitter has slowed down its pace of hiring.
Earlier this year, NCMEC accused Twitter of leaving up videos containing “obvious” and “graphic” child sexual abuse material in an amicus brief submitted to the Ninth Circuit in John Doe #1 et al. v. Twitter. “The children informed the company that they were minors, that they had been ‘baited, harassed, and threatened’ into making the videos, that they were victims of ‘sex abuse’ under investigation by law enforcement,” the brief read. Yet, Twitter failed to remove the videos, “allowing them to be viewed by hundreds of thousands of the platform’s users.”
This echoed a concern of Twitter’s own employees, who wrote in a February report that the company, along with other tech platforms, has “accelerated the pace of CSE content creation and distribution to a breaking point where manual detection, review, and investigations no longer scale” by allowing adult content and failing to invest in systems that could effectively monitor it.
To address the issue, the working group called on Twitter executives to work on a series of projects. The group recommended that the company finally build a single tool to process CSE reports, collect and analyze related data, and submit reports to NCMEC. It should create unique fingerprints (called hashes) of the CSE it finds and share those fingerprints with other tech platforms. And it should build features to protect the mental health of content moderators, most of whom work for third-party vendors, by blurring the faces of abuse victims or de-saturating the images.
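One of the recommendations above — de-saturating images to reduce moderators' exposure to graphic material — amounts to converting each pixel to grayscale before display. A minimal sketch of that transformation, using the common ITU-R BT.601 luma weighting (the pixel format here is a simplified illustration, not Twitter's tooling):

```python
def desaturate(pixels):
    # De-saturate an image represented as a list of (r, g, b) tuples
    # by replacing each pixel with its luma-weighted gray value
    # (ITU-R BT.601 coefficients), a standard grayscale conversion.
    out = []
    for r, g, b in pixels:
        gray = int(0.299 * r + 0.587 * g + 0.114 * b)
        out.append((gray, gray, gray))
    return out

# A pure-red pixel becomes a mid-dark gray; color information is gone
# but shapes and context remain reviewable.
print(desaturate([(255, 0, 0), (0, 0, 0)]))
```

In production tooling this would run on the image buffer itself (and face blurring would use a detector plus a blur filter), but the principle is the same: strip the most distressing visual detail while keeping the content reviewable.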
But even in 2021, before the company’s tumultuous acquisition by Musk began, the working group acknowledged that mustering the necessary resources would be a challenge.
“The task of ‘fixing’ CSE tooling is daunting,” they wrote. “[The Health team]’s strategy should be to chip away at these needs over time starting with the highest priority features to avoid the too-big-to-prioritize trap.”
The project may have been too big to prioritize after all. Aside from enabling in-app reporting of CSE, there appears to have been little progress on the group’s other recommendations. One of the research teams that had been most vocal about fixing Twitter’s CSE detection systems has been disbanded. (Twitter’s Rosborough says the team has been “refocused to reflect its core purpose of child safety” and has had dedicated engineers added to it.) Employees say that Twitter’s executives know about the problem, but the company has repeatedly failed to act.
The years-long struggle to address CSE ran into a competing priority at Twitter: greatly increasing its user and revenue numbers. In 2020, the activist investor Elliott Management took a large position in Twitter in an effort to force out then-CEO Jack Dorsey. He survived the attempt, but to remain as CEO, Dorsey made three hard-to-keep promises: that Twitter would increase its user base by 100 million people, speed up revenue growth, and gain market share in digital advertising.
Dorsey quit as CEO in November 2021, having made little progress toward reaching those milestones. It was left to his hand-picked successor, former chief technology officer Parag Agrawal, to fulfill Elliott’s demands.