Advertisers Quit Twitter Over Child Pornography Placements

Dan Meier 29 September, 2022 

Advertisers are leaving Twitter due to their ads appearing alongside tweets trading child pornography, Reuters reported on Wednesday.

According to research by cybersecurity firm Ghost Data, more than 30 brands ended up on the profile pages of accounts soliciting exploitative content, using keywords such as “rape”, “teens” and “13+”.

The brands include Disney, NBCUniversal, Coca-Cola and Scottish Rite Children’s Hospital. Following the revelations, Dyson, Mazda, Forbes and PBS Kids told Reuters they have suspended or removed their marketing campaigns from the social platform.

“Twitter needs to fix this problem ASAP, and until they do, we are going to cease any further paid activity on Twitter,” said a Forbes spokesperson – a sentiment echoed by the other brands.

Hours before the story came out, Twitter warned advertisers it “discovered that ads were running within Profiles that were involved with publicly selling or soliciting child sexual abuse material.”

Twitter spokesperson Celeste Carswell issued a statement saying the company “has zero tolerance for child sexual exploitation” and is investing further resources into child safety, investigating the situation and taking steps to prevent it from happening again.

However, she also downplayed the extent of the issue, arguing that “recent reports about Twitter provide an outdated, moment in time glance at just one aspect of our work in this space, and is not an accurate reflection of where we are today.”

Twitter blames its tools

The latest scandal to hit Twitter lends further credence to Elon Musk’s accusations that the platform is riddled with nefarious accounts, and follows allegations by whistleblower Peiter “Mudge” Zatko that the company’s lax security protocols pose serious risks to users.

Although Reuters points out that the exchange of explicit images is hardly unique to Twitter, the scale of the problem and the company’s lack of action appear staggering. More than 18 months ago, an internal Twitter report concluded that the tech giant’s technology for detecting child exploitation material was insufficient, and that the global business had a backlog of cases to review and potentially report to law enforcement.

“While the amount of [child sexual exploitation content] has grown exponentially, Twitter’s investment in technologies to detect and manage the growth has not,” found the report. Ghost Data said that its team of five researchers with no access to Twitter’s internal resources identified over 500 offending accounts within 20 days.

Evidently the social media giant has been slow to act. Twitter failed to remove 70 percent of the accounts during the 20-day study period, according to Ghost Data. After Reuters shared sample findings with the company, it removed 300 more accounts – but over 100 others remained on the site the following day.

The company’s website says it suspended more than 1 million accounts last year for child sexual exploitation, while the National Center for Missing and Exploited Children said it had received 87,000 reports from Twitter.

These figures are unlikely to appease regulators, concerned citizens or the brands whose ad dollars contribute 90 percent of Twitter’s revenue. If the company does indeed underreport or ignore spam accounts in order to attract advertising, that tactic seems to be backfiring.

With days to go until the social media firm goes to court against Musk – whose case leans heavily on Twitter’s negligent attitude to bots and spam accounts – it would do well to get its house in order.

About the Author:

Reporter at VideoWeek.