TikTok is blocking influencers from creating paid political content before midterms – here’s how social media platforms are preparing


TikTok unveiled a strategy for fighting misinformation ahead of the midterm elections, including banning paid political content from influencers. The move follows Facebook and Twitter, which also outlined steps to prepare for the November election after facing a barrage of criticism for fueling the spread of false information in previous elections.


TikTok will launch an election hub this week to "connect people" with "authoritative information" about voting in more than 45 languages, which the company will link to via tags, TikTok chief security officer Eric Han wrote in a blog post.

TikTok is expanding its 2019 ban on paid political advertising to also cover paid content from influencers, after the company struggled during the 2020 election to educate influencers on the rule, Han said.

The plans come a day after Facebook's parent company, Meta, briefly shared some of its preparations for the election, including banning "political, election and social advertisements" the week before voting begins on Nov. 8, as it did in 2020.

Twitter, meanwhile, said last week it would strengthen the "Civic Integrity Policy" it launched in 2020, which includes not recommending or amplifying "harmful" and "misleading" information, and will create new labels for such content, including links to credible information.


TikTok’s new strategy comes three days after the New York Times reported the platform has the potential to become an “incubator” for misinformation during the midterm elections, according to interviews with researchers who monitor the spread of misinformation online. TikTok’s recommendation algorithm, short video length and millions of users could all contribute to the spread of misinformation, while videos containing false allegations of potential voter fraud in November have already reached many users, the Times reported.

Key context

Social media companies have launched a series of new measures in the wake of the 2020 election, as misinformation and false claims of voter fraud quickly spread online. Several platforms, including Twitter, Facebook, Instagram and YouTube, moved to block former President Donald Trump from posting after the January 6 riot at the U.S. Capitol.

Despite these new policies, social media posts containing false claims of voter fraud remain prevalent, experts say, the Washington Post reported last week, and companies have faced criticism from experts and lawmakers for not taking more action to quell this misinformation. Twitter sparked controversy earlier this year when the company said it had stopped enforcing its Civic Integrity Policy — which would sometimes suspend or ban users for spreading false information about the 2020 election — in March 2021. Other experts, meanwhile, worry that TikTok’s video and audio content will be harder to moderate than text-only posts.

Twitter said this week it has had some success with its new policies: the company said the labels — which it first introduced last year — have helped reduce replies to tweets containing misinformation by 13%, and have reduced retweets and likes of those tweets by 10% and 15%, respectively.

Further reading

TikTok bans paid videos of political influencers ahead of US midterms (Bloomberg)

Twitter turns on Election Politics app for US midterms (CNN)

On TikTok, electoral disinformation thrives ahead of midterms (New York Times)

US midterms bring little change to social media companies (Associated Press)