Mapping Shadow-Bans on TikTok:
Exposing hidden censorship through cross-national research
* * *
Internal documents obtained by The Intercept showed that TikTok moderators were explicitly instructed to ban content politically sensitive to the Chinese Communist Party, such as posts referring to the Tiananmen Square protests.
The company has declared that these policies are no longer in place; however, no independent study has confirmed this claim.
* * *
The report indicated that TikTok censors content on a range of political and social topics, including LGBTQ+ issues and discussion of various political controversies and leaders.
Specifically, the research found that shadow-banning and censorship vary with country-specific content guidelines.
How can we spot banned and shadow-banned content, and how can we automate the detection of these forms of censorship?
Which criteria does the platform apply to censor content, and which of them are location-sensitive?
Is the algorithm neutral toward political content: which content is promoted or demoted on the platform? To answer this question, we could focus on the upcoming French presidential election.
A browser extension to passively scrape TikTok and collect the algorithm's outputs (such as the videos suggested on the "For You" page and in search results).
Guardoni.js bots to automate the data collection.
Network analysis with Gephi to visualize relations across queries.
Data analysis with Python to quantify those relations.
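The pipeline above could be sketched as follows. This is a minimal, hypothetical example: the data model (`(country, query, video_id)` tuples) and all names are assumptions, not TikTok's API or Guardoni's actual output. It flags videos returned for a query in one country but absent in another (a possible shadow-ban signal) and exports a query/video graph as GEXF for inspection in Gephi.

```python
# Hypothetical sketch: field names and data layout are assumptions,
# not TikTok's API or Guardoni's real output format.
from collections import defaultdict
import xml.etree.ElementTree as ET

# (country, query, video_id) tuples as collected by the extension/bots
observations = [
    ("FR", "tiananmen", "v1"), ("FR", "tiananmen", "v2"),
    ("DE", "tiananmen", "v1"),
    ("FR", "election", "v3"), ("DE", "election", "v3"),
]

def missing_per_country(obs):
    """For each (query, country), the videos other countries saw but it did not."""
    seen = defaultdict(set)                      # (query, country) -> video ids
    countries = set()
    for country, query, vid in obs:
        seen[(query, country)].add(vid)
        countries.add(country)
    gaps = {}
    for query in {q for q, _ in seen}:
        union = set().union(*(seen[(query, c)] for c in countries))
        for c in countries:
            gaps[(query, c)] = union - seen[(query, c)]
    return gaps

def write_gexf(obs, path):
    """Minimal GEXF export of the undirected query-video graph for Gephi."""
    gexf = ET.Element("gexf", xmlns="http://www.gexf.net/1.2draft", version="1.2")
    graph = ET.SubElement(gexf, "graph", defaultedgetype="undirected")
    nodes, edges = ET.SubElement(graph, "nodes"), ET.SubElement(graph, "edges")
    pairs = sorted({(q, v) for _, q, v in obs})
    for name in sorted({n for pair in pairs for n in pair}):
        ET.SubElement(nodes, "node", id=name, label=name)
    for i, (q, v) in enumerate(pairs):
        ET.SubElement(edges, "edge", id=str(i), source=q, target=v)
    ET.ElementTree(gexf).write(path, encoding="utf-8", xml_declaration=True)

gaps = missing_per_country(observations)
print(gaps[("tiananmen", "DE")])        # {'v2'}: seen in FR, never in DE
write_gexf(observations, "tiktok_queries.gexf")
```

The GEXF file can be opened directly in Gephi; the same tuples feed both the gap detection and the network visualization, so the extension, the bots, and the analysis steps share one data format.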
Expose hidden censorship; monitor and document demotion.
Understand and measure search-result pollution and the automatic demotion of sensitive topics. Map country-specific censorship and content-moderation strategies.
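One way to make "measuring demotion" concrete is to compare the ranked search results the platform returns for the same query from different countries. The metrics below are an illustrative sketch (the function names and sample rankings are invented for this example): low overlap at the top of the list, or a video's rank dropping sharply in one country, may indicate country-specific demotion.

```python
# Illustrative demotion metrics; rankings are invented sample data.

def overlap_at_k(a, b, k=10):
    """Fraction of the top-k results that the two rankings share."""
    return len(set(a[:k]) & set(b[:k])) / k

def rank_shift(video, a, b):
    """Positive when `video` ranks worse in b than in a; absent counts as last."""
    rank_a = a.index(video) if video in a else len(a)
    rank_b = b.index(video) if video in b else len(b)
    return rank_b - rank_a

# Hypothetical top-4 results for one query from two vantage points
fr = ["v1", "v2", "v3", "v4"]
de = ["v3", "v4", "v5", "v6"]

print(overlap_at_k(fr, de, k=4))   # 0.5: only half the top results coincide
print(rank_shift("v1", fr, de))    # 4: top result in FR, absent from DE
```

Aggregating these scores per query and per country pair over time would let the project document demotion trends rather than isolated anecdotes.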