The Accuracy and Biases of AI-Based Internet Censorship in China
Keywords: AI censorship, China, internet regulation, automated content moderation, digital authoritarianism

Abstract
AI-driven censorship has become a central mechanism for controlling online discourse in China, enabling rapid detection and suppression of politically sensitive content. This paper examines the accuracy and biases of AI-based internet censorship, focusing on its evolution from manual to automated moderation, its effectiveness in identifying dissent, and the systemic biases through which it reinforces government narratives. While AI models are highly effective at filtering explicit political speech, they struggle with disguised dissent, satire, and coded language, leading to inconsistent enforcement and the unintended suppression of neutral content. Case studies, including COVID-19 information control, the Hong Kong protests, and labor rights discussions, illustrate both the strengths and limitations of AI censorship. A comparative analysis with global AI moderation models highlights key differences between China’s state-controlled digital governance and Western approaches to content moderation. The study further examines the challenges of censorship precision, including over-blocking, under-blocking, and the tension between suppressing political dissent and combating misinformation. Finally, the research discusses public and international reactions to China’s AI censorship model, evaluating its impact on digital sovereignty, global internet governance, and the future of AI-driven speech regulation. As AI censorship continues to evolve, the paper underscores the ethical dilemmas surrounding algorithmic transparency, state influence, and the export of China’s digital control model to other countries.