In this blog post, we examine whether fake news blocking systems can strike a balance between freedom of information and accuracy, and we look at how effective these systems are and where they fall short.
Introduction
Over the past few years, fake news has emerged as a major social issue around the world. Its influence became especially prominent during the 2016 US presidential election, prompting giant social media platforms such as Facebook and Google to introduce fake news blocking systems. These systems identify suspected fake news through user reports and the platforms' own algorithms, then limit its reach or remove it. However, debate continues over how effective these blocking systems really are and what side effects they bring.
The influence of fake news
During the 2016 US presidential election, the top 20 fake election news stories on Facebook drew about 8.7 million engagements (likes, shares, and comments), surpassing the roughly 7.36 million for the top 20 stories from mainstream news outlets. This sparked intense debate about the impact of fake news on the election result, and Facebook and Google were singled out as the main channels through which it spread. In response, both companies introduced fake news blocking systems and began taking various measures to restore public trust.
Following the US presidential election, the influence of fake news drew attention in other countries' elections as well. In the 2017 French presidential election, for example, Facebook and Google focused on curbing its spread, and subsequent analyses found that fake news circulated less widely than it had during the US election. This suggests that blocking systems are effective to some extent, but also that their effectiveness varies with each country's media environment and citizens' news consumption habits.
Effectiveness of fake news blocking systems
Initially, fake news blocking systems relied on user reports and acted only after the fact, but they have gradually shifted toward proactive measures that catch fake news before it spreads widely. Under this approach, Facebook has suspended more than 30,000 accounts that spread fake news, and Google has reduced the visibility of fake news sites in its search rankings, among other measures.
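To make the difference between report-driven and proactive moderation concrete, here is a minimal Python sketch of how the two signals could be combined. The thresholds, field names, and the idea of a single `classifier_score` are illustrative assumptions; they are not how Facebook or Google actually implement their systems.

```python
from dataclasses import dataclass

# Illustrative thresholds -- not values used by any real platform.
REPORT_REVIEW_THRESHOLD = 25      # reactive: queue for human review after enough user reports
CLASSIFIER_BLOCK_THRESHOLD = 0.9  # proactive: limit reach before the story spreads widely

@dataclass
class Story:
    url: str
    report_count: int        # user reports received so far
    classifier_score: float  # hypothetical model's estimate that the story is fake (0 to 1)

def moderation_decision(story: Story) -> str:
    """Combine a reactive (report-driven) signal with a proactive (model-driven) one."""
    if story.classifier_score >= CLASSIFIER_BLOCK_THRESHOLD:
        return "limit_distribution"    # proactive: act before the story spreads
    if story.report_count >= REPORT_REVIEW_THRESHOLD:
        return "send_to_human_review"  # reactive: enough users flagged it
    return "no_action"

print(moderation_decision(Story("https://example.com/a", report_count=3, classifier_score=0.95)))   # limit_distribution
print(moderation_decision(Story("https://example.com/b", report_count=40, classifier_score=0.20)))  # send_to_human_review
```

Even in this simplified form, both signals rest on judgments, those of the reporters and of the model, that can be wrong, which is where the debates below begin.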
However, there remains significant debate over the actual effectiveness of these blocking systems. For example, fake news blocked on Facebook can still be shared on other social media platforms like Twitter, or spread discreetly through private messages or closed groups. Furthermore, if the criteria for identifying fake news are unclear, there is a risk that legitimate news or opinions could also be blocked. This has raised concerns that the free flow of information may be hindered.
Moreover, even after Facebook and Google introduced their blocking systems, observers have pointed out that fake news has not disappeared and continues to spread through other channels. This suggests that blocking systems are not perfect and that simply suspending accounts or blocking individual stories is not enough to solve the problem of fake news.
Limitations and side effects of blocking systems
The biggest problem with blocking fake news is that the truth of a story is often hard to determine immediately. Even traditional media outlets sometimes publish false reports or articles without clear evidence, and such stories may end up classified as fake news. Moreover, when a story's content is genuinely debatable, a blocking system may reflect a biased perspective and suppress particular opinions. Some users, for example, tend to report news that conflicts with their political views as fake. If the reporting system is abused in this way, the opinions of certain groups can be excessively suppressed and opposing views can go unheard.
Facebook founder Mark Zuckerberg has commented on this, stating, “People often tend to report content they disagree with as fake news.” This demonstrates that users’ subjective judgments can influence the blocking system.
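A toy simulation makes this concrete. All of the numbers below are invented for illustration; the point is simply that when readers report stories they dislike at roughly the same rate as stories that are actually fake, raw report counts stop being a reliable signal of falsity.

```python
import random

random.seed(0)

def simulate_reports(is_fake: bool, audience_disagreement: float, n_readers: int = 1000) -> int:
    """Count user reports for one story under assumed reader behaviour.

    Assumption: a small fraction of readers report genuinely fake content,
    but readers who merely disagree with a story report it at a similar rate.
    """
    reports = 0
    for _ in range(n_readers):
        honest_report = is_fake and random.random() < 0.03
        partisan_report = random.random() < 0.03 * audience_disagreement
        if honest_report or partisan_report:
            reports += 1
    return reports

# A true but controversial story can collect about as many reports as a fake one.
print("fake, uncontroversial story:", simulate_reports(is_fake=True, audience_disagreement=0.1))
print("true, controversial story:  ", simulate_reports(is_fake=False, audience_disagreement=1.0))
```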
Preemptive blocking systems can also cut off important discussions by removing topics that deserve public debate before that debate can take place. When the definition of fake news itself is unclear, blocking news in advance risks violating the basic principles of freedom of information and democracy.
Conclusion and Recommendations
Currently, fake news blocking systems still have many problems, and their effectiveness is limited. Fake news cannot be blocked completely, and blocking decisions can be biased by the discretion of system operators. For these systems to work more effectively, more objective criteria and algorithms for determining the authenticity of news are needed. Blocking alone also has its limits, so it is equally important to educate the public and strengthen their ability to judge the truthfulness of information.
Furthermore, Facebook and Google must restore trust by increasing the transparency of their blocking systems and clearly explaining to users why a story has been blocked. Fake news is not merely a technical issue but also a social and political one. Blocking systems must therefore evolve in a direction that preserves freedom of information while minimizing the harm misinformation does to society.
Facebook and Google's blocking systems require continuous improvement. Beyond simply filtering and blocking fake news, they need to earn users' trust through more comprehensive and sophisticated methods. To achieve this, platform operators must collaborate with external experts to verify the accuracy of news, clearly communicate the reasons for blocking content, and establish technical and ethical standards that can suppress fake news effectively.
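As a rough picture of what such transparency could look like in practice, here is a hypothetical record for a single blocking decision, including the external fact checks behind it and the explanation shown to the user. The structure and field names are assumptions for illustration, not an existing platform API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FactCheck:
    organisation: str  # external fact-checking partner
    verdict: str       # e.g. "false", "misleading", "unproven"
    url: str           # link to the published fact check

@dataclass
class BlockDecision:
    story_url: str
    action: str                       # e.g. "removed", "reduced_distribution", "label_added"
    reason: str                       # human-readable explanation shown to the user
    fact_checks: list[FactCheck] = field(default_factory=list)
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    appeal_url: str = "https://example.com/appeal"  # placeholder route for contesting the decision

decision = BlockDecision(
    story_url="https://example.com/story",
    action="reduced_distribution",
    reason="Two independent fact-checking partners rated the story's central claim false.",
    fact_checks=[FactCheck("Example Fact Checkers", "false", "https://example.org/check/123")],
)
print(decision.action, "-", decision.reason)
```

Recording decisions in this kind of structured, user-visible form is one way a platform could pair enforcement with the accountability and appeal paths discussed above.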