Mark Zuckerberg, Chairman and Chief Executive Officer of Facebook, arrives to testify during the House Financial Services hearing on An Examination of Facebook and Its Impact on the Financial Services and Housing Sectors on Wednesday, Oct. 23, 2019.
Facebook CEO Mark Zuckerberg on Thursday criticized rival Twitter for its decision to fact-check President Trump’s tweets, telling CNBC that he doesn’t “think that Facebook or internet platforms in general should be arbiters of truth.”
Zuckerberg argues that Facebook’s role is to give people a voice and allow free expression. The company even permits misinformation in Facebook ads run by politicians.
“Political speech is one of the most sensitive parts in a democracy, and people should be able to see what politicians say,” Zuckerberg told CNBC.
But despite Zuckerberg’s comments, there have been instances throughout Facebook’s history where the company has played the role of arbiter of truth.
Here are some examples.
Requiring real names
For much of the company’s history, Facebook has required that people use their real names on their profiles. The company’s policy states that “the name on your profile should be the name that your friends call you in everyday life. This name should also appear on an ID or document from our ID list.”
Most notably, trolls have abused this rule to target and harass transgender Facebook users by reporting them for not using their legal names. In some instances, these users have had their accounts suspended. The harassment got so bad that, at one point in 2014, former Facebook executive Chris Cox issued an apology.
Policing Covid-19 misinformation
Since the coronavirus pandemic began spreading, Facebook has taken a proactive role in guiding its users to accurate Covid-19 information and hiding or removing misinformation about the virus.
Facebook announced in January that it would “remove content with false claims or conspiracy theories that have been flagged by leading global health organizations and local health authorities that could cause harm to people who believe them.” In March, the company launched a coronavirus information center to appear at the top of users’ News Feeds.
Earlier this month, Facebook said it had put warning labels on 50 million pieces of misinformation concerning Covid-19. The company has said that these labels dissuade users from clicking on inaccurate content 95% of the time.
When the “Plandemic” video began to go viral earlier this month, Facebook decided to remove posts that included it because the movie suggested “that wearing a mask can make you sick and could lead to imminent harm.”
In March, Facebook also removed a post from Brazilian President Jair Bolsonaro. In the post, Bolsonaro claimed that hydroxychloroquine could be used as a treatment for Covid-19.
“We remove content on Facebook and Instagram that violates our Community Standards, which do not allow misinformation that could lead to physical harm,” Facebook said at the time.
Removal of far-right conspiracy theorists
Earlier this month, Facebook made the decision to remove pages dedicated to the QAnon conspiracy theory. QAnon is a far-right conspiracy movement whose adherents believe there is a “deep state” plot against Trump.
Facebook removed five pages, which collectively had 133,000 followers. The pages violated Facebook’s policies against coordinated inauthentic behavior, which is defined as the use of multiple fake accounts working together to spread content that misleads people.