“You have two guarantees in this space,” Mr. Gleicher said. “The first guarantee is that the bad guys are going to keep trying to do this. The second guarantee is that as us and our partners in civil society and as our partners in industry continue to work together on this, we’re making it harder and harder and harder for them to do this.”
Facebook does not want to be an arbiter of what speech is allowed on its site, but it said it wanted to be more transparent about where speech comes from. To that end, it will now apply labels to outlets considered state-sponsored media, including the broadcaster Russia Today, to inform people whether an outlet is wholly or partially under the editorial control of its country’s government. The labels will appear on an outlet’s Facebook Page, as well as inside the social network’s advertising library.
“We will hold these Pages to a higher standard of transparency because they combine the opinion-making influence of a media organization with the strategic backing of a state,” Facebook said in a blog post.
The company said it developed its definition of state-sponsored media with input from more than 40 outside organizations around the world, including Reporters Without Borders, the European Journalism Center, UNESCO and the Center for Media, Data and Society.
The company will also more prominently label posts on Facebook and on its Instagram app that have been deemed partly or wholly false by outside fact-checking organizations. Facebook said the change was meant to help people better determine what they should read, trust and share. The label will be displayed prominently on top of photos and videos that appear in the news feed, as well as across Instagram stories.
How much of a difference the labels will make is unclear. Facebook and Instagram are home to more than 2.7 billion regular users, and billions of pieces of content are shared on their respective networks daily. Fact-checked news and posts represent only a fraction of that content. A wealth of information also spreads privately through Facebook’s messaging services, WhatsApp and Messenger, two conduits that have been identified as prime channels for misinformation.
Renee DiResta, the technical research manager for the Stanford Internet Observatory, called Facebook’s new measures for fighting disinformation “commendable.” But she also said it was “incongruous” for Facebook “to reiterate a commitment to fighting misinformation” even as it has permitted political leaders to put false information in posts and ads.