Facebook Expands Fake News-Fighting Efforts
Facebook announced it is “increasing” its “efforts to fight false news” in a blog post published Thursday.
Perhaps most notable in the update is the detail that the company is “beginning” to use machine learning to identify and demote or remove misinformation, and the accounts that spread such content.
Facebook is also expanding its fact-checking program to 14 countries.
“These certified, independent fact-checkers rate the accuracy of stories on Facebook, helping us reduce the distribution of stories rated as false by an average of 80%,” wrote Tessa Lyons, product manager at Facebook. “One challenge in fighting misinformation is that it manifests itself differently across content types and countries. To address this, we expanded our test to fact-check photos and videos to four countries. This includes those that are manipulated (e.g. a video that is edited to show something that did not really happen) or taken out of context (e.g. a photo from a previous tragedy associated with a different, present day conflict).”
The tech giant has provided several updates on its fact-checking endeavors in recent months. Facebook offered further clarification earlier in June on how it combats the alleged outbreak of deceitful news, one strategy being to give users additional context for certain stories featured on the platform.
But the use of algorithms to pinpoint misleading or false information has drawn criticism from some, like The New York Times CEO Mark Thompson. During a speech at a recent event titled “Breaking the News: Free Speech & Democracy in the Age of Platform Monopoly,” Thompson lambasted artificial intelligence — a key part of machine learning — for this purpose.
“The process of citizens making up their own mind which news source to believe is messy, and can indeed lead to ‘fake news,’ but to rob them of that ability, and to replace the straightforward accountability of editors and publishers for the news they produce with a centralized trust algorithm will not make democracy healthier but damage it further,” Thompson said.
Google, the most powerful search engine, and potentially company, in the world, displayed fact checks that were incorrect — technically making the feature itself “fake news.”
Google's attempts to verify certain misattributed claims were riddled with errors, rendering that key aspect of its sidebar widget faulty. After some back-and-forth, Google eventually agreed with TheDCNF’s investigation, suspending the feature and blaming the errors on a flawed algorithm. Officials within the tech giant, however, declined to explain further because algorithms are deemed proprietary.
“The underlying danger — of the agency of editors and public alike being usurped by centralized algorithmic control — is present with every digital platform where we do not fully understand how the processes of editorial selection and prioritization take place,” Thompson, who is calling for transparency if companies feel the need to use algorithms, said in his speech.
Another example of imperfectly designed or implemented algorithms — which are, for the most part, reflections of their creators — is Facebook’s new initiative to label political ads, a response to the clamoring over Russia’s influence in the 2016 election.
Those new rules, also a point of contention during the event at which Thompson spoke, appear to be unrefined. The automated system has been scooping up content that is not political advertising, but rather content that merely relates to politics (which is arguably almost anything). (RELATED: Facebook Ditches Another Fake News-Fighting Initiative)
Nevertheless, Facebook is pushing on and is proud of it.
“Over the last year and a half, we have been committed to fighting false news through a combination of technology and human review, including removing fake accounts, partnering with fact-checkers, and promoting news literacy,” wrote Lyons. “This effort will never be finished and we have a lot more to do.”