Somewhat perversely, little dominated the headlines in 2016 more than “fake news”. Spurred by the divisive presidential election, ideologues weaponized the internet to spread knowingly false information in order to harm their opponents. From there, the term took on a life of its own, to the point where people now use it to describe any piece of information they don’t like.
Even though its meaning has been watered down, fake news remains a serious problem. The core issue is that an article posted on Facebook carries the same apparent authority whether it was written by a reputable newspaper with a long history of reporting or by an unknown blogger whose website merely looks like a reputable news outlet. As The Guardian put it, “The spread of false information is not a new problem, but has been amplified by the ease of publishing and the vast reach provided by search engines and expansive social networks. The barrier to entry is small and the volume of content high, leading to an expanding issue across multiple platforms and outlets.”
Since the election, a great deal of time and energy has gone into discussing how we can debunk fake news and restore accuracy to our democracy. Many critics charged the big tech titans—Facebook, Google, and Twitter among them—with not doing enough to police their own networks for false information. The stakes are underlined by research from the Reuters Institute for the Study of Journalism, which “found that Facebook was the primary news source for 18-to-24-year-olds.”
Despite this data, those companies had long held that they were “tech companies, not media companies,” and that filtering the information on their platforms was therefore not their job. In a positive development, however, this year the big tech companies seem to be stepping up to the challenge of helping readers determine what’s real and what’s fake.
The first to move was Facebook, which published a page on the network teaching users how to spot fake news when it appears in their newsfeeds. “Be sceptical of headlines. False news stories often have catchy headlines in all caps with exclamation points. If shocking claims in the headline sound unbelievable, they probably are,” Facebook warned, adding: “Look closely at the URL. A phony or look-alike URL may be a warning sign of false news. Many false news sites mimic authentic news sources by making small changes to the URL. You can go to the site to compare the URL to established sources.”
While tips like these put the onus on the user to do due diligence, Facebook is also rolling out more systematic ways to address the spread of fake news, including “disrupting economic incentives because most false news is financially motivated” and “building new products to curb the spread of false news.” Facebook’s decision to wade into this terrain reverses Mark Zuckerberg’s earlier denial that fake news on his network influenced the election, so it can be seen as a step forward in that sense.
Following in Facebook’s footsteps, Google has also rolled out fact-checking features. The Guardian reported that the search engine will “start displaying fact-checking labels in its search results to highlight news and information that has been vetted and show whether it is considered to be true or false, as part of its efforts to help combat the spread of misinformation and fake news.” This move came after the UK and German governments both put pressure on big internet companies to take responsibility for the content that appears on their websites.