How Facebook Can Better Fight Fake News: Make Money Off the People Who Promote It
Facebook and other platforms are still struggling to combat the spread of misleading or deceptive “news” items promoted across social networks. Just this week, a faked interview with a campaigning politician was seen by over a million Facebook users before its creator clarified that it was satire, while Facebook itself is enduring howls of criticism for refusing to ban the pages of notorious, incendiary conspiracy sites.
As is often the case, the underlying problem is more about economics than ideology. Platforms like Facebook depend on advertising for their revenue, while media companies depend on Facebook ads to drive eyes to their own websites, which is how they earn theirs. Within this dynamic, even reputable media outlets have an implicit incentive to prioritize flash over substance in order to drive clicks.
Less scrupulous publishers sometimes take the next step, creating pseudo news stories rife with half-truths or outright lies that are tailor-made to emotionally target audiences already inclined to believe them. Compounding the problem are the costs Facebook would incur to police this itself: it is probably not feasible to hire teams of fact checkers large enough to review every deceptive news item that is advertised or promoted on the platform.
I believe Facebook could leverage the aggregate insights of its own users to root out false or deceptive news, and then, remove the profit motive by charging publishers who try to promote it.
The first piece involves user-driven content review, a process that has been successfully implemented by numerous Internet services. The dot-com-era rating site Hot or Not, for instance, ran into a moderation problem when it debuted a dating service. Instead of hiring thousands of internal moderators, Hot or Not asked a selection of users whether an uploaded photo was inappropriate (nudity, offensive content, spam, etc.).
Users worked in pairs to vote on photos until a consensus was reached. Photos flagged by a strong majority were removed, and users who voted with the final decision were awarded points. Only photos that garnered a mixed reaction were reviewed by company employees for a final determination: typically, just a tiny percentage of the total.
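The logic of that triage is simple enough to sketch in a few lines. This is a minimal illustration, not Hot or Not's actual implementation; the function name and the 80% threshold are my own assumptions:

```python
from collections import Counter

def review_photo(votes, flag_threshold=0.8):
    """Classify a photo from accumulated user votes.

    votes: list of "ok" / "inappropriate" strings, one per reviewer.
    Returns "removed", "kept", or "escalate" (a mixed reaction goes
    to staff). The threshold is illustrative, not Hot or Not's number.
    """
    tally = Counter(votes)
    total = len(votes)
    if tally["inappropriate"] / total >= flag_threshold:
        return "removed"    # strong majority flagged it
    if tally["ok"] / total >= flag_threshold:
        return "kept"       # strong majority approved it
    return "escalate"       # only this remainder reaches employees
```

The point of the design is in the last line: paid human review is reserved for the small slice of genuinely ambiguous cases.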
Facebook is in an even better position to implement such a system, since it has a truly massive user base that it knows in granular detail. It could easily select a small subset of users (several hundred thousand) to conduct content reviews, chosen for their demographic and ideological diversity. Perhaps users could opt in to be moderators in exchange for rewards.
Applied to the problem of Facebook ads which promote deceptive news, this review process would work something like this:
- A news site pays to advertise an article or video on Facebook
- Facebook holds this payment in escrow
- Facebook publishes the ad to a select number of Facebook users who’ve volunteered to rate news items as Reliable or Unreliable
- If a supermajority of these reviewers (60% or more) rates the item Reliable, the ad is automatically published, and Facebook takes the advertising money
- If the item is flagged as Unreliable by 60% or more of reviewers, or the verdict is mixed, it's sent to Facebook's internal review board
- If the review board determines the news to be Reliable, the ad for the article is published on Facebook
- If the review board deems it Unreliable, the ad for the article is not published, and Facebook returns most of the ad payment to the media site, keeping 10–20% to cover the cost of the review process
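The steps above amount to a small state machine plus an escrow split, which can be sketched as follows. The function names are hypothetical, and the 15% fee is just an illustrative point inside the 10–20% range suggested above:

```python
def crowd_verdict(votes, supermajority=0.60):
    """First-pass verdict from volunteer reviewers.

    votes: list of "reliable" / "unreliable" strings.
    """
    reliable = sum(1 for v in votes if v == "reliable") / len(votes)
    if reliable >= supermajority:
        return "publish"          # ad runs; payment leaves escrow
    return "internal_review"      # 60%+ unreliable, or mixed, goes to staff

def settle_escrow(payment, board_verdict, review_fee=0.15):
    """Split an escrowed ad payment after the internal board rules.

    Returns (facebook_keeps, publisher_refund).
    """
    if board_verdict == "reliable":
        return payment, 0.0       # ad is published; full payment kept
    fee = payment * review_fee
    return fee, payment - fee     # ad blocked; refund minus review cost
```

Note the incentive alignment: the platform earns the full payment when an ad is clean, and still recoups its review costs when one is rejected.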
I’m confident a diverse array of users would consistently identify deceptive news items, saving Facebook countless hours in labor costs. And under this system, the company immunizes itself against accusations of political bias. “Sorry, Alex Jones,” Mark Zuckerberg can honestly say. “We didn’t reject your ad for promoting fake news; our users did.” Perhaps more important, the social network would not only save on labor costs but actually make money by removing fake news.
This strategy could also be adopted by other social media platforms, especially Twitter and YouTube. To make real headway against the epidemic, the leading Internet advertising networks, chief among them Google, would also need to implement similar review processes. The same layered consensus filter should also be applied to suspect content that’s voluntarily shared by individuals and groups, and to the bot networks that amplify it.
To be sure, this would only put us somewhat ahead in the escalating arms race against forces still striving to erode our confidence in democratic institutions. Seemingly every week, a new headline reveals the challenge to be greater than we had imagined. So my purpose in writing this is to confront the excuse Silicon Valley usually offers for inaction: “But this won’t scale.” Because in this case, scale is precisely the power social networks have to defend us.
A variation of this post first appeared on TechCrunch