Fighting Fake News With Calm Design Principles

Image via TechCrunch

TechCrunch just ran my editorial, which proposes a solution Facebook could implement to greatly curb the spread of deceptive, inflammatory “news”:

How Facebook Can Better Fight Fake News: Make Money Off the People Who Promote It

[Facebook should] leverage the aggregate insights of its own users to root out false or deceptive news, and then, remove the profit motive by charging publishers who try to promote it.

The first piece involves user-driven content review, a process that has been implemented successfully by numerous Internet services. Instead of hiring thousands of internal moderators, Hot or Not asked a set of selected users whether an uploaded photo was inappropriate (pornography, spam, etc.). Only photos that garnered a mixed reaction were reviewed by company employees to make a final determination; typically, that was just a tiny percentage of the total.
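Here is a minimal sketch of that triage logic in Python. The function name and the consensus threshold are illustrative assumptions of mine, not details from the editorial:

    # Hypothetical triage for Hot-or-Not-style content review:
    # clear user consensus resolves an item automatically; only
    # mixed verdicts are escalated to employee moderators.
    def triage(flag_votes: int, ok_votes: int, threshold: float = 0.8) -> str:
        """Return 'remove', 'keep', or 'escalate' for one reviewed item."""
        total = flag_votes + ok_votes
        if total == 0:
            return "escalate"  # no signal yet; let a human look
        flag_ratio = flag_votes / total
        if flag_ratio >= threshold:
            return "remove"    # strong consensus: inappropriate
        if flag_ratio <= 1 - threshold:
            return "keep"      # strong consensus: fine as-is
        return "escalate"      # mixed reaction -> employee review

Only items returning “escalate” would ever reach paid moderators, which is what keeps the human-reviewed fraction so small.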

Facebook is in an even better position to implement a system like this: it could easily select a small subset of users (several hundred thousand) to conduct content reviews, chosen for their demographic and ideological diversity.
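That selection step reads like a stratified sample across demographic and ideological buckets. A rough sketch, where the field names (region, age_band, lean) are purely hypothetical stand-ins for whatever signals Facebook actually holds:

    import random
    from collections import defaultdict

    def select_reviewers(users, per_bucket=100, seed=42):
        """Draw an equal-sized sample from each demographic/ideological
        bucket so that no single cohort dominates the review pool."""
        rng = random.Random(seed)
        buckets = defaultdict(list)
        for user in users:
            # The bucket key is illustrative; real signals would differ.
            buckets[(user["region"], user["age_band"], user["lean"])].append(user)
        pool = []
        for members in buckets.values():
            pool.extend(rng.sample(members, min(per_bucket, len(members))))
        return pool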

There are, of course, many more pieces to the user experience I am proposing, but at its core, it follows a key set of principles from calm technology:

Technology should amplify the best of technology and the best of humanity:

  • Design for people first.
  • Machines shouldn’t act like humans.
  • Humans shouldn’t act like machines.
  • Amplify the best part of each.

The best part of humans is curation; the best part of machines is helping sift through mountains of content while also connecting humans to one another across distance.

In the context of fighting fake news, this means not relying solely on automated algorithms to filter out spurious items. It also means not relying on humans who act like algorithms themselves, mechanically downvoting any content that disagrees with their worldview. The happy medium is a human judgement system, with reviewers chosen for their diversity, driving a machine-powered content filter that responds to aggregates of human judgement.
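One way the whole loop could fit together, sketched with illustrative names and an assumed penalty weight: independent human verdicts are aggregated into a single score, and the machine-powered ranking simply responds to that aggregate.

    def deception_score(verdicts):
        """Aggregate independent human verdicts (True = 'looks deceptive')
        into a fraction between 0 and 1."""
        return sum(verdicts) / len(verdicts) if verdicts else 0.0

    def rank_feed(items, penalty=10.0):
        """Machine ranking that responds to aggregated human judgement:
        the more reviewers flagged an item, the further it sinks.
        Each item is a dict with a 'base_rank' (lower = more prominent)
        and a list of boolean 'verdicts'."""
        return sorted(
            items,
            key=lambda item: item["base_rank"]
            + penalty * deception_score(item["verdicts"]),
        )

Demotion is only half the proposal; the editorial pairs it with charging the publishers who promote flagged items, which this sketch does not model.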

Here’s how I explore this concept in my book, also touching on the relationship of AI to human work:

Technology should amplify the best of technology and the best of humanity. A person’s primary task should not be computing, but being human. The best automation systems work when we work in a symbiotic relationship with technology.

Google is successful because bots filter results for us, allowing us to make the final decision. Todd Huffman’s company 3Scan employs a tissue-scanning robot to do the work of humans at fifteen hundred times the speed of human hands. This frees up time for biologists to do important cancer research, and the scanned results are sent to a voting system that lets doctors train machine learning models for better cancer prediction.

Newspapers can use algorithms to compile information about sports events, and technological aids can be helpful for research, but important news stories should always be written by humans. You cannot automate Dostoevsky. That content comes from lived experience as a human being.

The viral spread of fake news is just one challenge harming social networks and our very democracy. In a follow-up editorial, I’ll explore the subtle design features that can cause social media to be so corrosive. Until then, I hope you’ll connect with me on Twitter.

Written by

Design advocate, speaker and author of Calm Technology + Designing With Sound. Research Fellow at Institute for the Future. Caseorganic.com
