Fixing Facebook: Two Simple UI Tweaks to Curb Stress & Fight Fake News
Over the last few weeks, during the height of the outrage over Facebook and its policies, I wrote a couple of TechCrunch essays that approach the problem from a UX perspective. I’m convinced that Facebook’s user interface, rife with subtle visual and interactive cues that encourage spikes of frenetic engagement, contributes greatly to the larger challenges the social network is dealing with. Fortunately, I’m also convinced that small changes to the UI can help curb our worst interactions on the platform. Expanding on ideas from the TechCrunch posts, here are a couple of mock-ups to illustrate what I mean:
A Rest Option for the Facebook Feed
The Problem: Imagery accelerates and magnifies user engagement; it also encourages users to take and spread screencaps of incendiary private discussions and inflammatory exchanges from other social networks. The ever-increasing speed of wireless broadband further exacerbates this problem, encouraging emotional engagement whenever and wherever we have a device in our hands. It’s rare that you can scroll down a Twitter or Facebook feed without getting emotionally hooked by something. Unlike an analog conversation, which might hook you emotionally one part at a time, social media feeds offer multiple barbs per page. Scroll long enough and there is no escape.
The Solution: Implement timeline “rest” options. To address the cascade of emotional hooks created by timeline feeds, Facebook and Twitter should experiment with a pause button that imposes user-set resting periods, during which users wouldn’t receive notifications or comments associated with their timeline.
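To make the idea concrete, here is a minimal sketch of what a user-set rest window might look like under the hood. All names here (`RestWindow`, `deliver`) are hypothetical, not part of any real Facebook or Twitter API: the point is simply that notifications arriving inside the window are held in a queue instead of being delivered.

```python
from datetime import datetime, time

class RestWindow:
    """A hypothetical user-configured daily rest period."""

    def __init__(self, start: time, end: time):
        self.start = start  # e.g. 21:00
        self.end = end      # e.g. 07:00 (the window may wrap past midnight)

    def is_resting(self, now: datetime) -> bool:
        t = now.time()
        if self.start <= self.end:
            return self.start <= t < self.end
        # Window wraps midnight, e.g. 21:00 -> 07:00.
        return t >= self.start or t < self.end


def deliver(notification, window: RestWindow, now: datetime, held: list) -> bool:
    """Deliver a notification immediately, or hold it during a rest period.

    Returns True if delivered now, False if queued for later.
    """
    if window.is_resting(now):
        held.append(notification)  # released once the rest period ends
        return False
    return True
```

A feed client could check `is_resting` before rendering new timeline items as well, so the rest period quiets both pings and the feed itself.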
A User-Driven Content Review System to Curb Ads for Fake News
The Problem: Sites like Facebook depend on advertising for their revenue, while media companies depend on ads on Facebook to drive eyes to their websites, which in turn earns them revenue. Within this dynamic, even reputable media outlets have an implicit incentive to prioritize flash over substance in order to drive clicks. Less scrupulous publishers sometimes take the next step, creating pseudo news stories rife with half-truths or outright lies that are tailor-made to emotionally target audiences already inclined to believe them. Compounding this problem is the high cost to Facebook as a corporation: it’s likely not feasible to hire massive teams of fact checkers to review every deceptive news item that’s advertised on its platform.
The Solution: Borrow a model from Hot or Not, the photo-rating site. Instead of hiring thousands of internal moderators, Hot or Not asked select users whether an uploaded photo was inappropriate (pornography, spam, etc.). Users worked in pairs to vote on photos until a consensus was reached. Only photos that drew a mixed reaction, typically just a tiny percentage of the total, were reviewed by company employees, who made the final determination.
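The paired-review scheme described above can be sketched in a few lines. This is an illustrative reconstruction, not Hot or Not’s actual code; the function names (`review`, `triage`) are mine. The key property is that staff only ever see the items where the two reviewers disagreed.

```python
def review(item, vote_a: bool, vote_b: bool) -> str:
    """Resolve one item from two reviewers' 'inappropriate?' votes.

    Agreement settles the item; disagreement escalates it to staff.
    """
    if vote_a == vote_b:
        return "inappropriate" if vote_a else "ok"
    return "escalate"  # mixed reaction: a company employee decides


def triage(items, votes):
    """Partition items into crowd-settled and staff-escalated buckets."""
    settled, escalated = [], []
    for item, (a, b) in zip(items, votes):
        verdict = review(item, a, b)
        bucket = escalated if verdict == "escalate" else settled
        bucket.append((item, verdict))
    return settled, escalated
```

Because most votes agree, `escalated` stays small relative to the total volume, which is exactly what keeps the staffing cost manageable.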
Facebook is in an even better position to implement a system like this, since it has a truly massive user base that the company knows in granular detail. It could easily select a small subset of users (several hundred thousand), chosen for their demographic and ideological diversity, to conduct content reviews.
Rest options and user moderation are two ways to help, but there are always more. Ultimately it’s up to us to insist on a better, more humane social media experience — and not let the inertia of our everyday surroundings dull us back into our usual, templated routines.
My next post will be about how to take control of your own relationship with social media, because until social networks implement features like these, it’s up to us to manage our own media consumption.