How to Identify “Truthy” Tech Trends

Amber Case
8 min read · Jul 11, 2023


Telling Alexa to tell a branded sink to dispense a robotic amount of water in Amazon’s Smart Home of the Future. Image credit: Mashable

Do you remember Amazon’s Connected Homes? Only a few years ago, the tech giant touted them as the latest in “smart home” technology — where, for example, you could bring your cup to the kitchen and ask Alexa to fill it. In a video promo, that feature looked pretty cool.

When I was invited to preview Amazon’s “Home of the Future” in person, however, I quickly realized you couldn’t just hold your cup to the faucet. You also had to say, “Alexa, tell [brand name of the sink] to fill the cup with 8 ounces of water”. Telling your branded faucet exactly how much water you’d like is not something you’d actually want to do when you’re groggy and thirsty at 2am.

But there’s a whole set of technologies like these — devices or platforms promised as innovative, which break down when inserted into our everyday lives. While watching yet another promo for Connected Homes that depicted a San Franciscan in a minimalist apartment talking to his AI as the “future,” I said to a friend, “Seems pretty truthy.”

Originally coined by Stephen Colbert to satirize deceptive beliefs in politics, “truthiness” is rife in the tech world too — attempts to market us a new product or evangelize an emerging technology that’s exciting at first glance, but quickly loses its luster when real-world considerations creep in. (In Amazon’s case, those include the FTC hitting the company with fines for Alexa violating user privacy.)

Spotting the latest outbreak of truthiness in technology has become at least as important as sniffing it out in the political arena. While there are always exceptions, truthy tech usually shares these traits:

TV & Movies are sources and inspirations for truthy tech

Future med tech is always blue and has exciting robot arms in a glass-walled room!

Any time the latest movie in the Fast and the Furious franchise debuts, I enjoy spotting all the flashy high-tech devices — like an augmented reality X-ray used to scan Idris Elba’s character — which make little practical sense. That’s usually the case across movies and TV shows: technology props are designed to be visually engaging and serve the needs of the script, not to be functional.

Imagine working with this for 8 hours a day as a government worker, processing paperwork! My arms feel sore just thinking about it! Source: Minority Report, 2002.

But thanks to that very visual and emotional impact, truthy technology from Hollywood has an outsized influence on real-world product designers. The hand-gesture controls of Minority Report — which would cause tremendous arm strain in the real world — are a notorious example. And I’ve previously written about the tech industry’s truthy fixation with adding “futuristic” blue light sources to products, due primarily to the recent availability of cheap blue LEDs and the influence of science fiction franchises like Blade Runner.

Cinematic effects add to on-screen drama and are less present in everyday life. Above: From the movie Patriot Games

You can see this stark difference when you compare technology as depicted in movies versus its actual deployment in real life. In big budget thrillers, the counter-terrorist command center is glowing with giant viewing screens controlled by batteries of high-powered computer terminals.

Obama’s Osama Bin Laden raid, sans the blue lighting and cinematic effects! Photo by Pete Souza.

In real life, Obama oversaw the Bin Laden raid in what looks like a Holiday Inn meeting room next to someone on an old laptop. The institutions themselves often engage in this self-mythologizing: The CIA’s museum for tourists looks vastly more exciting than its actual, drab corporate office in Langley.

When we’re drawn to technology first depicted in TV and movies, we need to be especially mindful that its real-world value is often far less than its surface appearance suggests. The glitz distracts us from the realization that, generally, technology should be a pass-through interface: instead of noticing the device, we want to just use it. Xerox PARC’s Mark Weiser has a quote about UI that still rings true:

“A good tool is an invisible tool. By invisible, we mean that the tool does not intrude on your consciousness; you focus on the task, not the tool.”

You don’t notice a window when it is clean; you look through it. You only notice the window when it is dirty. In the same way, when a technology works, you don’t notice it; when it doesn’t, you do.

Which takes me to another principle:

Many people are suddenly rushing into it without any clearly defined purpose, for fear of being left behind or missing out.

No one makes good decisions in panic mode. Without understanding the fundamentals of a new technology, many executives rush to announce that they’re including it in their offerings. Then the offerings themselves are created in a rush, are brittle, and break down.

Above: The rise and fall of “web3” on Google Trends

Cryptocurrency and blockchain are largely an example of this: millions bought into crypto on little more than vague, panic-inducing notions like “it’s the future of money” or “get in now to get rich, or lose out”. Numerous major organizations, too, touted their web3 and blockchain offerings without a clear explanation of why they were needed. (And consequently, they have attracted few customers.) With many nascent technologies, we have to peel away many layers before clear and sustainable use cases emerge — and sometimes, they don’t.

This is not to say crypto and blockchain are simply a fad or a waste of time, by the way — by all means, play with them (cautiously!) and learn from them, while keeping an open mind. But when you hear too many grandiose “it’s the future” pronouncements, look for indicators of real adoption and usage.
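If you want to run that kind of check yourself, here’s a minimal sketch using pytrends, an unofficial Python client for Google Trends — the library choice and the “javascript” baseline term are my own assumptions for illustration, not anything from the Trends chart above:

```python
# A rough way to eyeball a hype cycle yourself: pull Google Trends data
# for a buzzword and see whether interest is sustained or just a spike.
# Uses the unofficial pytrends library (pip install pytrends).
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)

# Compare the buzzword against a baseline term with steady, real usage
# ("javascript" here is an arbitrary choice of stable reference point).
pytrends.build_payload(["web3", "javascript"], timeframe="2020-01-01 2023-07-01")
interest = pytrends.interest_over_time()  # DataFrame indexed by date, 0-100 scale

# A fad tends to spike sharply and then decay; sustained adoption
# looks like a plateau or a steady climb.
monthly = interest[["web3", "javascript"]].resample("M").mean().round(1)
print(monthly)
```

Keep in mind that Google scales these numbers relative to peak interest (0–100), so what matters is the shape of the curve — spike-and-decay versus plateau — not the absolute values.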

Truthy tech promises to do too many major things too soon.

Part of processing a new technology is deciding on its best use; it also involves carefully weighing the difficulties and trade-offs inherent in implementing any new tech. (As the late media theorist Marshall McLuhan put it, “every extension is also an amputation”.)

By contrast, truthy tech comes with a battery of evangelists broadcasting a dizzying array of applications without any consideration for how to actually implement them. Twitter is currently drowning in viral posts of the “Top 10 use cases for ChatGPT that will change everything by next year!” variety.

But the idea itself is just a tiny part of the implementation, which tends to be bureaucratic, expensive, and slow. We’re often told, for example, that generative AI will totally disrupt health care — but not how difficult and time-consuming HIPAA compliance and FDA approval alone can be.

Technologies are more likely to be truthy when they inspire fear in the popular imagination.

New technology often excites some early adopters while filling most others with deep unease that can quickly become fear. The very act of presenting a given technology as “new” and “disruptive” discourages people from seeing it on a continuum of what’s already been developed, or the history of its origins.

“Artificial intelligence”, for instance, suggests something threatening to the very fabric of being human — but the term has been in common usage for decades, in video games and other products.

The very term “artificial intelligence” was actually coined for a very human reason: back in the 1950s, academics wanted a new name for their upcoming cybernetics conference, because MIT professor Norbert Wiener, who wrote the seminal book on that topic, tended to show up at any conference labeled “cybernetics” and dominate the conversation. People might be less fearful of “AI” if they realized it was originally coined to avoid an overbearing conference participant.

Just as movies and TV excite us about visually appealing if unrealistic technology, they also terrorize us by presenting technology at its most dystopian. While there is ample reason to be concerned about the negative externalities of AI, much of our public conversation dwells on the most highly speculative, implausible dystopian scenarios, like those depicted in The Terminator and The Matrix movies.

The far more tangible concern is AI being abused and misunderstood right now. People have already died or been seriously hurt due to the mistaken belief that cars really can “self-drive”, or that ChatGPT “knows” how to diagnose medical conditions. We’re distracted from discussing the real and current problems with AI due to the seductive power of truthy depictions of AI.

Truthy tech fades quickly; truthful technology wins out over the long haul.

It’s difficult to remember a time when Google Search wasn’t a pervasive part of the web, but at the start, it seemed too odd and alienating for most people. Launched in 1998, it began as a quirky college project competing against heavily funded and promoted search giants like Yahoo! and Ask Jeeves, which presented the web in clear, easy-to-read link summaries.

Google, by contrast, displayed its search results as a seemingly infinite wall of text. It took years of slow but steady growth among early adopters to realize that, despite (or rather because of) its messy, no-frills approach, Google simply worked.

As you can see in this hypnotic timeline video, it took nearly a decade for the company to eclipse its much larger, truthier competitors. Throughout that history, Google’s user experience has largely remained unchanged — it’s just that we the users have learned to see its effectiveness through the chaos.

This is likely why, ironically enough, we rarely see people using Google in movies and TV shows — its search results look too cluttered to read well on-screen. (A fake search engine called “SpyderFinder” shows up on multiple TV shows.) Google search, in other words, is not truthy enough for Hollywood!

In summary, tech is more likely to be “Truthy” if:

  • It was first depicted in TV & movies.
  • Seemingly everyone is rushing into it for fear of missing out.
  • It promises to do too many major things too soon.
  • It inspires fear in the popular imagination.

Truthiness in tech is not new. It goes back to 1890s visions of a culture filled with flying cars and jetpacks. Many generations grew up with the exciting, truthy covers of Popular Science magazine. After seeing outbreaks of truthiness repeatedly come and go for several decades, I’ve started to wonder whether there is any reliable way for us to immunize ourselves against technology’s most grandiose, unsustainable promises. Maybe it’s just human nature to get momentarily excited about the latest bauble. Perhaps all we can do is keep in mind that being dazzled by truthiness is basically part of our DNA. Truthful tech, by contrast, wins out over the long haul!

Amber Case is author of Calm Technology and a Research Director at the Metagovernance Project. A frequent design consultant across many industries, she was previously a fellow at MIT’s Center for Civic Media and Harvard’s Berkman Klein Center for Internet & Society.
