Why We Need to Stop Worrying About AI “Taking Over”
Designing a Future for AI & Automation We Can Live With, Part II
“AIs will soon steal our jobs and control our lives!” is an increasingly common theme in the media, usually accompanied by an image like the one above. Hollywood has mostly muddied this conversation by confusing artificial intelligence with robotics, and Silicon Valley hasn’t done much to clarify the difference. Movie references are easy to understand and add drama to an otherwise dry conversation. (Not for nothing do people like Elon Musk keep warning about the imminent arrival of Skynet.) Tech news outlets in need of dramatic, click-worthy headlines also have little incentive to add much clarity.
But I don’t mean to dismiss the basic concern: it’s real and legitimate. As I wrote in part one, both my parents eventually lost their careers to computer automation, starting in the 90s. My family’s perspective has also helped sharpen my thoughts on automation’s underlying problems, and on what we might do to address them. First, though, a few clarifications.
“AI” Rarely Refers to Actual Artificial Intelligence
My first understanding of AI came from my father Jim Case, an engineer who developed one of the first nodes of ARPAnet (the precursor to the Internet).
“If you have a human body,” he told me, “you feel blades of grass, and you know what it means to feel blades of grass. You have memories associated with them. You have a history with them. There are traditions and histories around grass, whether it’s a local tradition like the 4th of July, or a summer festival. This is culture. There isn’t ‘one’ experience of a blade of grass, or one definition of it. Even if one hundred thousand diverse people put their perspectives into an AI, you would get a reflection of those people and their distinct backgrounds. But you still wouldn’t get real-time data, and you still wouldn’t be able to get the AI to think.”
So he was decidedly bearish on the term: “AI isn’t real. It’s just a term that the industry uses to get people excited. And then due to inflated expectations, it goes away, and all the AI developers get fired and we get an AI winter again.”
DeepMind’s work possibly qualifies as higher-order AI. More limited examples include Siri, for its natural language processing, and the machine learning in Tesla’s self-driving cars. Even then, many systems described as AI are simply a vast collection of passive inputs connected to a massive database of if-then scenarios, often dressed up with some front-end sleight of hand to impress the end user.
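To make that concrete, here is a minimal sketch of such a system: a handful of passive sensor inputs wired to a table of if-then rules. The rule names and sensor fields are hypothetical, but the pattern is what many “AI-powered” products amount to under the hood.

```python
# A toy "AI": passive inputs checked against a table of if-then rules.
# All rule names and sensor fields here are hypothetical examples.
RULES = {
    "turn_lights_on": lambda s: s["motion_detected"] and s["lux"] < 50,
    "send_alert":     lambda s: s["door_open"] and s["alarm_armed"],
}

def decide(sensors):
    """Return every action whose condition matches the current inputs."""
    return [action for action, cond in RULES.items() if cond(sensors)]

state = {"motion_detected": True, "lux": 20, "door_open": False, "alarm_armed": True}
print(decide(state))  # ['turn_lights_on']
```

There is no learning and no understanding here, just lookups; yet with a polished voice interface in front of it, a system like this can pass as “intelligent”.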
Earlier this year, we were all impressed by Google Duplex’s ability to manage restaurant reservations, even though it was a narrow AI processing a limited range of variables. We missed that because the program said “uh” and “um” like a human. Its creators deliberately trained it to add imperfections, and in doing so, made it sound “smarter”!
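The trick itself is simple. Here is a toy illustration of disfluency injection, my own guess at the general technique rather than Google’s actual pipeline: sprinkle filler words into fluent text before it is synthesized into speech.

```python
import random

# Toy disfluency injection: insert filler words into fluent text so the
# synthesized speech sounds more human. A sketch of the general idea,
# not Google Duplex's actual implementation.
FILLERS = ["um,", "uh,"]

def add_disfluencies(text, rate=0.15, seed=7):
    rng = random.Random(seed)  # seeded so the example is repeatable
    words = []
    for word in text.split():
        if rng.random() < rate:
            words.append(rng.choice(FILLERS))
        words.append(word)
    return " ".join(words)

print(add_disfluencies("I would like to book a table for four at seven tonight"))
```

A few lines of deliberate imperfection, and the output reads as more human, without the system being one bit smarter.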
For a vivid illustration of AI’s actual limitations, try any of the programs that use an AI to replicate a deeply human skill: creating a realistic painting. The results are comical, often disturbing, and wildly, wildly off-base. Despite using some of the most powerful “AIs” on the market, the infantile results convey just how little we have to worry about computers replacing us anytime soon.
Which brings me to a related point:
“AI” Works Best With Human Collaboration: Consider Cybernetics & Centaurs
Our concerns around AI typically assume that these systems will eventually operate without human intervention. But they will still need to be built and maintained by people. Otherwise, who will re-calibrate them? Who will correct for false positives?
We know that machine learning works better with human collaboration. This is a concept inherent in cybernetics: a feedback loop ensuring that human input and machine input go hand in hand. When done well, it creates a “living”, more organic system.
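As a minimal sketch of that loop, with entirely hypothetical numbers and a deliberately crude “model”, consider a threshold classifier whose false positives are reviewed by a person, with each correction re-calibrating the machine:

```python
# A tiny cybernetic loop: the machine proposes, the human corrects,
# and the correction re-calibrates the machine. The data, threshold,
# and update rule here are hypothetical stand-ins.

def make_model(threshold):
    return lambda x: x >= threshold  # the machine's crude "positive" test

def human_review(x):
    return x >= 0.7  # stand-in for human judgment (the ground truth)

threshold = 0.3
inputs = [0.2, 0.5, 0.8, 0.65, 0.9]
for round_number in range(3):
    model = make_model(threshold)
    # The human flags the machine's false positives...
    false_positives = [x for x in inputs if model(x) and not human_review(x)]
    # ...and the machine re-calibrates itself from that feedback.
    if false_positives:
        threshold = max(false_positives) + 0.01
    print(f"round {round_number}: threshold={threshold:.2f}, "
          f"false positives={false_positives}")
```

After one round of human feedback, the false positives disappear; take the human out of the loop, and the miscalibration simply persists.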
And we know this tandem relationship works well, because we use it every day. Facebook uses it to profit from our screen time. Google Search categorizes human knowledge and presents a series of choices to those who search. Chess champion Garry Kasparov calls the pairing of a human with a good automated system a “Centaur”. A mediocre chess player working with a good machine learning program, he argues, can outperform chess masters who rely only on their own minds.
Centaurs work well in closed systems with a narrow range of outcomes. We are centaurs when we purchase plane tickets online, relying on software to churn through millions of flight scenarios that we continuously filter until we have a handful of options to choose from.
As with centaur chess, the final decision is made by the human (who usually takes into account subtle variables that machine learning is likely to miss).
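A toy version of that division of labor, with made-up flight data, might look like this: the machine enumerates and ranks every itinerary, and the human filters the shortlist and makes the final call.

```python
import itertools

# Centaur-style flight search sketch with made-up data: the machine
# enumerates and ranks every itinerary; the human filters the shortlist
# and makes the final call using context the scorer never modeled.
airlines = ["AA", "UA", "DL"]
stop_counts = [0, 1, 2]
prices = [199, 249, 329, 449]

# Machine half: generate every combination and rank it with a crude score.
itineraries = [
    {"airline": a, "stops": s, "price": p}
    for a, s, p in itertools.product(airlines, stop_counts, prices)
]
ranked = sorted(itineraries, key=lambda f: f["price"] + 50 * f["stops"])

# Human half: filter to a shortlist, then choose with judgment
# (schedules, airport preferences) that the score never captured.
shortlist = [f for f in ranked if f["stops"] <= 1][:5]
for flight in shortlist:
    print(flight)
```

The ranking is where the machine shines; the filter and the final pick are where those subtle human variables come back in.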
Centaurs are a useful reference to lean on when rolling out “AI” — to emphasize that it’s not about replacing humans, but arming them with the best intelligence the machine can give.
Let’s Talk About Automation, Not AI
“AI” is often a term tinged with broad, cinematic, and loaded expectations, but automation is a far more concrete concept, and has been changing our workplace and our culture in very real ways since the Industrial Revolution. And when we focus on automation, the challenge is no longer hypothetical, but practical: How can we design automated systems that help us become heroic, elevated centaurs, rather than demoralized, disposable gnomes working within the gears of an automated but inefficient machine?
That’s the topic for the next post in this series. Until then, I’d love to hear your thoughts on Twitter!