O.K., Google

Uh, Did Google Fake Its Big A.I. Demo?

The tech press has questions, and Google isn’t providing any answers.

Google C.E.O. Sundar Pichai’s demonstration of the company’s new virtual-assistant technology, unveiled at Google’s annual developer conference last week, was more unnerving than Pichai presumably intended it to be. Google Duplex, as the technology is called, represents a major leap forward in Silicon Valley’s efforts to produce robots that sound like people. It can make phone calls to schedule appointments, say, or to reserve a table at a restaurant, using familiar human verbal tics and filler words—“uhm,” “mmhmm,” and “gotcha”—that make it eerily hard to tell that the voice on the other end of the line is an artificial intelligence. To show the tech in action, Pichai played a recording of Google Assistant—Google’s answer to Apple’s Siri and Amazon’s Alexa—calling and interacting with someone who was purportedly an employee at a hair salon to make an appointment. “What you’re going to hear is the Google assistant actually calling a real salon to schedule an appointment for you,” Pichai told the audience. “Let’s listen.”

The demo was indeed impressive. It was also pretty unsettling, as many people quickly noted. (“Horrifying,” wrote one critic.) But is it possible that the promise of Google’s advanced artificial-intelligence tech is too good to be true? As Axios noted Thursday morning, there was something a little off in the conversations the A.I. had on the phone with businesses, suggesting that perhaps Google had faked, or at least edited, its demo. Unlike employees at a typical business (Axios called more than two dozen hair salons and restaurants), the people who answered the phone in Google’s demos never identified the name of the business, or themselves. Nor is there any ambient noise in Google’s recordings, as one would expect in a hair salon or a restaurant. At no point in Google’s conversations with the businesses did the employees who answered the phone ask for a phone number or other contact information from the A.I. Further, California is a two-party-consent state, meaning that both parties need to consent in order for a phone conversation to be legally recorded. Did Google seek the permission of these businesses before calling them for the purposes of the demo? Was the whole thing staged, in the simulated manner of reality TV?

Google isn’t saying. When Axios reached out for comment to verify that the businesses existed, and that the calls weren’t set up in advance, a spokesperson declined to provide names of the establishments; when Axios asked if the calls were edited (even just to cut out the name of the business, to avoid unwanted attention), Google also declined to comment. The company did not immediately respond to a series of questions from the Hive.

Of course, it’s entirely possible Google has successfully made a lifelike virtual assistant that can replicate human interactions over the phone, and it’s possible we’ll all be using and interacting with this kind of A.I. sooner than we might like. (Google responded to some of the controversy over Duplex’s features by promising that the bot would include a disclosure identifying itself as not human.) The snippets of conversation during Pichai’s demo, which can be heard in this clip, seem almost too polished to be real. But advancements in artificial intelligence are progressing rapidly. Tesla and Uber are developing self-driving cars. Amazon is replacing human cashiers with A.I. in automated grocery stores. Facebook is mining your personal data to predict your future actions for advertisers. And the technological arms race is just beginning. Global spending on artificial intelligence and machine learning is predicted to grow from $12 billion in 2017 to $57.6 billion by 2021, and venture-capital investment in A.I. companies is skyrocketing.

Some in Silicon Valley are justifiably wary of these head-spinning developments. And robots mimicking human speech patterns are the least of their concerns. Social-media bots have been weaponized to spread propaganda and misinformation. Voice-activated devices like Google Assistant can be hijacked by bad actors, as a team of researchers at the University of California, Berkeley, recently demonstrated by using audio commands undetectable to the human ear—hidden in a YouTube video—to commandeer Amazon’s Alexa and order it to make purchases. In a world where almost all consumer appliances—televisions, refrigerators, light switches, cars, door locks—will soon be WiFi-enabled, the capacity for these A.I.s to be manipulated—or to go rogue—is a terrifying prospect. As my colleague Nick Bilton has reported, rapid advances in machine learning will also have profound implications for the way public opinion is shaped. The same sort of software that allows Google Assistant to call up your salon without alarming the hairdresser can also be used to make it sound like Barack Obama is endorsing the technology firm putting those words in his mouth—or like Donald Trump is threatening nuclear war with North Korea. Those tools will soon be in everybody’s hands, from bored high-schoolers to Syrian rebels and Russian spies. Whether or not Google Assistant’s own reality-distortion tech is yet ready for prime time, it will be soon.