From The Verge, Oct. 4:
Sundar Pichai says the future of Google is AI. But can he fix the algorithm?
‘We feel huge responsibility’ to get information right
Unbeknownst to me, at the very moment on Monday morning when I was asking Google
CEO Sundar Pichai about the biggest ethical concern for AI today,
Google's algorithms were promoting misinformation about the Las Vegas shooting.
I was asking in the context of the aftermath of the 2016
election and the misinformation that companies like Facebook, Twitter,
and Google were found to have spread. Pichai, I found out later, had a
rough idea that something was going wrong with one of his algorithms as
we were speaking. So his answer, I think it's fair to say, also serves
as a response to the widespread criticisms the company faced in the days
after the shooting.
"I view it as a big responsibility to get it right," he
says. "I think we'll be able to do these things better over time. But I
think the answer to your question, the short answer and the only answer,
is we feel huge responsibility." Later, he added, "Today, we
overwhelmingly get it right. But I think every single time we stumble, I
feel the pain, and I think we should be held accountable."
Learning about Google's "stumble" after we talked put
some of our conversation in a different light. I was there to talk about
how Pichai’s project to realign the entire company to an "AI-first"
footing was going in the lead-up to Google's massive hardware event.
Google often seems like the leader in weaving AI into its products;
that’s certainly Pichai’s relentless focus. But it’s worth questioning
whether Google’s systems are making the right decisions, even as they make some decisions much easier.
When the subject isn't the failure of its news
algorithms, Pichai is enthusiastic about AI. There’s not much difference
between an enthusiastic Sundar Pichai and a quiet, thoughtful Sundar
Pichai, but you get a sense of it when he names, off the top of his head, a half-dozen Google products that have been improved by the company's deep learning systems.
Google's lead in doing clever, innovative things with AI
is impressive, and the examples Pichai cites can sometimes even verge on
inspiring — but there's clearly still work to do.
Most executives talk about AI like it's just another thing that's included in the box or in the cloud; it's a buzzword, a tick box on a spec sheet slotted in right after the processor. But Pichai is intent on pressing
Google's advantage in AI — not just by integrating AI features into
every product it makes, but by making products that are themselves
inspired by AI, products that wouldn't be conceivable without it.
There's no better example of that than Google Clips, a tiny camera that automatically captures seven-second moving
photos of things it finds "interesting." It's a new way to think about
photography, one that leverages Google's ability to do lots of different
AI tasks: recognize faces, recognize "bad" photos, recognize
"interesting" content. It's simply applied to your own pictures instead
of content on the internet.
Clips does all this locally: nothing is sent to the
cloud, and nothing integrates with whatever Google Photos knows about
you. As much as Google is known for doing its AI in the cloud, many of
the devices it's releasing are doing AI locally. Pichai says that's by
design, and that both kinds of AI are necessary. "A hybrid approach
absolutely makes sense," he says. "We will thoughtfully invest in both.
Depending on the context, depending on what you're dealing with, it'll
make sense to deploy it differently."
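For the curious, here's a rough sketch of what that hybrid routing might look like. Everything below is our own invention for illustration — the names, the fields, the rules — not any real Google API:

    # Hypothetical sketch of hybrid on-device/cloud inference.
    from dataclasses import dataclass

    @dataclass
    class Request:
        payload: bytes
        privacy_sensitive: bool  # e.g. personal photos, as with Clips
        online: bool             # is a network connection available?

    def run_local(payload: bytes) -> str:
        # Stand-in for a small, quantized model shipped on the device.
        return f"local result ({len(payload)} bytes)"

    def run_cloud(payload: bytes) -> str:
        # Stand-in for a larger, more accurate model behind a server API.
        return f"cloud result ({len(payload)} bytes)"

    def infer(req: Request) -> str:
        # Privacy-sensitive data never leaves the device, and offline
        # requests can't; everything else gets the bigger cloud model.
        if req.privacy_sensitive or not req.online:
            return run_local(req.payload)
        return run_cloud(req.payload)

The point of the split is exactly what Pichai says: where a model runs is a contextual decision, not a doctrinal one.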
Clips is the kind of thing Pichai wants Google to do more
of. "I made a deliberate decision to name the hardware product with [a]
software name," he says. "The reason we named it Clips is that the more
exciting part of it is … the machine learning, the computer vision work
we do behind the scenes."
For Google, making hardware is about selling products,
but it's also about learning how hardware can better integrate AI. "It's
really tough to drive the future of computing forward if you're not
able to think about these things together," Pichai says. Fundamentally,
his question about every hardware product is "how do we apply AI to
rethink our products?" He doesn't want to make AI just another feature,
he wants AI to fundamentally alter what each device is.
Some of those half-dozen AI examples Pichai cites are solutions to
problems you might not realize could be solved with AI. Recently, Google Maps added the ability to find parking near your destination. What you might not know is that Google isn't just canvassing local parking garages; it's using AI.
"It's fascinating," Pichai says. The Maps team applied AI
to see whether Google Maps users were finding parking easily when they
arrived at their destinations. "They have to distinguish between people
who have just shown up in a Lyft and gotten out, versus actually driving
the car and getting parking quickly."
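The Verge piece doesn't say how Maps actually does this, but the shape of the problem is easy to sketch. Here's a toy version — ours, with invented thresholds — that separates drop-offs from drivers using only how long someone lingered near the destination before arriving:

    # Toy illustration, not Google's pipeline: each trace is a list of
    # (seconds_since_entering_area, meters_from_destination) samples.

    def seconds_circling(trace, radius_m=300):
        # Time spent within radius_m of the destination before arrival.
        near = [t for t, dist in trace if dist <= radius_m]
        return (max(near) - min(near)) if near else 0

    def looks_like_dropoff(trace, cutoff_s=90):
        # A Lyft drop-off goes straight to the door; a driver hunting
        # for a spot lingers in the area first.
        return seconds_circling(trace) < cutoff_s

    def parking_difficulty(traces, slow_s=300):
        # Share of actual drivers who circled a long time: a proxy for
        # "parking is hard here" once drop-offs are filtered out.
        drivers = [t for t in traces if not looks_like_dropoff(t)]
        if not drivers:
            return 0.0
        slow = sum(1 for t in drivers if seconds_circling(t) > slow_s)
        return slow / len(drivers)

Filter out the drop-offs first, then measure how long everyone else hunted for a spot.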
We've gotten used to lots of online services quietly
getting better thanks to AI, but Pichai wants to drive that even more
aggressively into the devices we're using. In short, he wants AI to
change the user interface of our phones.
"The product can learn and adapt over time," Pichai says.
"You see very little of that today. My favorite [example] is I open
Google Fit [every day] to a certain view, and I navigate to a different
view." One wonders why he doesn't just wander over to the Google Fit
team and ask them to change it. Instead, apparently, he would like AI to
realize what you're doing with your phone "300 times a year" and make
it simpler....
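The mechanics aren't hard to imagine. A minimal sketch of the idea, with made-up names and cutoffs: count where the user navigates right after opening the app, and once one destination clearly dominates, start there instead:

    # Minimal sketch of the adaptive default view Pichai describes.
    from collections import Counter

    class AdaptiveHome:
        def __init__(self, default_view="home", min_opens=30, share=0.8):
            self.default_view = default_view
            self.first_navigations = Counter()
            self.min_opens = min_opens  # don't adapt on thin evidence
            self.share = share          # e.g. 80% of opens go one place

        def record_open(self, first_view_visited):
            # Log which view the user navigated to right after opening.
            self.first_navigations[first_view_visited] += 1

        def start_view(self):
            total = sum(self.first_navigations.values())
            if total >= self.min_opens:
                view, count = self.first_navigations.most_common(1)[0]
                if count / total >= self.share:
                    return view  # the learned habit wins
            return self.default_view

Open Google Fit to the same screen 300 times a year and, under a scheme like this, the app would eventually just start there.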
MUCH MORE
Meanwhile, in Australia, a machine learning treasure trove:
Australia approves national database of everyone's mugshots