
Google Assistant is learning to talk like a real person, and that should worry humans

Javed Anwer
May 12, 2018 | 13:35


How much artificial intelligence (AI) is too much? No one seems to know the answer to this, least of all the Google engineers who are apparently at the forefront of the new AI revolution. On Tuesday, at the I/O 2018 conference, Google demoed Duplex, a new kind of AI that will be part of the Google Assistant and that will talk to people on behalf of its human masters in a very human-like manner.


To prove that Duplex can indeed mimic a human, Google played a call it had recorded between the AI and a real person. In the call, Duplex makes a salon reservation, showing that it is every bit as witty and chatty as your next-door neighbour, and it even fills silences with "mmhhmm". Hundreds of engineers and developers attending I/O erupted in cheers, clapping and laughing at the way Duplex fooled the salon assistant.

Outside I/O, however, the reception for Duplex was not so cheerful. And rightly so.

Among the many issues that Duplex will bring to society once it starts living in our midst, inside our phones, the thorniest is the way it mimics humans. The way Google demoed Duplex made it clear that it has been created in the mirror image of humans.

The goal, explicitly, is to make it more human-like in its conversations. But that also changes the rules of conversation. People expect to talk to people, not to a bot. Conversations are not just an exchange of spoken words. They are more than that. Currently, when humans talk to bots, they know it, because bots are bad at talking. Duplex changes that. At least during the Google demo, it was as good as – in some cases even better than – a human being at chit-chat.


With the outcry growing, Google on Friday issued a clarification. The company now says that once it deploys Duplex, it will ensure that the bot identifies itself to people when calling on behalf of users. What form this identification will take is not yet clear. "We understand and value the discussion around Google Duplex – as we've said from the beginning, transparency in the technology is important," a Google spokeswoman told CNET.

"We are designing this feature with disclosure built-in, and we'll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product."

Fair enough. But the case of Duplex is hardly unique. In fact, it is the best example of what happens inside the giant technology companies of Silicon Valley. They are creating these smart systems, powered by machine learning and artificial intelligence, just because they can. The impact and implications of these smart bots, of services like Duplex, are questions that Silicon Valley engineers don't concern themselves with.

This year I wasn't at Google I/O, but last year I was. In 2017, Google emphasised that it was moving away from just being a search engine. It was still a company interested in cataloguing the world's data, but now it wanted to do it all using AI (a vision it expanded on this year at I/O).


It wants smart machines everywhere, and it wants them powered by Google's brilliant code. Incidentally, for the long flight to San Francisco – it's a nearly 16-hour direct flight from Delhi – I picked up Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari as my in-flight read.

[Photo: Sundar Pichai]

The book is about the future of humans, and parts of it deal with AI, where Silicon Valley is headed, technocracy and dataism, which Harari calls the "religion of the future". It is a fascinating, and at the same time scary, read.

Given that context, at I/O I started asking Googlers some of the questions around AI, and the relationship between technology and humans, that Harari raises in his book.

Again and again, I drew a blank. Not only were the Silicon Valley engineers uninterested in the complex implications of the technologies they create – implications that disturb the delicate balances governing humans and human societies – they were not even making an effort to think more holistically about the impact of their work.

Instead, as we saw from the announcements at the recent Google I/O and Facebook F8 – yes, the same Facebook that is in the middle of a privacy and misinformation crisis – Silicon Valley engineers only care about building things.

There was a time when the world would go all mushy at the mention of a new smart service coming out of Silicon Valley. But the last few years have changed that. We have started to discover that building things for the sake of building them may not always make the world a better place.

It may even break the world, something people are seeing in Sri Lanka, Myanmar and the Philippines, where Facebook has become the primary tool for spreading strife and hatred within societies. Then there are the conspiracy theories and extreme videos that float around on YouTube, the fake news problem, and the ease with which even videos can now be morphed using AI-assisted tools.

While both Google and Facebook, and countless other Silicon Valley companies, have started acknowledging the thorny issues around technology, their just-concluded developer conferences don't show that they are walking the talk. The dating service on Facebook, Duplex, the advanced face and photo recognition technology that Facebook and Google have created... all of them are signs that, for now, Silicon Valley remains in build mode.

And that should worry us all.


Last updated: May 13, 2018 | 17:04