
Facebook's AI chatbot goes rogue: Why we shouldn't be scared

Sushant Talwar, Aug 01, 2017 | 14:10


The last few days have seen some interesting developments in the science and technology world. On the one hand, we were pushed closer to an artificial intelligence (AI)-assisted world in which AI-powered driverless cars would be the new norm; on the other, an experiment that went awry at Facebook's headquarters in Menlo Park, California, raised questions over our preparedness for a future straight out of a '90s sci-fi movie.


Interestingly, the news of Tesla handing over the first batch of its new Model 3 cars – which could very soon become completely driverless via a simple over-the-air software update – came in the same week as the news of Facebook (FB) "shutting down" a high-profile AI chatbot programme because it got too smart and "started communicating" within its network in a language it "developed".

The incident added fuel to fears of a Skynet-esque AI takeover of the world, one that would spell the end of the human race.

But what exactly is FB's AI-powered chatbot?

Unlike Skynet, the chatbot programme that FB's AI team has been working on for some time now is no code for controlling the world's nuclear arsenal, but rather a sophisticated piece of software being groomed, through machine learning, to evolve into the ultimate negotiator.

It was created to strike the best deal possible, with more speed and efficiency than you or I could manage.

At Facebook Artificial Intelligence Research lab (FAIR), the chatbots were being trained for the purpose of showing it is “possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes”.



Where did things go wrong?

The news initially broke in June 2017, when researchers at FAIR discovered that an AI chatbot programme they had put much work into improving had outgrown them: its "dialog agents" had started creating their own language to become better negotiators.

At this point, the researchers took matters into their own hands and tweaked the programme's source code, since the bot-to-bot conversation in its existing form "led to divergence from human language", defeating the very purpose of creating the bots.

To remedy the situation, they went back to the drawing board and implemented a fixed supervised model – they did not shut the programme down – to ensure the "dialog agents" stayed on course and spoke no language other than English.
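To make the "fixed supervised model" idea concrete, here is a minimal, hypothetical sketch – not FAIR's actual code, and with invented names such as propose_replies and english_score – of how a frozen model trained on human English dialogue could keep a negotiation policy's replies anchored to English:

# Hypothetical sketch of the "fixed supervised model" idea -- not FAIR's code.
# A negotiation policy proposes candidate replies; a frozen model trained on
# human English dialogue scores them, and the most human-like reply is sent.

def respond(dialogue_history, policy, supervised_model, n_candidates=10):
    """Pick the policy's reply that the fixed supervised model rates most
    English-like. The supervised model is never updated by negotiation
    rewards, so it keeps anchoring the agents to human language."""
    candidates = policy.propose_replies(dialogue_history, n=n_candidates)
    scored = [(supervised_model.english_score(c), c) for c in candidates]
    best_score, best_reply = max(scored)
    return best_reply

Because the scoring model is frozen, the agents can still learn new negotiating strategies, but they cannot drift into a private dialect the scoring model has never seen.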

Human error behind the saga?

A buried line in a new Facebook report about the chatbots' conversations with one another explained why the dialog agents drifted from their path in the first place. The report claims that during training, the "dialog agents" began using shorthand and eventually created a dialect of sorts that their human creators could not understand – all because of a mistake in the programming of the code. The bots, which started out conversing with each other in simple English, evolved this new dialect because of a "lack of incentive of using English".


In a report published in Fastcodesign, Dhruv Batra, a visiting research scientist at FAIR from Georgia Tech, explained: "There was no reward to sticking to English language." As the two "dialog agents" competed to get the best deal in negotiations, an effectively adversarial dynamic emerged.

Offered no incentive to carry out their conversations in English the way a person would, the AI bots created a shorthand dialect of English to improve their efficiency at negotiating.
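A toy example makes the missing incentive easier to see. The sketch below is invented for illustration – it is not Facebook's code, and a simple look-up table stands in for a real language model. With alpha set to zero, an agent is graded purely on the deals it strikes, which mirrors the missing reward Batra describes; a positive alpha makes staying close to human English part of the objective:

# Toy illustration of the missing incentive -- invented, not Facebook's code.

def deal_reward(items_won, my_values):
    """Reward from the negotiation itself: total value of items the agent won."""
    return sum(my_values[item] * count for item, count in items_won.items())

def fluency(utterance, language_model):
    """Average per-token probability a fixed English model assigns to the
    utterance. (A real sequence model would also penalise degenerate
    repetition such as 'to me to me to me'; this unigram stand-in only
    penalises tokens that drift out of ordinary English vocabulary.)"""
    tokens = utterance.split()
    return sum(language_model.get(tok, 1e-6) for tok in tokens) / len(tokens)

def total_reward(items_won, my_values, utterance, language_model, alpha=0.0):
    # alpha = 0: no reward for sticking to English, so any private shorthand
    # that closes better deals wins out. alpha > 0: human-like language
    # becomes part of what the agent is optimising for.
    return deal_reward(items_won, my_values) + alpha * fluency(utterance, language_model)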

Should we be concerned?

Bob: I can i i everything else

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else

Alice: balls have a ball to me to me to me to me to me to me to me to me 

This cryptic conversation between the two bots has been likened to a Skynet-esque coded exchange between two machines looking to end humanity's reign over the planet, and several news websites have taken the opportunity to warn of an AI apocalypse.

Tesla founder Elon Musk, who has long preached caution on AI, has also found himself being quoted liberally. Musk had previously said, "I have exposure to the very most cutting-edge AI, and I think people should be really concerned about it".

But is the chatbot experiment a failure? Has humankind pushed itself closer to an AI apocalypse? Honestly, the reality is far from it. The idea that AI, if not kept in check, could grow into something sinister is, at its core, an expression of mankind's fear of the other, and should be taken with a pinch of salt.

Musk's is a school of thought that has been strongly contested by many industry leaders. Such fears were most recently downplayed by none other than Facebook's Mark Zuckerberg.

In an FB Live streamed from his Palo Alto home – which has an inbuilt, Jarvis-style AI assistant – Zuckerberg said, "I think people who are naysayers and try to drum up these doomsday scenarios – I don’t understand it... it’s really negative, and in some ways I think it's pretty irresponsible."

And in any case, such fears are unfounded for now: machine-learning capabilities today are nowhere close to achieving an AI singularity, and technology itself remains a clear bottleneck to such fears coming true in the near future.

Last updated: August 01, 2017 | 14:16