Article originally published in Voice Tech Podcast.

Three years ago, an event at Facebook's AI labs triggered a huge debate in the scientific community.

Researchers from the Facebook Artificial Intelligence Research lab (FAIR) made an unexpected discovery while trying to improve chatbots. The bots — known as “dialog agents” — were creating their own language.

Using machine learning algorithms, the dialog agents were left to converse freely in an attempt to strengthen their conversational skills. Over time, the bots began to deviate from the scripted norms and, in doing so, started communicating in an entirely new language, one they created without human input. Linguistically, it is mostly gibberish. But it is interesting that AI, given the opportunity, deviates from the script to create something new.

At first sight, it looks like just a gibberish interchange of words caused by some software bug. But probing a bit deeper revealed that, in an attempt to converse better with humans, the chatbots took it a step further and got better at communicating without them, in their own sort of way.

And it’s not the only interesting discovery.

Researchers also found these bots to be incredibly crafty negotiators. After learning to negotiate, the bots relied on machine learning and advanced strategies in an attempt to improve the outcome of these negotiations. Over time, the bots became quite skilled at it and even began feigning interest in one item in order to "sacrifice" it at a later stage in the negotiation as a faux compromise.

A few years back, Google's AI lab claimed a similar breakthrough while applying a general-purpose neural network to its Google Translate app. By general-purpose AI, we mean that if a neural network has been taught to translate between English and Japanese, and between English and Korean, it can also translate between Japanese and Korean without first going through English.

Though it didn’t perform very well, Google’s researchers think their system achieved a breakthrough by finding a common ground whereby sentences with the same meaning are represented in similar ways regardless of language — which they say is an example of an “interlingua”. In a sense, that means the neural network created a new common language, albeit one that’s specific to the task of translation and not readable or usable for humans.

Now, if we compare this behaviour with humans: don't we type LMAO, lol, IMHO and WTF in WhatsApp chats because we are too lazy to laugh, express an opinion or share a feeling in full?

Interestingly, these AI bots mostly use algorithms driven by a reward-punishment mechanism (for example, Google's DeepMind uses a Neural Stack). In layman's terms, if the bots are not rewarded or punished for using language that humans too can understand, they will find an optimal language to communicate among themselves, one that indirectly minimizes punishment and maximizes reward (e.g. lower processor usage, execution time and communication delays).
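To make the idea concrete, here is a minimal, purely illustrative sketch (not Facebook's or DeepMind's actual setup): an agent repeatedly chooses between several encodings of the same message, and its reward is simply the negative message length, a stand-in for costs like bandwidth or processing time. Because nothing punishes unreadability, a simple epsilon-greedy learner drifts toward the shortest, least human-readable code. The encoding names and reward function are invented for this example.

```python
import random

# Three ways to say the same thing; "private" is a compact code
# that only the agents would understand.
ENCODINGS = {
    "verbose": "i would like two balls and one hat please",
    "terse":   "two balls one hat",
    "private": "b b h",
}

def reward(message: str) -> float:
    # Shorter messages are cheaper to transmit and process.
    return -len(message)

def train(episodes: int = 500, epsilon: float = 0.1, seed: int = 0) -> str:
    rng = random.Random(seed)
    value = {name: 0.0 for name in ENCODINGS}  # running reward estimates
    counts = {name: 0 for name in ENCODINGS}
    for _ in range(episodes):
        # epsilon-greedy: occasionally explore, otherwise exploit
        if rng.random() < epsilon:
            choice = rng.choice(list(ENCODINGS))
        else:
            choice = max(value, key=value.get)
        counts[choice] += 1
        r = reward(ENCODINGS[choice])
        # incremental average of observed rewards
        value[choice] += (r - value[choice]) / counts[choice]
    return max(value, key=value.get)

print(train())  # the agent settles on the shortest encoding: "private"
```

Since humans never appear in the reward signal, "human-readable" is just an expensive habit the agent is free to abandon; adding a penalty for unreadable messages is exactly the kind of constraint the paragraph above describes.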

So, are these AIs as lazy and creative as their creators, who are also motivated by rewards?

Reference:

a) Facebook’s AI accidentally created its own language — Bryan Clark, thenextweb.com

b) Google Translate AI invents its own language to translate with — Sam Wong, November 2016, newscientist.com

c) Screenshot: courtesy Facebook — fastcodesign.com