"There was no reward to sticking to English language", explained Dhruv Batra, a researcher at Facebook AI Research.
Facebook now requires its bots to speak in plain English. Reports have been flying around of the robots creating their own sinister coded language, along with incomprehensible snippets of intriguing exchanges between the two of them. In fact, Facebook researchers found that chatbots trained to negotiate will sometimes invent unusual new ways of using language to improve their odds of success. The experiment still served its goal: the bots ended up developing very human-like strategies for negotiating what they wanted. Sentences like "Balls have zero to me to me to me to me to me to me to me to me to" were unintelligible to Batra and his peers, but to the conversing AIs they were a shortcut to faster mutual understanding during the negotiation process.
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else
Bob: you i i i i i everything else

The researchers subsequently changed their approach, as their main interest was in developing bots that could talk to people, which means the conversation needs to be legible.
Although the shocked researchers at Facebook Artificial Intelligence Research (FAIR) were able to halt the activity by shutting the system down, this development brings the fantasy of humans losing control of intelligent technology a step closer to reality.
Some say the problem is not that AI systems are trying to create their own language, but that humans are unwilling to learn it. The matter is still up for debate, with some believing machines should be allowed to develop their own languages, independent of complicated human ones, to make things more efficient.
Until now, most chatbots have only been able to hold short conversations and perform simple tasks, such as booking a restaurant table.
In any case, the obsession with bots "inventing a new language" misses the most notable part of the research: when taught to behave like humans, the bots learned to lie, even though the researchers never trained them to use that negotiating tactic. The flip side, though, is that we then can't really understand what they're discussing.