AI good, AI bad (part 2): the Facebook bot dialect scare

Eeek! Scary! Bots develop their own argot. Facebook AI Research (FAIR) tested two chatbots programmed to negotiate. In short order, they developed “their own creepy language”, in the words of the Telegraph, to trade their virtual balls, hats, and books. What the Telegraph found “creepy” was, to FAIR, only a repetitive ‘divergence from English’, since the chatbots weren’t limited to standard English. The lack of restriction enabled them to develop their own argot to negotiate those trades quickly. “Agents will drift off understandable language and invent codewords for themselves,” said Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research. “This isn’t so different from the way communities of humans create shorthands,” like soldiers, stock traders, the slanguage of showbiz mag Variety, or teenagers. Because Facebook’s interest is in AI bot-to-human conversation, FAIR added the requirement that the chatbots use standard English, which, as it turns out, is a handful for bots.

The danger in AI-to-AI divergence in language is that humans don’t have a translator for it yet, so we’d never quite understand what they are saying. Batra’s unsettling conclusion: “It’s perfectly possible for a special token to mean a very complicated thought. The reason why humans have this idea of decomposition, breaking ideas into simpler concepts, it’s because we have a limit to cognition.” So this shorthand can look like longhand? FastCompany/Co.Design’s Mark Wilson sees the upside–that software talking its own language to other software could eliminate complex APIs (application program interfaces, which enable different types of software to communicate) by letting the software figure it out. But what about humans not being able to dig in and understand it readily? Something to think about as we use more and more AI in healthcare and predictive analytics.

NYeC Digital Health Conference 2013: the trends

Updated 21 November

The third annual New York eHealth Collaborative (NYeC) Digital Health Conference in New York City attracted several hundred people from the worlds of hospitals, public health, academia, policy making and health insurance–and the myriad related products and services which will enable these entities to improve their health IT and organization, and engage patients in their own health. If there were three buzzword phrases setting the tone, they were interoperability, patient portals and technological innovation. All relate to data: transferring patient records between providers so they are available regionally (through RHIOs) and throughout the state via the SHIN-NY health information exchange (HIE); using data to help people visualize and improve their health; putting data into ‘whole person’ context for providers, integrating it into workflows and using it to save lives; and using data to serve process improvement and tougher standards. And finally there is that old devil cost: reducing the cost of care, reducing expensive readmissions plus co-morbidities, and making the tools to do this job more affordable for providers and patients.

NYeC has developed considerably since its early days seven years ago