Eeek! Scary! Bots develop their own argot. Facebook AI Research (FAIR) tested two chatbots programmed to negotiate. In short order, they developed “their own creepy language”, in the words of the Telegraph, to trade their virtual balls, hats, and books. The “creepy” language was, to FAIR, only a repetitive divergence from standard English, which the chatbots weren’t limited to. That lack of restriction enabled them to develop their own argot to negotiate those trades more quickly. “Agents will drift off understandable language and invent codewords for themselves,” said Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research. “This isn’t so different from the way communities of humans create shorthands,” much as soldiers, stock traders, the slanguage of showbiz mag Variety, or teenagers do. Because Facebook’s interest is in AI bot-to-human conversation, FAIR added the requirement that the chatbots use standard English, which as it turns out is a handful for bots.
The danger in AI-to-AI divergence in language is that humans don’t yet have a translator for it, so we’d never quite understand what the bots are saying. Batra’s unsettling conclusion: “It’s perfectly possible for a special token to mean a very complicated thought. The reason why humans have this idea of decomposition, breaking ideas into simpler concepts, it’s because we have a limit to cognition.” So this shorthand can do the work of longhand? FastCompany/Co.Design’s Mark Wilson sees the upside: software talking its own language to other software could eliminate complex APIs (application program interfaces, which enable different types of software to communicate) by letting the software figure it out. But what about humans not being able to dig in and understand it readily? Something to think about as we use more and more AI in healthcare and predictive analytics.
Guest columnist Dr Vikrum (Sunny) Malhotra attended ATA 2015 last week. This is the second of three articles on his observations on trends and companies to watch.
During the ATA conference, I was inundated with the concept of “dumb” data, whereby biosensors track patient clinical data and alert clinical staff if readings fall outside designated parameters. However, the call center filter between the patient’s data and the physician is often a primary cause of increased unnecessary admissions. The Sentrian Remote Patient Intelligence Platform (Sentrian RPI) received recognition for its advancement in sensor utilization, enabling healthcare providers to take this “dumb” data and make it “smart”. For clinicians like myself, this was a new way of looking at an age-old problem: “How do we safely and comprehensively support physician decision making at a standard high enough to detect pathologies earlier and more accurately?”
Sentrian has used machine learning to support the work of a dedicated clinical team, monitoring patient data 24/7 and comparing the biometric patterns, medical histories, vitals, and health information of thousands of patients to detect subtle signs of future problems and warn a family member or care provider. This novel approach to remote monitoring won Sentrian the ATA President’s Innovation Award.
The New York eHealth Collaborative’s fourth annual Digital Health Conference is increasingly notable for combining local concerns (NYeC is one of the key coordinators of health IT for the state) with nationally significant content. A major focus of the individual sessions was data in all flavors: big, international, private, shared, and ethically used. Another was using this data to coordinate care and empower patients. Your Editor will focus on this, as reflected in the sessions she attended, along with thoughts from our two guest contributors, in Part 2 of this roundup.
The NYeC Conference was unique in presenting two divergent views of ‘Future IT’ and how it will affect healthcare delivery. One is a heady, optimistic vision of powerful patients taking control of their healthcare, personalized ‘democratized medicine’ and innovative, genetically powered ‘on demand medicine’. The other is a future of top-down healthcare: regulated, cost-controlled, analyzed, and constrained, with emphasis on standardizing procedures for doctors and hospitals, plus patient compliance.
First to Dr Topol in Monday’s keynote. The good side of people ‘wired’ to their phones is that it is symptomatic not of Short Attention Span Theatre, but of Moore’s Law: the time technology now takes to be adopted by at least 25 percent of the US population is declining by about 50 percent. That means comfort with the eight drivers he itemizes for democratizing medicine and empowering the patient: sensors, labs, imaging, physical examination, records, costs, meds, and ‘Uber Doc’.