Are AI’s unknown workings–fed by humans–creating intellectual debt we can’t pay off?

Financial debt shifts control—from borrower to lender, and from future to past. Mounting intellectual debt may shift control, too. A world of knowledge without understanding becomes a world without discernible cause and effect, in which we grow dependent on our digital concierges to tell us what to do and when.

Debt theory and AI. This Editor never thought of learning exactly how something works as paying down a kind of intellectual debt on what Donald Rumsfeld called ‘known unknowns’: we know it works, but not exactly how. That is true of many drugs (aspirin) and some medical treatments (deep brain stimulation for Parkinson’s, and the much older electroconvulsive therapy for some psychiatric conditions), but rarely of engineering or of the fuel pump in your car.

Artificial intelligence (AI) and machine learning aren’t supposed to be that way. We’re supposed to be able to control the algorithms, make the rules, and understand how they work. Or so we’ve been told. Except, of course, that is not how machine learning and AI work. Crunching massive blocks of data produces statistical correlations, which is of course a valid method of analysis. But as I learned in political science, statistics, sports, and high school physics, correlation is not causality, nor is it necessarily correct or predictive. What is missing is the reason why behind the answers these systems provide, and those answers can be corrupted simply by feeding in bad data, whether through lack of judgment or intent to defraud.
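A toy example makes the point. The two series below are purely hypothetical (the names and numbers are this Editor’s illustration, not from either article): they share nothing but an upward trend, yet a statistical learner would find them almost perfectly correlated.

```python
import numpy as np

# Two independent series that share only an upward trend.
# Neither causes the other; both would track some lurking third
# factor (say, sunny weather).
rng = np.random.default_rng(42)
t = np.arange(100)

ice_cream_sales = 50 + 2.0 * t + rng.normal(0, 5, size=t.size)
sunburn_cases = 10 + 1.5 * t + rng.normal(0, 5, size=t.size)

r = np.corrcoef(ice_cream_sales, sunburn_cases)[0, 1]
print(f"Pearson r = {r:.3f}")  # close to 1.0, yet no causation at all
```

A model fed only these two columns has no way to know that the correlation is an artifact of a common trend, and no way to explain itself when asked why.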

Bad or flawed data tend to accumulate and feed on themselves, to the point where someone checking cannot pinpoint where the logic fell off the rails, or validate it at all. We also ascribe to AI (and to machine learning, in its very name) actual learning and self-validation, neither of which is real.
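That compounding is easy to simulate. The sketch below is this Editor’s illustration, not anything from the articles: each ‘generation’ of labels is copied from the last with a small error rate, and after a few rounds the original signal is badly eroded, with nothing in the final dataset marking which entries drifted.

```python
import random

random.seed(0)

def retrain(labels: list[int], error_rate: float = 0.05) -> list[int]:
    """Copy the previous generation's labels, flipping a small fraction."""
    return [1 - y if random.random() < error_rate else y for y in labels]

truth = [1] * 1000  # ground truth: every label should be 1
labels = truth
for generation in range(1, 21):
    labels = retrain(labels)
    accuracy = sum(a == b for a, b in zip(labels, truth)) / len(truth)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: {accuracy:.1%} still correct")
```

After twenty generations the labels drift toward a coin flip, and an auditor looking only at the final dataset cannot tell which entries were ever right.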

There are other dangers, as in image recognition (and this Editor would add, in LiDAR used in self-driving vehicles):

Intellectual debt accrued through machine learning features risks beyond the ones created through old-style trial and error. Because most machine-learning models cannot offer reasons for their ongoing judgments, there is no way to tell when they’ve misfired if one doesn’t already have an independent judgment about the answers they provide.

and

As machines make discovery faster, people may come to see theoreticians as extraneous, superfluous, and hopelessly behind the times. Knowledge about a particular area will be less treasured than expertise in the creation of machine-learning models that produce answers on that subject.

How we repair the balance sheet is not answered here, but the problem is certainly well outlined. The Hidden Costs of Automated Thinking (New Yorker)

And how that AI system actually gets those answers might give you pause. Yes, there are thousands of humans all over the world, with no special expertise or medical knowledge, being trained to feed the AI Beast. The parlance is data labeling, data annotation, or ‘ghost work’ (from the book of the same name), and the material includes medical, pornographic, commercial, and grisly crime images. Besides the mind-numbing repetitiveness, there are instances of PTSD related to the images and real concerns about the personal data being shared, stored, and used for medical diagnosis. A.I. Is Learning from Humans. Many Humans. (NY Times)

IBM gives sensor-based in-home behavioral tracking a self-driving car ‘spin’ in the UK with Cera Care

In-home behavioral tracking of older adults, a significant portion of telecare from circa 2007 until a few years ago, may be getting a new lease on life. The technology in this round is the same as what guides self-driving vehicles: LiDAR, or Light Detection and Ranging, which uses pulses of laser light to map movement and surroundings.
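At its core, LiDAR ranging is a time-of-flight calculation; the sketch below is a simplification, as real sensors also handle beam steering, filtering, and multiple returns per pulse.

```python
# Time-of-flight ranging, the principle behind LiDAR: a laser pulse
# travels to an object and back, so distance is
# (speed of light x round-trip time) / 2.
SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def range_from_pulse(round_trip_seconds: float) -> float:
    """Distance in metres from a pulse's round-trip time."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

# A return after ~33 nanoseconds corresponds to roughly 5 metres,
# about the scale of a room.
print(f"{range_from_pulse(33e-9):.2f} m")
```

Sweeping such pulses across a room many times a second yields a moving ‘point cloud’ of the occupant, without the identifiable detail of a camera image.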

In this model, IBM Research will use the LiDAR data and its machine learning to establish normal patterns and to spot behaviors that may indicate a potentially dangerous condition or situation. The LiDAR pilot will run in 10-15 households in the UK starting in June. IBM is partnering with early-stage UK home care company Cera Care on reporting and on linking care staff to alerts about behavior changes that may predict a more acute condition.
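Neither IBM nor Cera Care has published its model, but the general baseline-and-alert approach can be sketched simply: learn a resident’s normal range for a behavior, then flag days that fall well outside it. The behavior, numbers, and threshold below are this Editor’s illustrative assumptions, not the pilot’s actual method.

```python
import statistics

def flag_anomaly(history: list[float], today: float, z_threshold: float = 3.0) -> bool:
    """Flag today's count if it is more than z_threshold standard
    deviations from the resident's own baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against a perfectly flat history
    return abs(today - mean) / stdev > z_threshold

# Hypothetical data: nightly bathroom visits over two weeks, then a
# spike of the kind that can precede a more acute condition.
baseline = [2, 1, 2, 2, 3, 2, 1, 2, 2, 2, 3, 2, 2, 1]
print(flag_anomaly(baseline, today=7))  # True: worth an alert to care staff
```

The hard part, as the intellectual-debt discussion above suggests, is not raising the alert but explaining it: a care worker still needs an independent judgment about whether the flagged change means anything.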

Many of the privacy issues that dogged predictive behavioral telemonitoring via networked infrared motion sensors, as well as in-home cameras, are present with LiDAR monitoring. Unlike in 2007, five states now have ‘nanny cam’ laws that prohibit cameras within skilled nursing facilities without patient consent (Senior Housing News). Another issue: expense. LiDAR sensor setups cost up to $1,000 each, and at least one per room is needed. Far cheaper setups are available from the Editor’s long-ago former company, QuietCare (if one can still purchase them for the home from Care Innovations), as well as from Alarm.com and the UK’s Hive Link; Google may also get into the act with its Nest connected home tech.

Senior housing may open up a new market for LiDAR, which is wilting in the autonomous vehicle (AV) area, where it has proven to be rather buggy on real roads with real drivers. Certainly the housing and care market is growing and destined to be huge, with the worldwide over-60 population growing from 900 million in 2015 to 2 billion in 2050, while the pool of for-hire caregivers shrinks by the millions. Business Insider, Reuters