Are AI’s unknown workings, fed by humans, creating intellectual debt we can’t pay off?

Financial debt shifts control—from borrower to lender, and from future to past. Mounting intellectual debt may shift control, too. A world of knowledge without understanding becomes a world without discernible cause and effect, in which we grow dependent on our digital concierges to tell us what to do and when.

Debt theory and AI. This Editor had never thought of learning exactly how something works as a kind of intellectual paydown of debt on what Donald Rumsfeld called ‘known unknowns’: we know that it works, but not exactly how. That is true of many drugs (aspirin) and some medical treatments (deep brain stimulation for Parkinson’s, and the much older electroconvulsive therapy for some psychiatric conditions), but it is rarely true in engineering, or of the fuel pump on your car.

Artificial intelligence (AI) and machine learning aren’t supposed to be that way. We’re supposed to be able to control the algorithms, make the rules, and understand how they work. Or so we’ve been told. Except, of course, that is not how machine learning and AI work. Crunching massive blocks of data yields statistical correlations, which is a valid method of analysis. But as I learned in political science, statistics, sports, and high school physics, correlation is not causality, nor is it necessarily correct or predictive. What is missing is the reasoning behind the answers these systems provide, and both the correlations and the answers can be corrupted simply by feeding in bad data, whether through lack of judgment or through intent to defraud.
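To make the correlation-is-not-causality point concrete, here is a minimal sketch using made-up numbers (the series names are purely illustrative): two series that never influence each other still correlate almost perfectly, because both happen to track a hidden third factor, a shared seasonal trend.

```python
# Hidden common cause: a simple rising trend (think: summer heat).
trend = list(range(10))
ice_cream_sales = [t * 2 + 1 for t in trend]  # hypothetical series A, driven by the trend
drownings = [t * 3 + 5 for t in trend]        # hypothetical series B, also driven by the trend

def pearson(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Correlation is essentially perfect (~1.0), yet neither series causes the other.
print(pearson(ice_cream_sales, drownings))
```

A model trained only on these two columns would confidently "learn" that one predicts the other, and it would be right about the prediction and wrong about the world, which is exactly the kind of unexamined answer that accrues intellectual debt.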

Bad or flawed data tend to accumulate and feed on themselves, to the point where someone checking cannot tell where the logic fell off the rails, or actually validate it at all. We also ascribe to AI, and to machine learning in its very name, actual learning and self-validation, neither of which is real.

There are other dangers, as in image recognition (and this Editor would add, in the LIDAR used in self-driving vehicles):

Intellectual debt accrued through machine learning features risks beyond the ones created through old-style trial and error. Because most machine-learning models cannot offer reasons for their ongoing judgments, there is no way to tell when they’ve misfired if one doesn’t already have an independent judgment about the answers they provide.

and

As machines make discovery faster, people may come to see theoreticians as extraneous, superfluous, and hopelessly behind the times. Knowledge about a particular area will be less treasured than expertise in the creation of machine-learning models that produce answers on that subject.

How we fix the balance sheet is not answered here, but the problem is certainly well outlined. The Hidden Costs of Automated Thinking (New Yorker)

And how those AI systems actually get their answers might give you pause. Yes, there are thousands of humans all over the world, with no special expertise or medical knowledge, being trained to feed the AI Beast. The parlance is data labeling, data annotation, or ‘Ghost Work’ (from the book of the same name), and the material includes medical, pornographic, commercial, and grisly crime images. Besides the mind-numbing repetitiveness, there are instances of PTSD related to the images, and real concerns about the personal data being shared, stored, and used for medical diagnosis. A.I. Is Learning from Humans. Many Humans. (NY Times)

