AI good, AI bad. Perhaps a little of both?

Everyone’s getting hot ‘n’ bothered about AI this summer. There’s a clash of giants–Elon Musk, who makes expensive, Federally subsidized electric cars which don’t sell, and Mark Zuckerberg, a social media mogul who fancies himself a social policy guru–in a current snipe-fest about AI and the risks it presents. Musk, a founder of the big-name Future of Life Institute, which ponders AI safety and the ethical alignment of AI toward beneficial ends, and Zuckerberg, who pooh-poohs any downside, are making their debate points and a few headlines. However, we like to get down to concretes, so here we turn to an analysis of a Forrester Research report on AI in the workforce. No, we are not about to lose our jobs–yet–but hold on for the top six in the view of Gil Press in Forbes:

  1. Customer self-service in customer-facing physical solutions such as kiosks, interactive digital signage, and self-checkout.
  2. AI-assisted robotic process automation which automates organizational workflows and processes using software bots.
  3. Industrial robots that execute tasks in verticals with heavy, industrial-scale workloads.
  4. Retail and warehouse robots.
  5. Virtual assistants like Alexa and Siri.
  6. Sensory AI that improves computers’ recognition of human sensory faculties and emotions via image and video analysis, facial recognition, speech analytics, and/or text analytics.
For our area of healthcare technology, look at #5 and #6 first–virtual assistants leveraging the older adult market, like 3rings‘ interface with Amazon Echo [TTA 27 June], and sensory AI for recognition tools with broad applications in everything from telehealth to sleepytime music to video cheer-up calls. Both are on a ‘significant success’ track and in line to hit the growth phase in one to three years (see the illustration in Forrester’s report).

Will AI destroy a net 7 percent of US jobs by 2027? Will AI affect only narrow areas or disrupt everything? And will we adapt fast enough? 6 Hot AI Automation Technologies Destroying And Creating Jobs (Forbes)

But we can de-stress ourselves with AI-selected music now to soothe our savage interior beasts. This Editor is testing Sync Project’s Unwind, which promises to help me get to sleep (20 min) and take stress breaks (5 min). Clutching my phone (not my pearls) to my chest, the app (available on the website) detects my heart rate through machine learning (though it doesn’t give me a reading) and offers four options for rating exactly how stressed I am. It then plays music with the right beat pattern to calm me down. Other Sync Project applications, with custom music by Marconi Union and a Spotify interface, have worked to alleviate pain, stress, sleep problems, and Parkinson’s gait issues. Another approach applies music to memory issues around episodic memory and the encoding of new verbal material in normally aging adults. (Zzzzzzzz…..) Apply.sci, Sync Project blog
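Sync Project hasn’t published its algorithm, but the general idea described above–start the music near the listener’s current heart rate and ease the tempo down toward a resting pulse, more gradually the more stressed you say you are–can be sketched roughly as follows. Every name and number here is an illustrative assumption, not the app’s actual parameters:

```python
# Illustrative sketch only (NOT Sync Project's actual algorithm): map a
# measured heart rate plus a self-reported stress level (1-4) to a target
# music tempo for each step of a wind-down session. The music starts at
# the listener's heart rate and entrains gradually toward a resting tempo.

RESTING_BPM = 60  # assumed tempo for a fully relaxed state


def target_tempo(heart_rate_bpm: float, stress_level: int,
                 step: int = 0, total_steps: int = 10) -> float:
    """Tempo at a given step of the session.

    Higher stress levels stretch the descent out over more steps, so the
    tempo falls more gradually for a more stressed listener.
    """
    if not 1 <= stress_level <= 4:
        raise ValueError("stress_level must be 1-4")
    # Fraction of the descent completed; higher stress -> slower progress.
    progress = min(1.0, step / (total_steps * stress_level / 2))
    return heart_rate_bpm + (RESTING_BPM - heart_rate_bpm) * progress
```

A session would simply call `target_tempo` at each step and pick (or generate) music near that tempo; the heart-rate detection itself is the hard, ML-driven part the sketch takes as given.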

Behave, Robot! DARPA researchers teaching them some manners.

Weekend Reading While AI is hotly debated and the Drudge Report features daily the eeriest pictures of humanoid robots, the hard work of determining social norms and programming them into robots continues. DARPA-funded researchers at Brown and Tufts Universities are working “to understand and formalize human normative systems and how they guide human behavior, so that we can set guidelines for how to design next-generation AI machines that are able to help and interact effectively with humans,” in the words of DARPA program manager Reza Ghanadan. ‘Normal’ people spot ‘norm violations’ quickly (they must not live in NYC), so to prevent robots from crashing into walls or behaving unethically towards humans (see Isaac Asimov’s Three Laws of Robotics), higher-level robots will eventually have the capacity to learn, represent, activate, and apply a large number of norms to situational behavior. Armed with Science

This directly relates to self-driving cars, which are supposed to solve all sorts of problems from road rage to traffic jams. It turns out that they cannot live up to the breathless hype of Elon Musk, Google, and their ilk, even over the longer term. Sequencing on roadways? We don’t have high-accuracy GPS like the Galileo system yet. Rerouting? Eminently hackable and spoofable, as Waze has been. Does the car see obstacles, traffic signals, and people clearly? Can it make split-second decisions? Can it anticipate the behavior of other drivers? Can it cope with mechanical failure? No better, and often worse, at present than humans. And self-drivers will be a bonanza for trial lawyers, as car companies and dealers will be added to the defendant list alongside insurers and owners. While the technology could give mobility to older, vision-impaired, and disabled people, it could also be used to restrict freedom of movement. Why not simply incorporate many of these assistive features into conventional cars, as some already have been? An intelligent analysis–and read the comments. Problems and Pitfalls in Self-Driving Cars (American Thinker)

Robot-assisted ‘smart homes’ and AI: the boundary between supportive and intrusive?

Something that has been bothersome to Deep Thinkers (and Not-So-Deep Thinkers like this Editor) is the almost-forced loss of control inherent in discussions of AI-powered technology. There is an elitist Wagging of Fingers that generally accompanies the Inevitable Questions and Qualms:

  • If you don’t think 100 percent self-driving cars are an Unalloyed Wonder, as Elon Musk and Google tell you, you’re a Luddite
  • If you have concerns about nanny tech or smart homes which can spy on you, you’re paranoid
  • If you are concerned that robots will take the ‘social’ out of ‘social care’, likely replace human carers, or cost your neighbor their job, you are not with the program

The reason why is likely the one I led with: loss of control. Control does not motivate just Control Freaks. Think about the decisions you like versus the ones you don’t. Think about how helpless you felt as a child or teenager when big decisions were made without any of your input. It goes that deep.

In the smart home, robotic/AI world then, who has the control? Someone unknown, faceless, well meaning but with their own rationale? (Yes, those metrics–quality, cost, savings) Recall ‘Uninvited Guests’, the video which demonstrated that Dad Ain’t Gonna Take Nannying and is good at sabotage.

Let’s stop and consider: what are we doing? Where are we going? What fills the need for assistance and care, yet retains that person’s human autonomy and that old term…dignity? Maybe they might even like it? For your consideration:

How a robot could be grandma’s new carer (plastic dogs to the contrary in The Guardian)

AI Is Not out to Get Us (Scientific American)

Hat tip on both to reader Malcolm Fisk, Senior Research Fellow (CCSR) at De Montfort University via LinkedIn

Your Friday superintelligent robot fix: the disturbing consequences of ultimate AI

Our own superintelligent humans–Elon Musk (Tesla), Steve Wozniak (Apple), Bill Gates (Microsoft) and Stephen Hawking–are converging on artificial intelligence, not just everyday, pedestrian robotics, but the kind of AI superintellect that could make pets out of people–if we are lucky. In his interview with the Australian Financial Review, the Woz (now an Australian resident) quipped: ‘Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on?’