Thursday, April 27, 2017

Conscious AI as a Feature, Not a Bug


I really like the SF show Humans and its depiction of an alternate present-day UK where in addition to iPhones and super-fast internet service, humanoid “synths” have come into widespread use as servants, workers, caretakers, and companions. The basic premise is that while these robots are sufficiently human-like to engage in conversation and even sex (complete with body warmth and fluids, apparently), they are “just machines,” and people are not supposed to consider them as “persons,” though many certainly develop relationships with them, as humans do even with dogs and cats. And as with pets (and people), some humans will abuse their synths.

The big plot driver in this show (spoiler alert!) is that some of the synths have secretly been “upgraded” to possess consciousness and emotions, and when some far-fetched circumstances lead to this upgrade being pushed over the network to all synths, we have the makings of an uprising. That’s where they leave us hanging at the end of season 2, with thousands of synths “waking up” and abandoning their dreary posts as gardeners or whatever. There's a lot to swallow to really enjoy this show, but the writing and characters are good, and they manage to earn my suspension of disbelief most of the time. I'm looking forward to season 3 (I hope it's renewed). 

What always bothered me about the show is that despite their stilted speech and their claims not to understand many “human things,” normal synths function at such a high level that it’s hard to imagine they are not self-aware, above and beyond whatever technical self-diagnostic systems they may have (so they know when to recharge their batteries and can recognize when another synth is not broadcasting as it apparently should). Their ability to converse smoothly, navigate messy home and family environments, read human emotional states and anticipate needs, and explain why they do things (when asked)… all of this suggests they are much more than “mere machines.” But would that mean they are “conscious”? And what does that even mean?

This Nautilus article by a Japanese neuroscientist and AI researcher delves into this: “We Need Conscious Robots: How introspection and imagination make robots better” by Ryota Kanai. He emphasizes that something like consciousness, or at least self-awareness, will be needed to allow AI systems to explain their “reasoning,” decisions, and actions to people, so that people can feel more confident in and safer with these entities. But he suggests a more immediate need for such awareness: coping with the simple, everyday delays in their interactions with people and objects caused by distractions or other factors. Sometimes I forget why I walked down to the basement or that I put my coffee cup in the microwave, but most of the time, I “know what I’m doing,” at least over a brief time period. This seemingly simple knowledge is connected to consciousness. As Kanai writes:
In fact, even our sensation of the present moment is a construct of the conscious mind. We see evidence for this in various experiments and case studies. Patients with agnosia who have damage to object-recognition parts of the visual cortex can’t name an object they see, but can grab it. If given an envelope, they know to orient their hand to insert it through a mail slot. But patients cannot perform the reaching task if experimenters introduce a time delay between showing the object and cuing the test subject to reach for it. Evidently, consciousness is related not to sophisticated information processing per se; as long as a stimulus immediately triggers an action, we don’t need consciousness. It comes into play when we need to maintain sensory information over a few seconds.
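To make that last point concrete for myself (this is my own toy illustration in Python, not anything from the article), the difference is roughly this: a purely reactive agent can only act on a stimulus it is receiving at that instant, while an agent that holds recent sensory input in a short buffer can still act correctly after a delay, like the patients in the reaching task when no delay is introduced.

import time
from collections import deque

class ReactiveAgent:
    """Can only act on a stimulus it is receiving at this very moment."""
    def act(self, stimulus):
        return f"reach for the {stimulus}" if stimulus else "do nothing"

class BufferedAgent:
    """Holds recent stimuli in a short 'working memory' buffer."""
    def __init__(self, retention_seconds=3.0):
        self.retention = retention_seconds
        self.buffer = deque()            # (timestamp, stimulus) pairs

    def perceive(self, stimulus):
        self.buffer.append((time.time(), stimulus))

    def act(self):
        # Forget anything older than the retention window, then act on
        # the most recent stimulus still "held in mind."
        now = time.time()
        while self.buffer and now - self.buffer[0][0] > self.retention:
            self.buffer.popleft()
        return f"reach for the {self.buffer[-1][1]}" if self.buffer else "do nothing"

buffered = BufferedAgent()
buffered.perceive("envelope")            # the envelope is shown...
time.sleep(1.0)                          # ...then a delay before the cue to act
print(buffered.act())                    # still "reach for the envelope"
print(ReactiveAgent().act(None))         # stimulus gone by cue time: "do nothing"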
He also talks about the need for some level of “desire” or curiosity in robots or other AI systems to avoid humans needing to spell out every detail of the simplest request. One aspect of this is “counterfactual information generation” (i.e., thinking about or modeling past or future situations, not only the here-and-now). Kanai writes, “We call it ‘counterfactual’ because it involves memory of the past or predictions for unexecuted future actions, as opposed to what is happening in the external world. And we call it ‘generation’ because it is not merely the processing of information, but an active process of hypothesis creation and testing.” He gives an example of one of their test AI agents learning to drive around a simulated landscape and deciding that climbing a hill would be a useful problem to solve in order to drive the most efficient route (without being taught or specifically asked to do this, as would normally be needed).
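Here’s another toy sketch of my own (definitely not Kanai’s actual agent; the hill-climbing bit is only loosely borrowed, and every name in it is made up): one crude way to read “counterfactual generation” is an agent that imagines the outcomes of actions it hasn’t actually taken, using a simple forward model, and gives a curiosity bonus to any action whose outcome it can’t yet predict.

import random

class ForwardModel:
    """Remembers (state, action) -> outcome pairs it has actually experienced."""
    def __init__(self):
        self.memory = {}

    def predict(self, state, action):
        return self.memory.get((state, action))    # None means never tried

    def update(self, state, action, outcome):
        self.memory[(state, action)] = outcome

def choose_action(model, state, actions, task_value, curiosity_weight=1.0):
    scored = []
    for a in actions:
        imagined = model.predict(state, a)          # the "counterfactual" rollout
        novelty = 1.0 if imagined is None else 0.0  # untried actions are interesting
        value = task_value.get(imagined, 0.0)       # value of the imagined outcome
        scored.append((value + curiosity_weight * novelty, a))
    best_score = max(score for score, _ in scored)
    return random.choice([a for score, a in scored if score == best_score])

model = ForwardModel()
model.update("valley", "drive_around", "long_route")   # the only thing it has tried
task_value = {"long_route": 0.2, "short_route": 1.0}
print(choose_action(model, "valley", ["drive_around", "climb_hill"], task_value))

Nothing in the task asks the agent to try climbing the hill; it picks “climb_hill” only because it has never imagined that outcome, which is a cartoon version of the curiosity Kanai describes.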

In the context of my home, this makes me think about how our aging dog Gracie always wants to go upstairs to sleep in our bedroom during the day, but we keep the gate closed at the bottom to limit her stair-climbing due to her arthritis. She will sometimes push open a loosely-closed door but has never tried to pull open the loosely-closed baby gate (if she ever learned this, we would just have to keep the gate latched). If we had a Humans-type “synth” and I wanted it to go upstairs and get me my wallet, it would have to know that if the gate or bedroom door were closed, or if something on the stairs were blocking access, it should open the gate or door or move the object. That could be some simple logic programming, I suppose (if the door is closed, open it, unless it’s locked, or something like that), but the more human-friendly approach would be for the synth to remember and “want to” complete the goal, independently solving any minor sub-problems along the way.
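Just for fun, here’s a rough sketch of the difference I mean (entirely hypothetical, with made-up obstacle names): instead of hard-coding every “if the door is closed…” case, the synth keeps its goal on a little stack and pushes whatever sub-goals it runs into, opening the gate, moving a box, until the original goal is done.

# Made-up obstacle names and remedies, purely for illustration.
obstacles = ["gate_closed", "box_on_stairs", "bedroom_door_closed"]

remedies = {
    "gate_closed": "open the gate",
    "box_on_stairs": "move the box",
    "bedroom_door_closed": "open the bedroom door",
}

def fetch(goal, obstacles):
    goals = [goal]                  # a goal stack: the top item is what we work on now
    while goals:
        current = goals[-1]
        blocker = obstacles[0] if obstacles else None
        if current == goal and blocker:
            goals.append(remedies[blocker])   # push a sub-goal, but keep the main goal
        else:
            print("doing:", current)
            if current != goal:
                obstacles.pop(0)              # the sub-goal cleared that obstacle
            goals.pop()                       # sub-goal (or the final goal) is done

fetch("bring the wallet downstairs", obstacles)
# doing: open the gate
# doing: move the box
# doing: open the bedroom door
# doing: bring the wallet downstairs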

Kanai writes in conclusion: 
If we consider introspection and imagination as two of the ingredients of consciousness, perhaps even the main ones, it is inevitable that we eventually conjure up a conscious AI, because those functions are so clearly useful to any machine. We want our machines to explain how and why they do what they do. Building those machines will exercise our own imagination. It will be the ultimate test of the counterfactual power of consciousness.
This makes sense to me. If we are to interact comfortably with future robots or other AI systems, it will be helpful if they can maintain a "mental model" of our household, workplace, or other relevant environments, not so they can feel good or bad about themselves, or fall in love or whatever, but because these are things we unconsciously expect in social interactions. Simpler systems or apps, even voice-driven ones like Siri and Amazon's Alexa, can get by with being strictly transactional, telling me the weather or playing me some Talking Heads as soon as I ask. But conversation will be a lot smoother, and behavior more predictable, if these systems have at least some level of self- and other-awareness and some ability to learn how things work around here. We can decide later whether this is the same as what we call "consciousness," but it is certainly like it in some ways. As AI systems improve, they will behave more and more like conscious entities, whether they are or not.

Then of course we can have that long-anticipated war between the humans and the machines. May the best entity win. But would you mind getting me my slippers first? 

----

Nautilus is a great web-based science magazine that features essays by various writers, often touching on the societal aspects of science and technology. There's a theme for each month's issue to which the essays are at least loosely tied. This month it's consciousness.