I really like the SF show Humans and its depiction of an alternate present-day UK where, in addition to iPhones and super-fast internet service,
humanoid “synths” have come into widespread use as servants, workers,
caretakers, and companions. The basic premise is that while these robots are
sufficiently human-like to engage in conversation and even sex (complete with
body warmth and fluids, apparently), they are “just machines,” and people are
not supposed to consider them as “persons,” though many certainly develop
relationships with them, as humans do even with dogs and cats. And as with pets
(and people), some humans will abuse their synths.
The big plot driver in this show (spoiler alert!) is that some of the synths
have secretly been “upgraded” to possess consciousness and emotions, and when
some far-fetched circumstances lead to this upgrade being pushed over the
network to all synths, we have the makings of an uprising. That’s where they
leave us hanging at the end of season 2, with thousands of synths “waking up”
and abandoning their dreary posts as gardeners or whatever. There’s a lot to swallow to really enjoy this show, but the writing and characters are good, and they earn my suspension of disbelief most of the time. I’m looking forward to season 3 (I hope it gets renewed).
What always bothered me about the show is that despite their stilted speech and claims not to understand many “human things,” normal synths function at such a high level that it’s hard to imagine they are not self-aware, above and beyond whatever technical self-diagnostic systems they may have (so they know when to recharge their batteries and can recognize when another synth is not broadcasting as it apparently should). Their ability to converse smoothly, to navigate messy home and family environments, to read human emotional states and anticipate needs, and to explain why they do things (when asked) suggests they are much more than “mere machines.” But would this mean they are “conscious”? What does that even mean?
This Nautilus article by a Japanese neuroscientist and AI
researcher delves into this: “We Need Conscious Robots: How introspection and
imagination make robots better” by Ryota Kanai. He emphasizes that
something like consciousness, or at least self-awareness, will be needed to allow AI systems to explain their “reasoning,” decisions, and actions to people, so that people can feel more confident in and safer around these entities. But he suggests a more immediate need for such awareness: to cope with the simple, common delays that distractions and other factors introduce into a robot’s interactions with people and objects. Sometimes I forget why I walked down to the basement, or that I put my coffee cup in the microwave, but most of the time I “know what I’m doing,” at least over a brief time period. This seemingly simple knowledge is connected to consciousness. As Kanai writes:
In fact, even our sensation of the present moment is a construct of the conscious mind. We see evidence for this in various experiments and case studies. Patients with agnosia who have damage to object-recognition parts of the visual cortex can’t name an object they see, but can grab it. If given an envelope, they know to orient their hand to insert it through a mail slot. But patients cannot perform the reaching task if experimenters introduce a time delay between showing the object and cuing the test subject to reach for it. Evidently, consciousness is related not to sophisticated information processing per se; as long as a stimulus immediately triggers an action, we don’t need consciousness. It comes into play when we need to maintain sensory information over a few seconds.
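Kanai’s delayed-reach example maps onto code surprisingly directly. Here is a minimal Python sketch of the idea, with every name and number invented for illustration (none of this comes from the article): a working-memory buffer holds a percept for a few seconds after the stimulus disappears, so a delayed cue can still trigger the right action, while a purely reactive system would have nothing left to act on.

```python
import time

class WorkingMemory:
    """Hold a recent percept for a few seconds, like the delayed-reach task."""

    def __init__(self, retention_seconds=5.0):
        self.retention = retention_seconds
        self.percept = None        # e.g., ("envelope", 30) for a tilted envelope
        self.timestamp = 0.0

    def store(self, percept):
        self.percept = percept
        self.timestamp = time.monotonic()

    def recall(self):
        # A purely reactive system has nothing once the stimulus is gone;
        # a system with working memory can still act, until the trace decays.
        if self.percept and time.monotonic() - self.timestamp < self.retention:
            return self.percept
        return None

# Stimulus shown, then removed; the cue to act arrives two seconds later.
memory = WorkingMemory()
memory.store(("envelope", 30))     # saw an envelope tilted 30 degrees
time.sleep(2)                      # delay between stimulus and cue
target = memory.recall()
if target is not None:
    print(f"Orient hand to {target[1]} degrees and reach for the {target[0]}")
else:
    print("Trace gone: like the agnosia patients, no delayed reach")
```

The retention window is the whole trick: drop it to zero and you get the patients’ pattern, able to act on a stimulus immediately but not after a delay.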
In the context of my home, this makes me think of our aging dog Gracie, who always wants to go upstairs to sleep in our bedroom during the day; we keep the gate closed at the bottom to limit her stair-climbing because of her arthritis. She will sometimes push open a loosely-closed door, but she has never tried to pull open the loosely-closed baby gate (if she ever learns that trick, we will just have to keep the gate latched). If we had a Humans-type “synth” and I wanted it to go upstairs and get me my wallet, it would have to know that if the gate or bedroom door were closed, or if something on the stairs were blocking access, it should open the gate or door or move the object. That could be some simple logic programming, I suppose (if door closed, open it, unless it’s locked, or something), but the more human-friendly approach would be to remember and “want to” complete the goal, independently solving any minor sub-problems along the way, something like the sketch below.
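Here is what I mean, as a toy Python sketch (the world state, the obstacles, and the action names are all made up): the reactive version would be one brittle if-chain, while this goal-directed version keeps the goal alive and clears whatever currently stands in the way, re-checking the world on each pass.

```python
# Hypothetical world state and actions; none of this is a real robot API.
world = {
    "gate": "closed",
    "bedroom_door": "closed",
    "stairs_blocked_by": "laundry basket",
}

def obstacles(world):
    """Return whatever currently stands between the synth and the bedroom."""
    found = []
    if world["gate"] == "closed":
        found.append(("open", "gate"))
    if world["stairs_blocked_by"]:
        found.append(("move", world["stairs_blocked_by"]))
    if world["bedroom_door"] == "closed":
        found.append(("open", "bedroom_door"))
    return found

def fetch(item, world):
    # The goal persists across interruptions: each pass re-checks the world,
    # clears one obstacle, and tries again, instead of one brittle if-chain.
    while True:
        blockers = obstacles(world)
        if not blockers:
            print(f"Reached the bedroom, bringing back the {item}.")
            return
        action, target = blockers[0]
        print(f"Sub-problem on the way to the {item}: {action} {target}")
        if action == "open":
            world[target] = "open"
        elif action == "move":
            world["stairs_blocked_by"] = None

fetch("wallet", world)
```

The loop is the crude stand-in for “knowing what I’m doing”: the goal persists while the obstacles come and go, instead of being lost the first time something gets in the way.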
Kanai writes in conclusion:
If we consider introspection and imagination as two of the ingredients of consciousness, perhaps even the main ones, it is inevitable that we eventually conjure up a conscious AI, because those functions are so clearly useful to any machine. We want our machines to explain how and why they do what they do. Building those machines will exercise our own imagination. It will be the ultimate test of the counterfactual power of consciousness.
This makes sense to me. If we are to interact comfortably with future robots or other AI systems, it will be helpful if they can maintain a "mental model" of our household, workplace, or other relevant environments, not so they can feel good or bad about themselves, or fall in love or whatever, but because these are things we unconsciously expect in social interactions. Simpler systems or apps, even voice-driven ones like Siri and Amazon's Alexa, can get by with being strictly transactional, to tell me the weather or play me some Talking Heads music as soon as I ask. But conversation and predictability will be a lot smoother if these systems have at least some level of self- and other-awareness and some ability to learn how things work around here. We can decide later whether this is the same as what we call "consciousness," but it is certainly like it in some ways. As AI systems improve, they will behave more and more like conscious entities, whether they are or not.
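To make the “mental model” idea a bit more concrete, here is one last hypothetical Python sketch (the class and its fields are my invention, not any real assistant API): a persistent store of where things were last seen and how things work around here, which a strictly transactional assistant would simply not keep between requests.

```python
# A toy "mental model" of the household: a persistent store of how things
# work around here, updated by observation and consulted before acting.
# All names here are invented for illustration.
class HouseholdModel:
    def __init__(self):
        self.object_locations = {}   # where things were last seen
        self.habits = {}             # learned regularities of this house

    def observe(self, obj, location):
        self.object_locations[obj] = location

    def note_habit(self, key, value):
        self.habits[key] = value

    def where_is(self, obj):
        return self.object_locations.get(obj, "unknown")

model = HouseholdModel()
model.observe("wallet", "bedroom dresser")
model.note_habit("stair gate", "kept closed so the dog can't climb")

# A transactional assistant answers only what was literally asked; one with
# a model folds in what it already knows about this particular house.
print(f"Wallet last seen: {model.where_is('wallet')}")
print(f"Note to self: stair gate is {model.habits['stair gate']}, "
      "so close it again on the way back")
```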
Then of course we can have that long-anticipated war between the humans and the machines. May the best entity win. But would you mind getting me my slippers first?
----
Nautilus is a great web-based science magazine that features essays by various writers, often touching on the societal aspects of science and technology. There's a theme for each month's issue to which the essays are at least loosely tied. This month it's consciousness.
1 comment:
Great question -- what *is* consciousness? Always a stimulating thought to consider. We often talk about machines and animals re: their consciousness or lack thereof but rarely do we question whether all humans are conscious, and what that really entails. When do we consider human infants "conscious"? Immediately upon birth? Before birth? Sometime after? What behaviors do they have to exhibit for us to award them the designation "conscious"? Is being "unconscious" (i.e., knocked out or in a coma) the only way that humans consider other humans *not* conscious? Someday folks will have to tease apart the intertwined notions of consciousness and humanness. - CHC