My Favorite Android
After the post last week about android-human relationships, C. Lee wrote in with such a thoughtful analysis that I'm sharing it in full. It's all C. Lee from here on out. Enjoy.

***
A few years back, after you discussed “The Windup Girl,” I wrote in, saying I thought a relationship with a nearly-human android would be all right. I hate to admit it, but I’ve changed my mind. Depending on how AIs are developed, it seems to me human-AI relationships will be either problematic or unlikely. That’s not to say they won’t happen; I just think there’ll be a fly in the ointment.
Assuming we don’t go laissez-faire on AI development, designers will try to enforce socially appropriate behavior in AIs. Maybe the safeguards will be hard-wired, like Asimov’s Three Laws of Robotics, or maybe the AIs will be psychologically conditioned, the way human beings are. The constraints will extend to the emotional realm, and so you’ll have AIs designed not to be assholes -- that are, in fact, actively pleasant to be around and that cater to humans’ emotional wants.
If you dealt with AIs only on a superficial level, as we do with most people outside our family and close friends, this would surely be a good thing. Rather than endure unpleasant incompetents, we’d conduct business with flawlessly charming professionals.
A skeptic might argue human authenticity would be lost: An AI clerk would be pleasant to you because it has to be, but a human clerk's pleasantness would presumably reflect genuine good will. However, as any retail worker would attest under oath, if human clerks seem pleasant, that’s generally because you’re seeing a social mask, one crafted to cope with often-unpleasant customers. In other words, you’re getting canned, conditioned -- I’d go so far as to say compelled -- responses from human personnel anyway, so why worry about getting them from an AI?
But I’d argue the skeptic is on firmer ground when it comes to deeper emotional relationships. It seems to me there’ll be an inevitable “uncanny valley” when it comes to freedom of choice for AIs: The fear of creating Frankenstein’s monster is too ingrained in us.
In this valley, the AIs may appear to have free will, but will actually be following the dictates of their programming. An AI may appear to choose a human partner, but its choices will have been limited to avoid harm, emotional or otherwise, to humans -- to the point where it has no real free will to speak of. Until an AI is free to behave unpleasantly, it’ll remain stuck in that valley.
Granted, past this valley of constraint, you can imagine an AI with what we consider free will. (One hopes it’d like you enough to choose not to be unpleasant.) But until that point -- so long as the AI’s behavior is restricted by its creator’s will -- what genuine emotional value can a human find in an AI partner who says, “I love you”?
“Would you love me even if you weren’t an AI?” a human might ask plaintively.
“Of course I would,” the AI would reply, without a trace of irony. Irony is unpleasant to some humans, after all.
Some people might think, “Well, if it looks like a duck, and quacks like a duck…” and avoid thinking too deeply about the matter. Others might not.
I’d argue that one of the most important things someone can do in a relationship, aside from not being an asshole, is to tell you when you’re being an asshole yourself -- and in the long run, that might be even more valuable. It’s hard to imagine a restricted AI delivering that message, however richly deserved it might be; the risk of causing emotional harm would likely keep it silent or limit its honesty.
Another thing many people want from romantic partners is affirmation; a person assailed by self-doubt may find genuine comfort in having someone say, honestly, “I love you; I value you; you’re special to me.”
But if the AI partner would say those things to any human with equal sincerity, then there’s really nothing special about any particular human partner, however much the AI might say otherwise. I suspect not being genuinely special to a lover would bother most people.
I’m not a wealthy or famous man; no prestige clings to my name. So I’ve never had reason to second-guess a romantic relationship; if a woman tells me she loves me, I’m generally inclined to believe her. (Famous last words, I know.) However, this is not the reality for people who are, in fact, rich or famous. I imagine at some point they must feel suspicion and unease about their partner’s motives: “Does she love me or my money?” “Does he love me or only the image that surrounds me?” “Would anybody do, so long as they were powerful?” and so on. And I think people would feel something similar with AI partners stuck in that uncanny valley. The problem remains even if you can somehow imprint the AI exclusively on one human: “Does he love me just because I bonded with him first?” “If she’d had the choice, would she rather have bonded with someone else?”
Now, I believe that rich and famous people do find love. And I realize many people would simply feel fortunate to have an endlessly patient, pleasant, and compliant partner, and wouldn’t worry about their own unique value or lack thereof. But not everyone can deal with that kind of relationship; not everyone would be content to accept behaviors at surface value; not everyone would want an emotional yes-man or yes-woman. For all the annoyances and griefs that come with loving a sentient, independent being, there are also benefits that might not exist with AIs for some time -- not because those benefits are impossible to replicate, but because the inherently unequal relationships of creator and creation, master and servant, will hinder progress along those lines. After all, there are a number of people in this world -- those in power, and those who would have power -- who consider independence not a feature but a bug. That’s why I believe this uncanny valley will likely be long and fraught with pitfalls.
So what about after AIs gain true free will? Just imagine: a meeting of the minds, one human, one more human than human, so to speak -- more patient, more pleasant, more intelligent, more capable. I mean, talk about a great deal for the human! Well, now you have to start asking yourself: what’s in it for the AI to hook up with Joe or Jane Human, who’s assuredly none of those things? Why wouldn’t the androids prefer their own company, for example? Allowing the possibility of free will in androids means you not only risk their deciding to run things -- who better qualified, after all? -- but also that they might realize you’re not anywhere near their league as far as dating goes. Which, again, is not to say that it couldn’t happen. As that old Tom Petty song goes:
Baby, even the losers get lucky sometimes
Even the losers keep a little bit of pride
They get lucky sometimes