Can a relationship between a robot and a human ever be one of equals? What if the robot looks and seems human, like the Synths from AMC’s new series, “Humans”?
–
One reader will win $2,500 by commenting on this or the other GMP “HUMANS” articles! See the bottom of this post for the Rafflecopter button – leave a comment and then confirm by clicking through on the Rafflecopter button.
One of the oddities of human nature is how often we form emotional attachments to inanimate objects. We all had beloved stuffed animals as children. We give our cars and boats names and genders, and we feel a pang of loss when saying goodbye to old and well-used computers and phones…
It’s not unusual for people to have intense emotional attachments to products and even the corporations that create them – witness the passion that Apple fans have for their products, or the intense rivalry between Sony’s Playstation consoles and Microsoft’s Xbox series.
The difference is that your Xbox doesn’t look like a Brazilian supermodel.
Synths—artificially intelligent androids in AMC’s new series Humans—seem like a godsend. After all, you’d have someone (I mean, something) to do the drudge work of day-to-day living, freeing up your time to finally illuminate that manuscript like you’ve always wanted… what’s not to love?
But that question of love – and even basic morality – becomes much more complicated when you’re talking about artificial humans.
As Dr. NerdLove, I get paid to think WAY too much about both television and relationships. And the world posited by Humans opens up any number of incredibly fascinating—and uncomfortable—questions regarding our relationships with the devices in our lives. The first question, of course, is whether any relationship with a Synth can be considered one of equals.
The morality of simply buying a Synth is fraught with peril and unforeseen complications. It’s hard to tell whether or not Synths are indeed sentient beings, but if they are, then we have for all intents and purposes created a slave race. An emotional connection to your Synth becomes one between master and slave, adding even more uncomfortable undertones to the idea of owning a sentient being… and this is even without the question of hacking and jailbreaking.
If Synths are artificially intelligent, then they most likely have an awareness of self and at least the simulacrum of emotions. But how genuine can a personality be when its entire existence is lines of code, no matter how complicated? When those emotions and that personality are the product of a generated algorithm, can they be said to be genuine at all?
This seems like a nitpicky question, but it’s one with real-world implications, especially for the android’s owners. Once we accept that something is sentient, the question of the existence and validity of its emotions becomes incredibly important. After all, if they have self-awareness, then we have to ask if they also have agency. Never mind whether they dream of electric sheep: are androids capable of giving informed consent?
If an artificial being is sentient and self-aware, at what point does it become immoral to deny it agency? At what point does it become immoral to own it at all? Are we enslaving it by coding it to want to serve us? After all, it’s not as though Siri or Cortana can go on strike if they decide they don’t like the way we choose to interact with them.
♦◊♦
Even in our current universe (as opposed to the parallel one in which Humans takes place), we are already having conversations about whether we truly “own” the applications we run on our computers; shrinkwrap contracts and EULAs (End User License Agreements) define our purchases as “licensing agreements” rather than true ownership—agreements that give the companies rights over the apps and potentially even the content.
This tendency is growing beyond ephemera like code and into physical objects. Even now, GM and John Deere are claiming that we don’t actually own our cars and tractors because of proprietary code in the chips that run the engines. With Synths, we run the risk of inviting even more corporate interference into our lives, and in potentially terrifying ways.
Presumably, Synths have the ability to adjust and adapt to their own situations – after all, each is designed to interact specifically with its individual owner. By adjusting its behavior and response parameters to match the desires and needs of the owner it is, almost by definition, developing a personality—one that differentiates it from others of its make and model.
If we develop any sort of attachment to the android and its unique quirks, we take the risk that its corporate owners could hold one of our loved ones hostage. Imagine if they could force patches and updates to the OS… ones that could functionally “kill” the emergent personality and leave the owner with the shell of their former loved one, a cruel reminder of who they used to be. Even if we grant the rights of the owner over the AI, how much respect do we give to the desires of the AI?
♦◊♦
In many ways, you would have to hope that the Synths were not sentient. Things we take for granted—such as updating the OS on our computers or swapping old laptops for the newest model—would be like abandoning a person, a member of your family even, leaving them with strangers to be murdered when their memories are wiped and reinstalled with a blank slate. And as an AI rather than a human, the android has no say in the matter.
And if it’s another family member who develops feelings for the android rather than the owner, then you have the uncomfortable question of what happens when you can literally trade in your son or daughter’s crush for a new model.
It’s a vital question to ask because the arc of technology has shown that no matter how closed the system is, there will always be those who want to take it apart and see how it works… and to make it work the way they want it to. Hackers, jailbreakers and modders always look for ways to modify and adjust their device’s parameters, whether it means adjusting and changing the hardware or simply running code and programs that the developers never intended. The challenge—and potential—of jailbreaking a sentient android would be almost too tempting to resist… and even more problematic.
If some enterprising hacker is able to override a Synth’s OS, does that make it a form of coercion or simply a violation of the EULA? Even more frightening is the idea of replacing a Synth’s personality entirely – at what point does this cross the line from simply jailbreaking a device to outright murder?
Artificial intelligence and synthetic humans can seem like a godsend, especially with an aging population in need of care and attention. But once we create true artificial intelligence, we then have to grapple with questions of morality in ways that will utterly change how we relate to our devices and possessions… and whether we can consider them possessions at all.
And I, for one, am looking forward to seeing where AMC’s new show, Humans, will take this conversation.
Watch the series premiere of Humans Sunday, June 28 at 9/8c on AMC.
This post was written in partnership with AMC.
–
Readers also have the opportunity to win $2,500 during the week of June 21 to June 28. Fans are encouraged to post their thoughts here (and confirm with Rafflecopter below), on the four HUMANS posts on The Good Men Project, and one comment will be chosen at random for the grand prize.
Be sure to leave a comment before entering the sweepstakes!
Read the rest of our authors’ thoughts and insights about HUMANS and the future of robots, and see more exciting trailers from this groundbreaking series:
Synthetic Love, Could a Human Fall In Love With a Robot? by Lisa Hickey
Could a Robot Make Your Relationship Better? by Thomas G. Fiffer
Could a Race of Highly Intelligent Robots Teach Us About Our Own Prejudices? by Anne Thériault
Robot equality hangs upon how much responsibility can be delegated to the robot. By and large, humans are currently responsible for a robot’s actions. Until there is a shift toward penalizing the robot, thereby creating some notion of robot autonomy, equality is impossible.
Since Humans is a work of fiction and there is no such thing as artificial intelligence or human-like robots, I’d have to say that the main issues and questions raised in the show are not primarily about technology, although those questions are interesting in their own right, but about our relationships with each other and with our own humanity. To the extent that Humans can get us to think about or illuminate issues in the real world, like slave labor and trafficking, just to name two, then I think it will be valuable. Looks like a good cast, and great to…
The future of technology is scary
I think this concept is really interesting. At what point does technology become more than just objects? And if we do reach that point, what do we, as a society, do about it?
I think my question is: are we attempting to create synthetic people to avoid intimacy with real people? So, your question is valid in that line of thinking. If we create these synthetic beings and then create intimacy with them, we are giving them a life, but it is one-sided and based on our individual needs. Their survival and existence is based on their owner’s happiness. It sounds very much like slavery. If people want to create robots to take care of their needs, then they shouldn’t look like humans and they shouldn’t have intelligence beyond the tasks they are…
A.I. or living: if sentient, then reprogramming to force a sexual behaviour would in fact be rape, just as drugs are used today to force sex on flesh-and-blood humans.
I don’t watch much TV in any season and while summer becomes generally media-free I am intrigued by topic of robotization of labour. Repetitive, (and potentially) dangerous tasks are today frequently handled by machines: but they don’t look (nor act) human. Add in decision making and awareness and “our image” and now you have a good debate. What of Asimov’s 3 Laws? How will we behave towards what we create?
These are some great questions. At least in the U.S., I feel like everyone “different” has had to fight for their agency, and by extension their humanity, in some way, shape, form, or fashion. I think these androids would be no different. I think in order for them to even broach the topic of sentience, agency, and informed consent, they would have to fight and gain legislation to outline and protect these rights. Then and only then would androids be considered equal to humans, and then and only then would informed consent be possible. That being said, I don’t think…
We don’t differentiate humans based on how they were born (naturally or by C-section) or conceived (naturally, by artificial insemination, or in a Petri dish). I wonder why we would do so for sentient, self-aware androids? Considering our history of racism, classism, sexism, homophobia, etc., it might be easy to predict discrimination, prejudice, and hundreds of years of struggle for android rights, because we never seem to learn from our mistakes, unfortunately.
That seems like a very abstract question to me without having a society that is reliant on artificial intelligence (e.g. ‘Do Androids Dream of Electric Sheep?’). However, that being said, I think it’s safe to say that based on humanity’s track record, if this technology did exist, human beings would use it to their benefit without a lot of consideration of whether it would be exploiting it or not.
Anything that can help us become more aware and compassionate is a good thing. Call us out on “our stuff” and make us see. Change can be good, different can be good.
Very interesting.
Wow, this all sounds so interesting, and I never really thought about it! I’m looking forward to learning and seeing more in the show!
I do believe humans and Synths can be equals in every aspect besides having a true sense of morality and human empathy. Yes, those emotions can always be programmed, but it’s weird knowing that the empathic feeling a Synth has toward a person isn’t original.
Whoops, I didn’t see this post before discussing the issue in your “Synthetic Love” post. It will definitely be tricky to provide informed consent when a human could easily switch out the hardware of the robot, essentially killing them, as you pointed out. That power imbalance would make it difficult for robots and humans to see each other on equal terms. At the same time, though, humans can also be easily killed… morality and law are the main reasons they aren’t. If robots grow to be so human-like, human morality may shift to consider robots sentient and therefore deserving of life.…
Presumably, Synths have the ability to adjust and adapt to their own situations – after all, each is designed to interact specifically with its individual owner. By adjusting its behavior and response parameters to match the desires and needs of the owner it is, almost by definition, developing a personality—one that differentiates it from others of its make and model. We have different viewing habits on YouTube, therefore each of our accounts will have a different set of recommended videos to watch. If YouTube had a physical manifestation, it might fit this bill. Well, YouTube does run on computers, tablets,…
I think I saw a trailer for this show the other day but didn’t pay that much attention. After reading this post I became more interested and curious about it. I think that if the robots ever do any harm, it’d be the humans themselves (AI programmers, hackers, etc.) indirectly harming others. Robots would just be the medium.
Agree. In the end, it’s always humans (the flesh and bone ones!) who are doing the harm.
I find the whole thing a bit disturbing; I like humans. Would I like someone else to do the grunt work in the house for me? Yes, but as you point out, would these robots be “slaves,” or capable of consent? It’s an interesting question. Looking forward to the show.
What’s sad is that Almost Human was just starting to explore exactly these types of issues when it was canceled. Hopefully Humans manages a longer run with as much quality in how it handles these issues.
It’s an interesting question, and given the advances made in recent years towards artificial intelligence, one that is worthy of having now, rather than later. It also calls into question the “three laws” of Isaac Asimov’s I, Robot series, which requires AI to have a directive that says no robot can harm a human. So many science fiction stories draw on those laws, and the question of whether or not that would be ethical is interesting and important.