Can a relationship between a robot and a human ever be one of equals? What if the robot looks and seems human, like the Synths from AMC’s new series, “Humans”?
One reader will win $2,500 by commenting on this or the other GMP “HUMANS” articles! See the bottom of this post for the Rafflecopter button – leave a comment and then confirm by clicking through on the Rafflecopter button.
One of the oddities of human nature is how often we form emotional attachments to inanimate objects. We all had beloved stuffed animals as children. We give our cars and boats names and genders, and we feel a pang of loss when saying goodbye to an old and well-used computer or phone…
It’s not unusual for people to have intense emotional attachments to products and even the corporations that create them – witness the passion that Apple fans have for their products, or the intense rivalry between Sony’s Playstation consoles and Microsoft’s Xbox series.
The difference is that your Xbox doesn’t look like a Brazilian supermodel.
Synths—artificially intelligent androids in AMC’s new series Humans—seem like a godsend. After all, you’d have someone (I mean, something) to do the drudge work of day-to-day living, freeing up your time to finally illuminate that manuscript like you’ve always wanted… what’s not to love?
But that question of love – and even basic morality – becomes much more complicated when you’re talking about artificial humans.
As Dr. NerdLove, I get paid to think WAY too much about both television and relationships. And the world posited by Humans opens up any number of incredibly fascinating—and uncomfortable—questions regarding our relationships with the devices in our lives. The first question, of course, is whether any relationship with a Synth can be considered one of equals.
The morality of simply buying a Synth is fraught with peril and unforeseen complications. It’s hard to tell whether or not Synths are indeed sentient beings, but if they are, then we have for all intents and purposes created a slave race. Having an emotional connection to your Synth becomes a relationship of master and slave, adding an even more uncomfortable undertone to the idea of owning a sentient being… and this is even without the question of hacking and jailbreaking.
If Synths are artificially intelligent, then they most likely have an awareness of self and at least the simulacrum of emotions. But how genuine can a personality be when its entire existence is lines of code, no matter how complicated? When those emotions and that personality are part of a generated algorithm, can they be said to be genuine at all?
This seems like a nitpicky question, but it’s one with real world implications, especially for the android’s owners. Once we accept that something is sentient, the question of the existence and validity of emotions becomes incredibly important. After all, if they have self-awareness, then we have to ask if they also have agency. Never mind whether they dream of electric sheep, are androids capable of giving informed consent?
After all, if an artificial being is sentient and self-aware then at what point does it become immoral to not let it have agency? At what point does it become immoral to own it at all? Are we enslaving it by coding it to want to serve us? After all, it’s not as though Siri or Cortana can go on strike if they decide they don’t like the way we choose to interact with them.
Even in our current universe (as opposed to the parallel one in which Humans takes place), we are already having conversations about whether we truly “own” the applications we run on our computers; shrinkwrap contracts and EULAs (End User License Agreements) define our purchases as “licensing agreements” rather than true ownership—agreements that give the companies rights over the apps and potentially even the content.
This tendency is growing beyond ephemera like code and into physical objects. Even now, GM and John Deere are claiming that we don’t actually own our cars and tractors because of proprietary code in the chips that run the engines. With Synths, we run the risk of inviting even more corporate interference into our lives, and in potentially terrifying ways.
Presumably, Synths have the ability to adjust and adapt to their own situations – after all, each is designed to interact specifically with its individual owner. By adjusting its behavior and response parameters to match the desires and needs of the owner it is, almost by definition, developing a personality—one that differentiates it from others of its make and model.
If we develop any sort of attachment to the android and its unique quirks, we take the risk that its corporate owners could hold one of our loved ones hostage. Imagine if they could force patches and updates to the OS… ones that could functionally “kill” the emergent personality and leave the owner with the shell of their former loved one, a cruel reminder of who they used to be. Even if we grant the rights of the owner over the AI, how much respect do we give to the desires of the AI?
In many ways, you would have to hope that the Synths were not sentient. Things we take for granted—such as updating the OS on our computers or even swapping old laptops for the newest model—would be like abandoning a person, a member of your family even, leaving them with strangers to be murdered when their memories are wiped and reinstalled with a blank slate. And as an AI rather than a human, the android has no say in the matter.
And if it’s another family member who develops feelings for the android rather than the owner, then you have the uncomfortable question of what happens when you can literally trade in your son or daughter’s crush for a new model.
It’s a vital question to ask because the arc of technology has shown that no matter how closed the system is, there will always be those who want to take it apart and see how it works… and to make it work the way they want it to. Hackers, jailbreakers and modders always look for ways to modify and adjust their device’s parameters, whether it means adjusting and changing the hardware or simply running code and programs that the developers never intended. The challenge—and potential—of jailbreaking a sentient android would be almost too tempting to resist… and even more problematic.
If some enterprising hacker is able to override a Synth’s OS, does that make it a form of coercion or simply a violation of the EULA? Even more frightening is the idea of replacing a Synth’s personality entirely – at what point does this cross the line from simply jailbreaking a device into outright murder?
Artificial intelligence and synthetic humans can seem like a godsend, especially with an aging population in need of care and attention. But once we create true artificial intelligence, we then have to grapple with questions of morality in ways that will utterly change how we relate to our devices and possessions… and whether we can consider them possessions at all.
And I, for one, am looking forward to seeing where AMC’s new show, Humans, will take this conversation.
Watch the series premiere of Humans Sunday, June 28 at 9/8c on AMC.
This post was written in partnership with AMC.
Readers also have the opportunity to win $2,500 during the week of June 21 to June 28. Fans are encouraged to post their thoughts here (and confirm with Rafflecopter below), on the four HUMANS posts on The Good Men Project, and one comment will be chosen at random for the grand prize.
Be sure to leave a comment before entering the sweepstakes!
Read the rest of our authors’ thoughts and insights about HUMANS and the future of robots, and see more exciting trailers from this groundbreaking series:
Synthetic Love, Could a Human Fall In Love With a Robot? by Lisa Hickey
Could a Robot Make Your Relationship Better? by Thomas G. Fiffer
Could a Race of Highly Intelligent Robots Teach Us About Our Own Prejudices? by Anne Thériault