Slaves and Robots

If we invent a machine with a personality and desires, does it deserve freedom?

The human race has never been shy when it comes to enslaving fellow human beings. The ancient Babylonians owned slaves, as did the Greeks and the Romans, as well as the Coast Salish tribes of the Pacific Northwest, not to mention the Chinese, various African nations, and infamously, the Portuguese slavers who brought Africans bound in chains to suffer and toil in the New World. As slavery became a global enterprise, terms such as “slave trade,” “slave raiding” and “slave markets” became common in numerous tongues, across much of the world.

As soon as mankind gave up its hunter-gatherer existence and settled down into agrarian-based societies of varying scales, distinctions between ‘us’ and ‘other’ were exploited to force people into undesirable work (not hunting, of course). Skin color, religion, which side of a particular war someone happened to be on, caste systems and a host of other criteria were used to dehumanize and devalue the rights of other sentient beings, and thus justify their bondage or near bondage. When the law forbade outright ownership and slavery, other means of control were available. Debt, historical precedent, geographic origins and indentured servitude were just some of the tools that made it possible for one man to rule over another, even if he didn’t have actual ownership papers.

While slavery, in its various forms, has subsided in many parts of the world (although not all, unfortunately), the desire to get others to carry out our labor-intensive work hasn’t slackened much. Migrant workers, illegal immigrants and people trafficked against their will are often paid only a fraction of the minimum wage (if anything at all), and forced to live and work in appalling conditions. I once happened upon, by accident (I opened the wrong door), a dark and dingy sweatshop in a major American city. About thirty Asian women were running sewing machines, working in a space meant for ten people at most. Rolled-up sleeping mats and pillows, as well as what looked like small cooking stoves, lined the walls. I noticed several women curled up into balls, sleeping in the corners of the room. An older woman noticed me noticing them. She got up from her chair, yelled at me in Cantonese, and slammed the door in my slack-jawed face.

With the advent of modern technology, our workforce has moved away from some of the chores that used to require the labor of many employees or slaves. Gadgetry and machines, from combines and dishwashers to motorized transportation and automated assembly lines run by robots, can be found in many societies now, although a significant number of people in the developing world still rely on rudimentary tools to carve out a very basic form of living.

Despite the disparities between and within different societies, our steady march into a digital existence, and our desire to seek out increasingly sophisticated forms of technological aid, have created the possibility of artificial intelligence (A.I.) coming into being, and the fear that A.I. might somehow slip into our machines and then spiral out of human control, creating chaos and destruction, just like the major slave revolts of old. While the plausibility of this is up for debate, it’s an intriguing notion that has grabbed our attention on a visceral cultural level, as evidenced by the rise of the machines in iconic films like Terminator and The Matrix, and in Isaac Asimov’s classic collection of sci-fi short stories, I, Robot. If the creations we’ve built to do our bidding attain consciousness someday, will it still be morally right to keep them enslaved?

My belief in the better nature of human beings had me thinking for quite some time that our fascination with this subject, often seen through the lenses of science fiction, stemmed from the moral growth of human society as a whole, and the universal realization that slavery is evil, and inherently wrong, no matter the form. While I agree with this last sentiment unequivocally, I think the dialogue some of us might be having about the nature of life and the future sentience of inanimate things parallels likely conversations the rulers of Ancient Egypt, and other slave owning societies (ours among them) once had. If we recognize these working ‘machines’ as equal, with rights and more than basic survival needs, who’s going to do all of our hard work? Not us, after all. That’s why we built/enslaved them in the first place.

Oddly enough, in the Star Wars universe, the most popular film franchise of all time, there seems to be an endless supply of sentient robots with distinctive personalities, emotive ways of expressing fears, doubts and desires, and the need to preserve life. Yet even so, most (not all) of these androids and robots are property, in the service of flesh and blood creatures. Even biological clones are denied any true freedom. The moral questions, centered on the creation and enslavement of sentient life, found so often in other works of science fiction, are strangely absent here. Maybe George Lucas is more comfortable with the idea of robot servitude than his peers are—or maybe I’m expecting too much from a space opera, at least as far as the philosophical conundrums surrounding android and clone slavery are concerned.

Some folks (like Sterling Archer) fear the coming of cyborgs and robots. Will their overwhelming computational power and strength diminish the value of an ordinary human life? Other people, and I suspect Mr. Lucas might fall into this camp, welcome as much technology as we can possibly muster, because machines are capable of doing things we simply can’t, like printing buildings on the moon, or unpleasant activities we’re reluctant to undertake, like killing and dealing with industrial waste.

Maybe with intelligent robots scooting around everywhere, we’ll finally run out of reasons to look down upon people we see as ‘other,’ because we’ll be too busy looking down on our slave-like machines, or else dodging missiles raining down from drones that have ‘decided’ to go off the grid, and explore this vast world on their own. The future, it would seem, is wide open.

 

Read more of Carl Pettit’s weekly column, Root Down, on The Good Life.

Image credit: Andres Rueda/Flickr


About Carl Pettit

Carl Pettit is a writer, illustrator and musician whose education and travels have taken him all over the world. When not out exploring, or pondering the universe, he finds time to produce fiction for both adults and children. You can catch up with him on his blog, or twitter.

Comments

  1. Do Androids dream of electric sheep?

  2. wellokaythen says:

    The level of intelligence may be one determining factor. One can have personality and desires and not necessarily full consciousness. If having personality and desires translated into rights of citizenship, we would have no pets and would all be vegetarian. (That’s not a bad thing, necessarily.) If we gave full status to robots we would need to give it to all the mammals that we currently milk, ride, and eat. Veganism might provide some insight into how to treat robots, if the issue is about personality.

    Besides, granting them full constitutional rights would just be one step on their path to total global domination…. Come to think of it, I can’t assume that the author is not a robot himself (itself?). It’s already started. They’ve infiltrated mass media. SkyNet is upon us.

  3. ….
    Maybe with intelligent robots scooting around everywhere, we’ll finally run out of reasons to look down upon people we see as ‘other,’ because we’ll be too busy looking down on our slave-like machines…

    Meaning that chances are, instead of actually recognizing and addressing why we enslave other people, we will just skip over to using machines because “come on, they aren’t human.”

    Since we are dropping names of titles that touch on this, I’d bring up Ghost in the Shell as well. In that series there is a blurred line between what is human and what is not. For example, in that show cloning is possible, but clones can’t inherit property. Also, there are some religions that forbid taking on cybernetic parts (and mind you, in this show it is actually possible to have a prosthetic BODY).

    So what worries me a bit (and this is a total side track) is, as that line blurs, what measures would be taken to maintain it, and what the consequences of those actions would be.

    As technological advances push forward (I recall reading an article based on a report saying that android prostitutes could be viable in the next 30 years), there are a lot of things that need to be worked out.

    As for your Star Wars reference, I think you may be expecting a bit too much from the legendary space opera. On the other hand, one of the main ongoing storylines of Star Trek: TNG was Data wrestling with the implications of being human versus an android. There was even one episode where a Starfleet scientist tried to force Data to go to Earth so that he could be poked, prodded and examined, with the hope of duplicating him and making him a standard-issue tool on starships. (Overall, Star Trek is much more grounded in reality than Star Wars.)

    • Danny, i always thought it uproariously funny that data would want to be a lower lifeform – a human. I always remember riker’s satisfied grin in tng’s pilot when data says he’d give up all his advantages to be human.
      yes data, a being of pure logic, of pure reason, of supreme efficiency, would give that up to only have conscious control of about 5% of the subconscious processes of the brain, to be at the mercy of these whimsical whirrings, to be inefficient – to be hu’mann (loved it when quark would spit out the word, or when 7 of 9 glared at the humans).
      but there’s ‘nowt queer as folk’, so perhaps data was a total humanophile

      a being of pure logic, of pure reason, of supreme efficiency – is, of course, the man of granite inside and out, the ideal man, the real man, of european imperial masculinity circa 1870 to 1914 (maybe 1945), e.g. those stiff upper-lipped men and their granite moustaches, or the lone cowboy. though their attempt to create such men failed because they didn’t engage in the thousands of hours of yogic practice, or have the coming augmentation technologies. the ancient spartan men got pretty close though

      • Danny, i always thought it uproariously funny that data would want to be a lower lifeform – a human. I always remember riker’s satisfied grin in tng’s pilot when data says he’d give up all his advantages to be human.
        But are androids superior to humans? Yes Data’s positronic net could calculate circles around the human brain but he saw something in them (mainly emotions) that he wanted to obtain for himself.

        • yes data is superior to humans. however, while i am drawn to him (and 7 of 9, and the no. 1 in the original star trek pilot), data and 7 of 9’s existences are too severe, too sterile an existence for me. i like enjoying emotions; i also like the greater capacity to shut them off that the middle-age hardening of my brain has given me

          so data with emotion chip and the ability to shut them off, is the best. the coming augment tech will allow humans to do what yogis/yoginis had to spend thousands of hours mastering – man as machine, with an emotional soul

          my main point wasn’t about emotions but about mastery of the subconscious systems, as i wrote here: ‘would give that up to only have conscious control of about 5% of the subconscious processes of the brain, to be at the mercy of these whimsical whirrings’

          eg. imagine having the ability to reset your emotional state – at will, to turn off pain at will, to raise the body temp – at will. i’ve finally found a how-to for gtummo yoga, as i’ve been trying to raise the temp in my hands using visualisation, with only occasional success. i don’t know whether the guide i’ve found works or not; it involves the hatha yoga-like breath practice (and i hated breath practice). i can post the link if you want.

        • Pre emotion chip Data has:
          1. superior thinking ability to humans
          2. superior ability to command the vessel he is trapped in
          3. inferior emotional enjoyment of life

          Post emotion chip Data has:

          1. superior thinking ability to humans
          2. superior ability to command the vessel he is trapped in
          3. humanlike ability to enjoy emotions – at will

          Soon humans too will be like post-chip Data; i also wonder what proportion of us will decide to keep to our current physical form

  4. Nice article, an enjoyable read carl
    “My belief in the better nature of human beings had me thinking for quite some time that our fascination with this subject, often seen through the lenses of science fiction, stemmed from the moral growth of human society as a whole, and the universal realization that slavery is evil, and inherently wrong, no matter the form
    [...]
    I think the dialogue some of us might be having about the nature of life and the future sentience of inanimate things parallels likely conversations the rulers of Ancient Egypt, and other slave owning societies (ours among them) once had. If we recognize these working ‘machines’ as equal, with rights and more than basic survival needs, who’s going to do all of our hard work? Not us, after all. That’s why we built/enslaved them in the first place.

    i agree with your point there, though from a slightly darker outlook. that’s why earlier, when you wrote ‘and the universal realization that slavery is evil, and inherently wrong, no matter the form,’ i added in my head the words ‘… for now’. the consensus can change.
    the linear direction of progress i really agree with is ‘technological progress’. other things are more cyclical, e.g. the reintroduction of the death penalty in the usa. i see no reason why there would not be a return of human chattel slavery, or the introduction of ai chattel slavery, in some or all of the world in the next 100 yrs.

    I agree with wello: ai beings having something approaching humanlike intelligence, plus strong degrees of self-awareness and of empathy, would be factors in according rights from a theoretical angle. The political angle would be murkier, as you noted, with those rights being granted depending on social and economic impact.
    E.g. rather like the discussions in star trek ds9 about the coming need to review holographic lifeform rights.

    • an editing error on my part. i forgot to paste the below after the sentence, ‘other things are more cyclical, e.g. the reintroduction of the death penalty in the usa.’

      i think the default empathy status for the majority of humans is ‘indifferent to coldly-indifferent’ – it is why the great teachers, like (iirc) The Buddha, used not logic but storytelling and its emotion-rousing connective power to impart truths, to generate empathy and community in humans. plus, the human ability to compartmentalise their wickedness with ever more nuance is a thing of wonder.
