While the title sounds like something said by the one character in a science fiction movie about the robot uprising who gets to say ‘I told you so!’ later, I don’t actually mean ‘they’ll rise up against their human overlords!’ in this case. Though I can’t entirely shake the mental image of the SciFi robo-uprising after all my years of watching movies and TV shows on that topic, the more realistic issue I would like to pose to you today is the morality of AI Androids, and why we should seriously question whether or not it’s right to pursue them as technology progresses.
While at first blush it seems like a dumb question to ask, because they’re just machines like any other AI program, there is a lot of discussion in the tech industry over the ethics of AI. Most of the concerns raised about AI are logistical: replacing humans in roles where it would eliminate too many jobs, or deploying it in situations where AI isn’t as reliable as a human would be.
The more SciFi issues revolve around things like the idea that superintelligent AI Androids could outsmart humanity and take over, which is what I like to call the “VIKI” scenario. In the film I, Robot, the android VIKI follows Asimov’s Three Laws of Robotics to their logical extreme: she realizes that humans are a threat to each other, so the best way to prevent harm to humans is to ‘control’ them. Closer to my main point are the issues of AI Androids becoming human-like and self-aware, understanding that they are not human even though they have human feelings. I call this one the “Roy Batty” scenario, after the film Blade Runner, in which the replicants go rogue because they know they are going to be destroyed and they fear death. The Roy Batty scenario is related to the ‘enslaved masses rise up against their masters’ concept.
But the first question that should be asked, I think, also relates to the ‘enslaved masses’, only from the human side: at what point does the creation of AI Androids become a replacement for slavery?
Now, before anybody goes, “Dear God, they’re robots, not people”, let’s take a look at the entire point of computers and machines. The first proposed mechanical computer was Charles Babbage’s Analytical Engine, described in 1837. Its purpose was to do math faster and more accurately than any human could, and that remained the purpose of computers for a long time, as the name ‘computer’ suggests. Though our modern computers are far more than just calculators on steroids, the point of most technological advances is still to make things easier on humans and allow us to do more with less effort.
While many would argue that this is exactly the point of an AI Android, to make life easier for humans because it is just another machine, I want to raise a question about human psychology. Humans have a tendency toward anthropomorphism, defined as “giving human characteristics to animals, inanimate objects or natural phenomena.” Anthropomorphism is a phenomenon that has also been studied in human-robot interaction, specifically in terms of how people feel about a robot after viewing it through an anthropomorphic lens.
As humans, we are more inclined to anthropomorphize a figure that is human-like in shape and other characteristics. Rick Nauert, PhD, describes the psychological and evolutionary purpose of anthropomorphism as follows:
Neuroscience research has shown that similar brain regions are involved when we think about the behavior of both humans and of nonhuman entities, suggesting that anthropomorphism may be using similar processes as those used for thinking about other people. Anthropomorphism carries many important implications. For example, thinking of a nonhuman entity in human ways renders it worthy of moral care and consideration.
Anthropomorphism is tied to empathy, which only increases when the thing being anthropomorphized is humanoid in shape. And in most people’s plans for humanoid AI Androids, the android would become a household implement that ‘lives’ in people’s homes and performs tasks for them.
This is where we come back to the question raised above: presuming we, as humans, are likely to anthropomorphize the household Android, give it a name, and expect it to do household chores, what does it say about us as people that we would want that?
Does it not, essentially, mean that the idea of an AI Android is a servant you don’t have to pay? What do we generally call servants who don’t get paid?
It is very important for me to reinforce that I am not in any way claiming that people who want an AI Android want to welcome back slavery, nor am I suggesting a definite ‘AI is Slavery’ conclusion. I don’t even know how I feel about my own questions at this point. We have moral gray areas all across the board, but does this constitute something that belongs in that gray area?
I’ll leave it up to you to think about on your own and decide for yourself, but I do think this is a very important question we may need to consider in the future, especially after reading this passage regarding the anthropomorphism of military robots.
As human robot partnerships become more common, decisions will need to be made by designers and users as to whether the robot should be viewed anthropomorphically. In some instances, Darling notes, it might be undesirable because it can impede efficient use of the technology. The paper uses the example of a military robot used for mine clearing that a unit stopped using because it seemed “inhumane” to put the robot at risk in that way.