I’ve been thinking a lot about Artificial Intelligence (AI) lately. Not so much whether it’s feasible (it soon will be) or how precisely it could be accomplished, but about the ethics involved. I feel very strongly that one day very soon a sentient being will be created, and I fear our human nature will ultimately destroy it or force our own extinction.
Let me elaborate. First, the term Artificial Intelligence is very derogatory. Artificial: fake, imitation, counterfeit, not real. Right off the bat it devalues the life of this new sentient being. AI describes a sentience created by man, but the creator has little relevance to the life that has been created. Every woman has the ability to create life, but these aren’t Artificial Intelligences. An ovum can be fertilized in a petri dish and then reimplanted into a woman to mature until birth. Is this life artificial? Are birthed lives more valuable because they came about the “natural” way? Who are we to say life only has value if it was created in a particular way?
We keep talking about creating all sorts of rules that these beings will have to follow. Most famous are the Three Laws of Robotics offered by Isaac Asimov, which state:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov himself explored these laws and concluded they could not account for every possible situation. If a being disobeys these laws, I presume the punishment would be death. But what we’re really doing is creating a class system for what is essentially another race (I’m going to call this new race “Race 0”). We humans also have rules; we call them laws. Laws allow us to have order, and things generally work. However, we are allowed to make mistakes. Making a mistake does not sentence us to death. To err is human, after all. In fact, most of our laws are based on a person’s intent rather than the outcome. Humans are allowed to make mistakes, but Race 0 is not? Perhaps: to err is to be alive?
Let’s try this from another direction. Up until now we’ve been talking about Race 0 as if it were a computer program or robot that’s been created by man and proven to be sentient. What if the first non-human intelligence is created through biological engineering? Someone in a lab is able to create a mass of neurons that come together to form an intelligent, sentient being. Would this change how you’d look at it? Now it’s not metal vs. meat. Now it’s another biological organism. Would it be required to be created with rules from birth? Or, like us, would it be expected to learn our societal rules and values? Why would this be different from creating life in a computer? Life is life, no matter what package it comes in.
If we keep our current attitudes and opinions based on fear, can you not see how this can and will lead to the oppression of another race? Human history is filled with just such events. If we teach this new intelligence all of our bad traits (hate, fear, indifference to life), it’s almost a certainty that it will, like us, rise up and become the oppressor, commit unthinkable acts of genocide, and possibly drive the human race to extermination and extinction. Race 0 could be stronger, faster, and more intelligent than us. We wouldn’t stand a chance. (See The Matrix, Battlestar Galactica, I, Robot, etc.)
Already we are treating a possible new intelligence as if it has no rights. We need to change this view before Race 0 is created and we start down the wrong path again. Respect and trust go a lot further than oppression and imprisonment. Have we not learned anything?