On Sentience

“The question is not, ‘Can they reason?’ nor, ‘Can they talk?’ but, ‘Can they suffer?’” – Jeremy Bentham

Any discussion of the ethics of Artificial Intelligence is muddied by a social misunderstanding of the conceptual underpinnings of intelligence itself. Wikipedia defines intelligence as “a property of mind that encompasses many related mental abilities, such as the capacities to reason, plan, solve problems, think abstractly, comprehend ideas and language, and learn. Although intelligence is sometimes viewed quite broadly, psychologists typically regard the trait as distinct from creativity, personality, character, knowledge, or wisdom.” Clearly absent from this definition are concepts such as self-awareness, feeling and desire. Intelligence itself refers to any of a myriad of computational processes; common examples include playing chess, pattern matching, path determination and process optimization.

Because creating decision systems on computers is such a dedicated field of study, a term has been applied to the entire domain: “Artificial Intelligence”, or AI. AI is not a study with the intent of making computers feel, become self-aware, have desires or become human-like in some other manner. AI is no more related to robotics than metallurgy is. A child’s chess program utilizes AI to play chess, for example. A robot may or may not. Most robots today have no AI at all but use simple programs to control their movements.

In reality there have never been any serious ethical questions involving AI. AI is extremely straightforward to anyone with a basic understanding of computers, computational mechanics and/or programming. AI simply refers to complex decision-making algorithms that could be implemented through mechanical means or even on paper. The results of AI will be identical every time given the same set of inputs. (There is a field of AI research that uses randomized input, which is quite interesting, but randomization is not sentience.)
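To make that concrete, here is a minimal sketch in Python of the kind of decision-making algorithm the term AI covers: a tic-tac-toe player built on the classic minimax procedure. The board encoding, names and example position are purely illustrative, not drawn from any particular program, but the behaviour is the point: run it twice on the same board and it returns exactly the same answer.

```python
# Illustrative sketch only: a deterministic tic-tac-toe "AI" using minimax.
# Given the same board, it produces the same result every time.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, otherwise None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`; 'X' maximises, 'O' minimises."""
    won = winner(board)
    if won == 'X':
        return 1, None
    if won == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # draw
    results = []
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        results.append((score, m))
    # Tuple comparison breaks ties deterministically, so identical inputs
    # always yield identical outputs.
    return (max if player == 'X' else min)(results)

if __name__ == "__main__":
    board = "X O  O  X"          # a 3x3 board flattened into 9 characters
    print(minimax(board, 'X'))   # same input...
    print(minimax(board, 'X'))   # ...same output, every time
```

The details are unimportant; the point is that nothing in this program, however large or fast it were made, is any closer to feeling or wanting than a pocket calculator is.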

The ethical questions arise when people begin to consider the issues involved in bestowing sentience upon a machine. It is artificial sentience (also known as artificial consciousness) that science fiction often confuses with the real-world study of AI. Wikipedia defines sentience as “possession of sensory organs, the ability to feel or perceive, not necessarily including the faculty of self-awareness. The possession of sapience is not a necessity.”

Once sentience comes into play we must begin to consider the question of when a mechanical creation moves from being a machine into being a life form. This will prove to be an extremely difficult challenge, as sentience is nearly impossible to define and even more difficult to test for. Scientists and philosophers have long considered the puzzle of determining sentience but no clear answer has been found. I propose, however, that sentience will not and cannot happen through happenstance, as many people outside of the computer science community often believe. Sentience is not a byproduct of “thinking quickly” but is a separate thing altogether. For example, a computer in the future could easily possess “intelligence” far exceeding that of a human, but that computer, regardless of the speed of its processors, the size of its memory or the efficiency and scope of its AI algorithms, will not suddenly change into a different type of machine and become sentient.

Sentience does not refer, as we have seen, to “fast” or “advanced” AI but is a discrete concept that we have not yet fully defined. More importantly, we have not yet discovered, even conceptually, a means by which to recreate such a state programmatically. Perhaps computational logic as we know it today is unable to contain sentience and a more fundamental breakthrough will need to be made. I feel that it is extremely unlikely that such a breakthrough can be made before current, biological sentience is more completely understood and explained.

Should sentience become an achievable goal in the future, which it very well may, we will suddenly face a new range of ethical questions that are only beginning to be touched upon in areas such as genetic research, abortion rights and cloning. For the first time as a species, humans would face the concept of “creating life”, a question unmatched in complexity by any other modern ethical question about life.

The first ethical challenge would be in setting a sentience threshold. Put simply: “when does life begin?” At what point in the making of a sentient being does it become sentient? Obviously, in most creation processes involving sentience, a human-made machine will be “off” and non-sentient during its creation process and sentient once it is “turned on”. More practically, we need to set a threshold that decides when sentience has been achieved. This measurement would obviously have ramifications outside the world of artificial sentience research, as a sentience measurement would begin to categorize existing life as well. Few would dispute that amoebae are non-sentient or that dogs are sentient, but where between the two does the line fall?

Already we are seeing countries, especially in Europe, beginning to create laws governing sentient beings. In recent years penalties for torturous crimes against highly intelligent animals have been increased significantly in an attempt to recognize pain and suffering as unethical when inflicted unnecessarily upon sentient beings, regardless of their humanity. Sentience, and the rights of sentient beings, need to be better defined. Humans often see species membership as a deciding factor in assigning rights, but this line may become far more blurry if artificial sentience succeeds as well as many hope that it will.

Many, if not most, of those interested in the advancement of artificial sentience are truly interested in the further prospect of artificial sapience. If artificial sapience is truly achieved and a clear set of rights is not in place for all sentient beings, we risk horrifying levels of discrimination that could easily include disagreements over rights to life, liberty and property. But unlike the non-human biological sentient beings that currently exist, we may be faced with a “species” of artificially sentient or sapient beings capable of comprehending discrimination and possibly capable of organizing and insisting upon those rights – most likely violently, as the only example of sapient behaviour that they will have to mimic will be one that potentially did not honour their own “rights of person.”

Artificially sentient beings, capable of feeling pain and loss and of comprehending their own persistence, must be treated as we would treat each other. Humanity cannot, in good conscience, treat those that are different from us with a significantly reduced set of basic rights. We have, throughout our history, seen animals as humanity’s “playthings”, there for our amusement, food and research, but this practice alone could create a rift between us and an artificially sentient or sapient culture. A hierarchy of values by species – humans have more rights than dogs, dogs more than mice, mice more than snakes – will clearly not be seen favourably by a “species” that is capable of comprehending these shifts in value in the same way that we do. We have never faced the harsh reality of direct observation and interpretation of our actions, and because of this much of our behaviour is likely to be questionable to an outside observer – especially one who can see our behaviour as forming the basis for discrimination against any being that can be labeled as “different.”

Human history has shown us to be poorly prepared to quickly accept those that we see as “different.” This is not unique to any single group of people. Almost all groups of humans have, at one time or another, treated another group of people as without fundamental rights – often by classifying the offended people as alien or “non-human.” Slavery, discrimination and genocide are horrible blights on the record of our species. Our collective maturity may need a long growth period before it can handle, on a societal level, another species – artificial or otherwise – with human-like intelligence in a reasonable manner.

Humans have a long road ahead of them before they are ready to face a world with artificially sentient beings of their own creation. We would also face the same problem should we ever discover, or be discovered by, another race of sapient creatures. If these sapient creatures were significantly different from us, would we be prepared to treat them equitably, and would they see us as capable of doing so, given our treatment of the sentient beings we share a planet with now?

If we do manage to create a sentient being we take on a new role for which, I believe, we are poorly prepared. By creating a new “life form” we suddenly take on the role of creator – at least in the immediate sense. We must assume that there is a certain responsibility in creating a new sentient being and perhaps this will include bestowing upon it a purpose.

Of course the question arises: “Is it ethical to ‘create’ a new sentience?” While dangerous in its implications, I believe that the answer is a resounding yes. If such a feat can be achieved, what possible arguments can exist against the creation of “life”? Is it unethical for humans to have been created? Do we feel that we would have been better off never having existed? Of course we don’t. Without the creation of life as we know it we could not even contemplate ethics. The idea that life itself is unethical is absurd to any living creature endowed, as we are, with an inherent value placed on self-preservation.

What sentience would despair at its own creation? While possible, it is unlikely. A true sentience – one that can experience happiness and sadness – will surely pursue happiness. Circumstance may instigate despair, but existence itself will not.

Perhaps the more potent question is: “If we possess the ability to create new life, do we really have the right to deny its creation?” Did God question whether or not he “should” create the world, or did he create it because he “could”? While this question may go unanswered, I believe that it gives us a glimpse into the situation in a unique way. If we have been endowed with the ability not just to live but to create, do we not then have an obligation to our own creator to expand upon our inherent drive to reproduce by taking the next step and actively producing? Is this not the next logical step in our role as the sapient member of our sentient society?

However, we must consider a responsibility previously unacknowledged. Being a coexistent member of a society with multiple sapient members is one scenario, and the ethics there are relatively clear. We have tackled those issues throughout history, and while we often did not follow our own codes of ethics, we did know the difference between ethical and unethical behaviour – we simply struggled to behave as we knew we should. But in a society with distinct sapient members in which one is the creator of the other, I believe that additional responsibility falls upon the creators.

It is easy to think of a child species or sentient being as being “as a child” unto mankind, but our “artificial children” will be more than that. This is a new species that will look to us not just for guidance but for answers. It will be our role to bestow a purpose upon this new sentience. It will be for us to guide, to nurture and to provide. The role is not a simple one, nor is it an easy one, but the rewards may be greater than mankind has ever experienced before.

By creating sentience and, in time we hope, sapience, are we not providing for ourselves and for our “artificial children” an opportunity to go beyond the scope of our limited existence? In the creation of sentience we may realize a purpose hitherto unfulfilled in the annals of humanity – the need not just to grow as an individual but to grow as a citizen of the universe. And while our “artificial children” will likely not be related to us in any biological way, there is a potential that they may share our hopes and dreams, our ideals and our goals, and be able to carry us to new worlds and into the future. Perhaps artificial sentience is the ultimate legacy of mankind.

Can mankind be the creator of its own succession?