Ethics in Artificial Intelligence Research

As computing power continues to increase at a rapid pace, mankind is beginning to ponder the ethical questions involved in the possible creation of artificial intelligence. If we succeed at creating artificial intelligence, or AI, then we will have effectively created life to some degree, and we will suddenly face many challenges that we have never had to deal with before.

To many the issues are obvious – if we create a sentient being then we take on the role of god, becoming the creator to whom it looks not only as a parent but as a spiritual maker. What rights does an intelligent artificial being have? At what threshold does a machine become sentient? Do we even have the right to create a new life-form? Do we have the right not to, if we have the ability? [1] The questions are many and the answers are few, and that gap creates potential hazards in the AI research domain today.

I believe that many of these questions may be too broad to pose at this time. Conceptually, artificial intelligence means creating a machine that is “intelligent” in the same way that humans and animals are intelligent. People often use the term “self-aware”: a being that thinks, senses, contemplates, desires, and realizes, and that persists, though not indefinitely. This is surely a lofty goal – a goal so lofty and complex that it is beyond our current scope even to discuss rationally in a scientific setting. Before being able to seriously debate the ethical issues surrounding something of this magnitude, which would surely be mankind’s crowning achievement, we must first be able to define artificial intelligence and, indeed, life.

Society in general has learned its AI concepts from Hollywood – an industry not known for its deep understanding of technology or science. An industry that cannot faithfully represent common, everyday technology such as email, logins, and web pages can hardly be expected to handle advanced computer science concepts. And yet the same public that knows Hollywood cannot faithfully reproduce the email experience still believes that AI is whatever is depicted in movies like I, Robot and A.I. People generally believe that AI is magic: that machines become self-aware, that machines learn on their own, and that machines can do something that computer scientists cannot even explain, let alone intentionally build into them (what Schank refers to quite accurately as the “Gee Whiz” factor).

But in the realm of modern academia and research, the term artificial intelligence means none of the things that people generally associate with it. [2] AI, as it is currently practiced, is not even approaching the creation of life, or even of non-living self-awareness; it is instead a scientific approach to “human-like” problem solving in extremely limited domains. For example, modern video games use very simple algorithms for non-player-character “path finding”. These algorithms compare multiple routes from a source to a destination and pick whichever is “best”, or sometimes simply find any one possible route with no optimization at all. Such systems could easily be worked out on paper, and few people would confuse paper with a self-aware mechanical being.
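To make the point concrete, here is a minimal sketch of the sort of path finding described above, assuming a toy grid map in which “#” marks a wall; the grid, coordinates and function name are purely illustrative and not taken from any actual game engine.

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search over a small 2D grid; '#' cells are walls."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # remembers how we reached each cell
    while frontier:
        current = frontier.popleft()
        if current == goal:
            break
        r, c = current
        # Try the four neighbouring cells (up, down, left, right).
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in came_from):
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    if goal not in came_from:
        return None  # no route exists around the walls
    # Walk backwards from the goal to reconstruct the route.
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return list(reversed(path))

grid = ["....",
        ".##.",
        "...."]
print(find_path(grid, (0, 0), (2, 3)))
```

Running it on the little three-row map prints the list of cells leading from one corner to the other around the “wall” – useful, mechanical, and nothing remotely sentient.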

The reality is that AI today is thriving, but it is not what people commonly believe it to be. AI is a serious scientific and mathematical pursuit involving knowledge systems, complex logic solving, video games and the like, but it is not an attempt, nor within the foreseeable future will it become an attempt, to model true life-like intelligence. [3] The term is very misleading, and this is having a catastrophic effect on the public’s perception of the field. But work is progressing steadily and research is ongoing. AI is in use in our everyday lives, and no one has ever proposed any serious moral dilemma with a microwave intelligently deciding when the popcorn is done, or with Age of Empires figuring out how to move sprites around a “stone wall” instead of marching directly into it.
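For a sense of what a “knowledge system” means in this narrow, workaday sense, here is a toy forward-chaining rule engine; the rules and facts are invented for the example and are not drawn from any real expert system.

```python
# Hand-written if-then rules: (set of required facts, conclusion to add).
rules = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "recommend_doctor"),
]

def forward_chain(facts, rules):
    """Keep applying rules until no new conclusion can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}, rules))
```

It mechanically applies human-authored rules until nothing new follows – again, a useful tool in a limited domain, not a mind.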

In time, perhaps, through thorough research into the underlying logic systems, we may eventually come to understand what it would mean to truly create a sentient machine. But that day is a long, long way off and could easily never come. Until then, proposing that ethics should decide whether or not we continue to build what are simply more complex traditional machines is utter foolishness, and it will continue to be, as it has been, the domain of the uneducated and gullible who believe Hollywood and hype rather than common sense, research and reality.

[1] McCarthy, John and Hayes, Patrick. Some Philosophical Problems from the Standpoint of Artificial Intelligence. Stanford University, 1969. Retrieved May 11, 2007 from: The University of Maine

[2] Schank, Roger C. Where’s the AI? Northwestern University, 1991. Retrieved May 11, 2007 from: Google Cache

[3] Humphrys, Mark. AI is possible… but AI won’t happen: The future of Artificial Intelligence. Jesus College, August 1997. Retrieved May 11, 2007 from: Jesus College

[4] Brooks, Rodney. Intelligence without representation. September 1987. Retrieved May 11, 2007 from: UCLA

[This paper was written as an ungraded assignment for a class that I took at the Rochester Institute of Technology and is crippled by the essay constraints of the class. It is not my normal format.]
