Gianmarco Veruggio and Fiorella Operto on ethical guidelines in robotics
by Gerhard Dabringer
Tuesday, 17 November 2009
Interview with Gianmarco Veruggio and Fiorella Operto. Gianmarco Veruggio is a CNR-IEIIT Senior Research Scientist and President of the Scuola di Robotica (Genova). He serves the IEEE Robotics and Automation Society as Corresponding Co-Chair of the Technical Committee on Roboethics, as Co-Chair of the Human Rights and Ethics Committee, and as a Distinguished Lecturer. Among other roles, he was the General Chair of the “First International Symposium on Roboethics”, Sanremo, January 2004. In 2009 he was awarded the title of Commander of the Order of Merit of the Italian Republic. Fiorella Operto is President of the Scuola di Robotica and a professor of philosophy. She has specialised in ethical, legal, and societal issues in advanced robotics and has worked in collaboration with important scientific laboratories and research centres in Europe and in the United States. Recently she has co-operated with the Robotics Department of the National Research Council in Italy in promoting knowledge and understanding of the new science of robotics.
In the coming decades in the Western world - in Japan, the United States, Europe - humanoid robots will be among us: companions to the elderly and to children, assistants to nurses, physicians, firefighters and workers. They will have eyes, human voices, hands and legs; skin to cover their gears, and brains with multiple functions. Often they will be smarter and quicker than the people they are meant to assist. Placing robots in human environments inevitably raises important issues of safety, ethics, and economics. Sensitive issues could also be raised by the so-called “robotics invasion” of many non-industrial application sectors, especially personal robots and surveillance and military applications.
To discuss this matter, let us start from a question: How is a new science born?
Let us think of chemistry and physics, sciences originating from many original and even weird sources, and later systematized by famous scientists whose mission was to order the knowledge into laws, principles and rules, applying mathematical methodology to structure the cluster of confirmed experiences and cases. Sciences are syncretic creatures, daughters of rationality, of non-rationality and of societal forces.
There is another important element of development: the boost in robotics applications, which in turn is driven by the so-called forces of the market. Huge investments are funnelled into it, from the 40 billion yen of Japan’s METI humanoids challenge to the 160 billion dollars of the US Future Combat Systems program.
Paolo Rossi writes:
Even Ove Arup, the leading Anglo-Danish engineer, said that: “Engineering is not a science. Science studies particular events to find general laws. The designing activity of the engineer uses those laws to solve particular problems. In this, it is closer to art or craft: problems are under-defined and there are many solutions, good, bad and indifferent. The art is finding a good solution through a compromise between means and ends. This is a creative activity, which requires imagination, intuition and deliberate choice.”(5)
I have had a hard time witnessing discussions on rights for robots, on robots’ superiority to humans, or on the evolution of robots into another, dominant biological species. The sad side of the story is that it is often we roboticists who are responsible for repeating, or fostering, such legends, out of narcissism or out of a fashion for playing the philosopher. I believe that we have to use clear thinking from now on. We would need other myths, images and metaphors, which are truly intrinsic and proper to robotics, and not to the anthropology of the human/automata tragedy and legend. Real robotics is far more exciting than fantasy!
Actually, human nature is not only the expression of our symbolic properties, but also the result of the relationships matured during our extra-uterine development (we are Nature AND Culture). There is a very important concept, embodiment, which means that an intelligence develops in a body and that its properties cannot be separated from it. A very enlightening article was written by José Galvan in the December 2003 issue of IEEE Robotics & Automation Magazine, “On Technoethics”, where it is said, among other things: “The symbolic capacity of man takes us back to a fundamental concept which is that of free will. Free will is a condition of man which transcends time and space. Any activity that cannot be measured in terms of time and space cannot be imitated by a machine because it lacks free will as the basis for the symbolic capacity”.
It is quite obvious that when a machine displays an emotion, this does not mean that it feels that emotion, but only that it is using an emotional language to interact with humans. It is the human who feels emotions, not the robot! And attributing emotions to the robot is precisely one of those human emotions.
In the field of human-robot interaction, there are many studies on all these topics. MIT’s Kismet is one; so are all the projects involving robot pet therapies (for instance, the robot Paro, designed by the Japanese roboticist Takanori Shibata(6)), and the robotic playmates which can help children with autism.
This is not a very original idea! For instance, in mathematics, the estimate of the variable x (the exact or “true” value) is referred to as “x hat”, while its measurement is indicated as “x tilde”.
This could be a first, very simple way to keep us aware of these ontological differences, and at the same time it can help us avoid flaws in our reasoning, like this:
For the sake of truth, it is necessary, even when we discuss the philosophy of our science, that we engineers apply the same rigour that Galileo recommended in his synthesis of the scientific method: “necessary demonstrations and sense experiences”.
A few lines about the “history” of Roboethics can be useful here to understand its aims and scope.
In 2005, EURON (European Robotics Research Network) funded the Research Atelier on Roboethics (the project leader was the Scuola di Robotica) with the aim of developing the first Roboethics Roadmap. The workshop on Roboethics took place in Genoa, Italy, 27th February - 3rd March 2006. The ultimate purpose of the project was to provide a systematic assessment of the ethically sensitive issues involved in robotics R&D, to increase the understanding of the problems at stake, and to promote further study and trans-disciplinary research. The Roboethics Roadmap - the result of the Atelier and of the following discussions and dissemination - outlines the multiple pathways for research and exploration in the field and indicates how they might be developed. The Roadmap embodies the contributions of more than 50 scientists, scholars and technologists from many fields of science and the humanities. It is also a useful tool for designing a robotics ethics, trying to embody the different viewpoints on cultural, religious and ethical paradigms converging on general moral assessments.
All these processes embodied in robots produce an intelligent machine endowed with the capability to express a certain degree of autonomy. It follows that a robot can behave, in some cases, in a way that is unpredictable to its human designers. Basically, the increasing autonomy of robots could give rise to unpredictable behaviours.
For instance, we have felt that problems such as those connected to the application of robotics within the military, the possible use of military robots against populations not provided with this sophisticated technology, problems of terrorism in robotics, and problems connected with bio-robotics, implants and augmentation were pressing and serious enough to deserve a focused and tailor-made investigation. It is clear that without a deep rooting of Roboethics in society, the premises for the implementation of artificial ethics in robots’ control systems will be missing.
On the other side, in Europe, within the ongoing process of cultural cohesion, the regulation and legislation of science and technology are assuming a foundational character for a new political community, the European Union, centred on the relationship between science and its applications, and on the community formed by scientists, producers and citizens. We can safely assume that, given the common classical origin of jurisprudence, the latter process could be helpful in influencing other cultures, for instance the moderate Arab world.
This means that Roboethics too - which is applied ethics, not theoretical ethics - is a daughter of our globalised world: an ethics which could be shared by most of the cultures of the world, and capable of being translated into international laws that could be adopted by most of the nations of the world.
Last but not least, the very concept of intelligence, human and artificial, is subject to different interpretations. Even within the fields of AI and Robotics it is a terrain of dispute; let us imagine how harsh the debate could be outside the circle of the inner experts.
This is precisely the mission that led us to start and to foster the Roboethics Programme, and to develop the Roboethics Roadmap. The basic idea was to build the ethics of robotics in parallel with the construction of robotics itself.
Let us consider one case. In the field of service robots we have robot personal assistants: machines which perform tasks ranging from cleaning to higher tasks like assisting the elderly, babies, disabled people, or students with their homework, up to entertainment robots. In this sector, the ELS issues to be analyzed concern the protection of human rights in the field of human dignity, privacy, and the position of the human in the control hierarchy (the non-instrumentalization principle). The right to human dignity implies that no machine should damage a human, and it involves the general procedures related to dependability. From this point of view, robotic personal assistants could raise serious problems related to the reliability of the robots’ internal evaluation systems and to the unpredictability of robots’ behavior. Another aspect to be taken into account, in the case of autonomous robots, is the possibility that these could be controlled by ill-intentioned people, who could modify the robot’s behavior in a dangerous and fraudulent way. Thus, designers should guarantee the traceability of evaluation/action procedures, and the identification of robots.
On a different level, we have to tackle the psychological problems of people who are assisted by robots. Lack of human relationships where personal connections are very important (e.g. in elderly care or edutainment applications), general confusion between the natural and the artificial, technological addiction, and - in the case of children - loss of touch with the real world are some of the psychological problems involved.
In particular, military robotics opens up important issues of two categories: a) technological; b) ethical.
In the case of robotic machines, their behaviour is affected by issues regarding the uncertainty of the stability of robot sensory-motor processes and other uncertainty questions. For this reason, in robotic systems that are designed to interact with humans, stability and uncertainty issues should be systematically and carefully analyzed, assessing their impact on moral responsibility and liability ascription problems, on physical integrity, and on human autonomy and robotic system accountability issues.
The very same military milieus have several times underlined the danger implied by the lack of reliability of robotic systems in a war theatre, especially when the urgency of quick decisions and the lack of clear intelligence about the situation require maximum control over one’s own forces.
The other side of the issue - also stressed by military spokesmen - is the high risk of an information security gap in military robotics. Autonomous robots employed in war theatres could be intruded upon, hacked, or attacked by viruses of several types, and become the enemy’s tools behind our backs.
For all these considerations, although very briefly summarized, I am deeply convinced that attributing a “license to kill” to a robot is a decision of such extreme gravity that no nation or community can take it alone. This question must be submitted to a deep and thorough international debate.
(1) Kuhn, Th., The Essential Tension: Tradition and Innovation in Scientific Research. The Third University of Utah Research Conference on the Identification of Scientific Talent, ed. C. W. Taylor. Salt Lake City: University of Utah Press, 1959.
(2) Kuhn, Th., idem.
(3) Siciliano, Bruno; Khatib, Oussama (eds.), Springer Handbook of Robotics, 2008.
(4) Paolo Rossi, Daedalus sive mechanicus: Humankind and Machines. Lecture at the EURON Atelier on Roboethics, Genoa, Feb-March 2006. In: http://www.scuoladirobotica.it/lincei/docs/RossiAbstract.pdf
(5) Ove Arup, 1895-1988. http://www.arup.com/arup/policies.cfm?pageid=1259
(7) Ph. Coiffet, “Machines and Robots: a Questionable Invasion in Regard to Humankind Development”, conference speech, International Symposium on Roboethics, 30th-31st January 2004, Villa Nobel, Sanremo, Italy.