Jürgen Altmann on Unmanned Systems and Arms Control
by Gerhard Dabringer
Sunday, 7 March 2010
Jürgen Altmann, researcher and lecturer at the University of Dortmund, is one of the founding members of the International Committee for Robot Arms Control. Since 2003 he has been a deputy speaker of the Committee on Physics and Disarmament of the Deutsche Physikalische Gesellschaft (DPG, the German Physical Society), and he currently directs the project on “Unmanned Armed Systems – Trends, Dangers and Preventive Arms Control” located at the Chair of Experimentelle Physik III at Technische Universität Dortmund.
How and why did you get interested in the field of military robots?
I have done physics-based research for disarmament for 25 years. One strand concerned automatic sensor systems for co-operative verification of disarmament and peace agreements. My second, more interdisciplinary, focus is the assessment of new military technologies from the viewpoint of peace and international security, and the possibilities of preventive arms control. In 2000–2001 the German Research Association Science, Disarmament and International Security (FONAS) carried out joint projects on preventive arms control. In that context I studied potential military uses of micro-systems technology (Altmann 2001). Already in that research I looked into the problem of military robots, then mostly small and very small ones. When I investigated military applications of nanotechnology, a very broad field, I examined uses in uninhabited vehicles with sizes from large to extremely small (Altmann 2006). Limitations on such vehicles figured prominently in my recommendations for preventive arms control. Aware of the increasing number of countries developing and producing uninhabited air vehicles, of the large efforts for uninhabited ground and water vehicles, and of the rising trend to equip uninhabited vehicles with weapons, we proposed a research project, which was granted in 2009.
Currently you are directing the project on “Unmanned Armed Systems - Trends, Dangers and Preventive Arms Control”. Could you elaborate on the focus of your research?
This project – funded by the German Foundation for Peace Research (DSF) for 1.5 years – has four goals:
These goals (with main focus on uninhabited aerial vehicles, UAVs) will be pursued in interdisciplinary research with considerable scientific-technical content. The results are to be published in a monograph.
You are also one of the founding members of the International Committee for Robot Arms Control (ICRAC). What were your motivations to set up the Committee and what do you hope to achieve by it?
At present the Committee consists of four scientists from various disciplines: robotics, philosophy, physics/peace research. We are worried by the accelerating trend to arm uninhabited military vehicles, by the high numbers of non-combatants killed in present US and UK remote-control attacks in Iraq, Afghanistan and Pakistan, and by the seriously discussed prospect that soon computers may decide when and whom to kill. We see dangers for the laws of warfare – discrimination and proportionality demand an assessment of a complex war situation which, for the foreseeable future, artificial-intelligence systems will likely not be able to make. Once the US near-monopoly on armed UAVs is broken, additional dangers can be foreseen: from the undermining of arms-control treaties via the destabilisation of the situation between potential adversaries to proliferation and possible use by terrorists. Politically, the prospect of sending fewer human soldiers and using mostly uninhabited combat systems may increase some states' inclination to go to war.
What are the recommendations of the Committee?
They are contained in its founding statement:
The foundation of the ICRAC generated considerable media interest. What kind of responses has the Committee received from the international community and fellow researchers?
From governments, not many up to now. But committee members are regularly being invited to present their arguments at conferences, including ones organised by the military or for the military. Among the few other researchers worldwide who have written on potential problems from armed uninhabited vehicles we feel general support. This includes robot ethicists. The vast community of robotics and artificial-intelligence researchers has mostly not yet really taken up the problem of killing robots. We hope that this will change with a new robot-ethics book which covers military uses in three chapters (Capurro/Nagenborg 2009), with our upcoming workshop and related publications.
Where do you see the main challenges for the international community regarding the use of armed unmanned systems by the military? What are the specific challenges of autonomous systems as compared to current telerobotic systems?
The main challenge is in deciding whether the present trend should continue and expand to many more countries and to many more types of armed uninhabited vehicles (in the air, on and under water, on the ground, also in outer space), or whether efforts should be taken to constrain this arms race and limit the dangers connected to it. Here not only governments but also non-governmental organisations and the general public should become active.
Do you think the Missile Technology Control Regime could play a part in the non-proliferation of UAV technologies?
Yes, it does so already – its limitations concern UAVs (including cruise missiles) capable of carrying a payload of 500 kg over a range of 300 km. For UAV systems with autonomous flight control/navigation or beyond-visual-range remote control and aerosol-dispensing mechanisms, there is neither a payload nor a range threshold. These rules could be expanded beyond aerosol dispensing. However, one-sided export-control regimes such as the MTCR do not encompass all developer/producer/exporter countries, and they do not limit the armaments of the regime members themselves. Export controls would be truly effective only if embedded in comprehensive prohibitions valid for all relevant countries, that is, in arms-control and disarmament treaties, as is the case with biological and chemical weapons. Limits on armed uninhabited vehicles will need to be more differentiated and pose some definitional issues, but with the understanding of states that such limits are in their enlightened national interest the detailed rules could be worked out. Some general ideas have been published by members of our Committee (Altmann 2009, Sparrow 2009).
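The payload/range criterion mentioned above can be made concrete in a minimal sketch. This is only an illustration of the two numeric thresholds cited in the interview (500 kg payload, 300 km range); the function name and inputs are hypothetical and do not reflect any official MTCR classification scheme, which involves many further criteria.

```python
# Illustrative sketch only: checks the two MTCR Category I thresholds
# cited in the text (payload of at least 500 kg over at least 300 km).
# Real MTCR classification involves many additional technical criteria.

def meets_category_i_thresholds(payload_kg: float, range_km: float) -> bool:
    """Return True if a vehicle meets BOTH numeric thresholds."""
    return payload_kg >= 500 and range_km >= 300

# A hypothetical UAV with a 450 kg payload and 1000 km range would fall
# below the payload threshold, however long its range:
print(meets_category_i_thresholds(450, 1000))  # False
```

The point of the conjunction is worth noting: a long-range vehicle with a sub-threshold payload (or vice versa) escapes this particular limitation, which is one reason the interview argues such export-control thresholds are coarse instruments.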
Regarding international humanitarian law, would you think there is a need for additional legislation concerning the deployment of unmanned systems?
The biggest problem is posed by autonomous attack decisions. In principle, the requirements of discrimination and proportionality would suffice to rule these out for one to two decades, because artificial intelligence will not, at least for this time, achieve the level of human reasoning – and this is the standard of international humanitarian law. However, it is to be feared that military reasons and political motives will lead to autonomy in weapon use much earlier; thus, I think, an explicit legal requirement that a human make each individual weapon-release decision is needed. For remotely controlled systems a self-destruct mechanism in case of communication failure should be mandatory. Further rules will probably be needed – this should be the subject of legal research. Legal research would also be helpful in finding out whether video images as the sole real-time information are sufficient for compliance with the laws of armed conflict, and whether specific rules are needed here.
In your work you have stressed the threats autonomous armed systems can pose to arms-control treaties and to international humanitarian law. What would be the most pressing problems at the moment?
Seen from today, with a detailed analysis still pending, armed uninhabited vehicles – autonomous or not – would undermine nuclear-reduction treaties (INF Treaty, START successor) if they were used as new nuclear-weapon carriers. The Treaty on Conventional Armed Forces in Europe would be endangered by armed ground vehicles outside of the Treaty definitions (of tanks or armoured combat vehicles) or by disagreement about which armed UAVs count as combat aircraft or attack helicopters (for some more information see Altmann 2009).
As you noted, the use of unmanned systems can affect the decision to go to war. Do you think that, with the possibility of waging war without putting one's own troops at risk, one of the principles of just war theory – war being the last resort (ultima ratio) – might be challenged?
This is not my area of expertise, but the thought suggests itself.
Apart from questions regarding the right to go to war (ius ad bellum), there is also the question of the military necessity of actions in an armed conflict. Without the “man in the loop”, and even if it is ensured that the target is a legitimate one, do you think autonomous systems should or could ever be entrusted with decisions as to how, when and even whether to attack such a target?
In a purely scientific view one can argue that autonomous systems could only be entrusted with such decisions if and when they had proven that they can assess complex situations in war at a level comparable to that of a capable human commander. The slow speed of robotics/artificial-intelligence development during the last fifty years and the scepticism of credible roboticists about progress in the coming decades lead me to the conclusion that this requirement will likely not be fulfilled in the next one or two decades. This conclusion is corroborated by the time frame envisaged for realisation of the “ultimate goal of the RoboCup Initiative”, namely a team of humanoid robot soccer players winning against the World-Cup winner, which is “mid-21st century”. If at some future time robotic systems consistently demonstrated better performance than humans, then one could argue that IHL and the ethics of war would even demand replacing humans.
In the discussion of the NATO air strike in Afghanistan near Kunduz in September 2009, it has been suggested that the use of UAVs might have helped to reduce the number of civilian casualties. Do you think the limited use of UAVs might actually increase the battlefield awareness of soldiers and could eventually help to achieve proportionality and target discrimination at a higher level?
In principle it could. Unfortunately not all details of that attack are available. From media accounts it seems that the commanding officer consciously decided to have the two stolen fuel trucks bombed together with all people surrounding them, despite several offers of the bomber pilots to first overfly the scene to scare people away. So in this case the use of armed UAVs would probably not have made a difference.
In recent years you have also worked on projects concerning non-lethal / less-lethal weapon systems (e.g. acoustic weapons, a millimetre-wave skin-heating weapon). Where do you see the potential and the challenges of these systems, especially if they are mounted on autonomous weapon platforms?
Acoustic weapons do not really exist. An existing long-distance loudspeaker system (the so-called Long Range Acoustic Device from the USA) can be turned to a higher intensity, which would result in permanent hearing damage if unprotected persons are exposed at distances below, say, 50 m for longer than a few seconds (Altmann 2008). This demonstrates the main problem with acoustic weapons in the audio range: the transition from annoyance or ear pain to lasting damage is very fast. (Infrasound, on the other hand, has no relevant effect and is difficult to produce at high intensities.) So if real acoustic weapons were deployed on UAVs and used to attack a crowd, mass incidence of permanent hearing damage would be the probable outcome.
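The physics behind the narrow margin between annoyance and damage can be sketched with the standard free-field attenuation formula for a point source, where sound pressure level falls by about 6 dB per doubling of distance. The 150 dB reference value below is illustrative, not a measured specification of any device, and the function name is hypothetical.

```python
import math

def spl_at_distance(spl_ref_db: float, d_ref_m: float, d_m: float) -> float:
    """Free-field point-source sound pressure level at distance d_m,
    given a reference level spl_ref_db measured at d_ref_m
    (inverse-square law: -20*log10 of the distance ratio)."""
    return spl_ref_db - 20 * math.log10(d_m / d_ref_m)

# Illustration: an assumed source of 150 dB at 1 m still delivers
# roughly 116 dB at 50 m (150 - 20*log10(50) ≈ 116), well above
# sustained-exposure safety levels.
print(round(spl_at_distance(150.0, 1.0, 50.0)))
```

This logarithmic fall-off is why distance alone provides so little protection: increasing the range tenfold removes only 20 dB, so a source loud enough to be useful as a weapon at a distance remains dangerously loud over a wide area.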
Altmann, J. 2001. Military Uses of Microsystem Technologies – Dangers and Preventive Arms Control, Münster: agenda.