Jürgen Altmann on Unmanned Systems and Arms Control
by Gerhard Dabringer
Sunday, 7 March 2010

Jürgen Altmann, researcher and lecturer at the University of Dortmund, is one of the founding members of the International Committee for Robot Arms Control. Since 2003 he has been a deputy speaker of the Committee on Physics and Disarmament of the Deutsche Physikalische Gesellschaft (DPG, the society of physicists in Germany), and he currently directs the project “Unmanned Armed Systems – Trends, Dangers and Preventive Arms Control” at the Chair of Experimentelle Physik III at Technische Universität Dortmund.

How and why did you get interested in the field of military robots?

I have done physics-based research for disarmament for 25 years. One strand concerned automatic sensor systems for co-operative verification of disarmament and peace agreements. My second, more interdisciplinary, focus is the assessment of new military technologies from the viewpoints of peace and international security, and the possibilities of preventive arms control. In 2000–2001 the German Research Association Science, Disarmament and International Security (FONAS) did joint projects on preventive arms control; in that context I studied potential military uses of micro-systems technology (Altmann 2001). Already in that research I looked into the problem of military robots, then mostly small and very small ones. When I later investigated military applications of nanotechnology, a very broad field, I also examined uses in uninhabited vehicles with sizes ranging from large to extremely small (Altmann 2006). Limitations for such vehicles figured prominently in my recommendations for preventive arms control. Aware of the increasing number of countries developing and producing uninhabited air vehicles, of the large efforts for uninhabited ground and water vehicles, and of the rising trend to equip uninhabited vehicles with weapons, we proposed a research project, which was granted in 2009.

Currently you are directing the project on “Unmanned Armed Systems - Trends, Dangers and Preventive Arms Control”. Could you elaborate on the focus of your research?

This project – funded by the German Foundation for Peace Research (DSF) for 1.5 years – has four goals:

  1. Compile the status in research, development and deployment of uninhabited armed systems;
  2. Describe the technical properties of uninhabited armed systems to be expected in the next twenty years with the approximate times of their introduction;
  3. Assess the systems to be expected under criteria of preventive arms control;
  4. Analyse limitation options and verification possibilities.

These goals (with main focus on uninhabited aerial vehicles, UAVs) will be pursued in interdisciplinary research with considerable scientific-technical content. The results are to be published in a monograph.

You are also one of the founding members of the International Committee for Robot Arms Control (ICRAC). What were your motivations to set up the Committee and what do you hope to achieve by it?

At present we are four scientists from various disciplines: robotics, philosophy, physics/peace research. We are worried by the accelerating trend to arm uninhabited military vehicles, by the high numbers of non-combatants killed in present US and UK remote-control attacks in Iraq, Afghanistan and Pakistan, and by the seriously discussed prospect that soon computers may decide when and whom to kill. We see dangers for the laws of warfare – discrimination and proportionality demand assessment of a complex war situation which, for the foreseeable future, artificial-intelligence systems will likely not be able to make. When the US near-monopoly of armed UAVs is broken, additional dangers can be foreseen: from the undermining of arms-control treaties via the destabilisation of the situation between potential adversaries to proliferation and possible use by terrorists. Politically, the prospect of sending fewer human soldiers and using mostly uninhabited combat systems may raise some states' inclination to go to war.
We hope to raise awareness of the dangers connected to armed uninhabited vehicles among the public as well as with decision makers. The goal is to prevent an unconstrained global arms race. For this, the important arms-producing states need to agree on mutual limitations with adequate verification mechanisms. Based on our founding statement, we want to develop concrete proposals for such limitations and hope that some states will take the initiative. For presenting and discussing concepts we plan to convene an international expert workshop on robot arms control in September 2010 in Berlin.

What are the recommendations of the Committee?

They are contained in its founding statement:
“Given the rapid pace of development of military robotics and the pressing dangers that these pose to peace and international security and to civilians in war, we call upon the international community to urgently commence a discussion about an arms control regime to reduce the threat posed by these systems.
We propose that this discussion should consider the following:
- Their potential to lower the threshold of armed conflict;
- The prohibition of the development, deployment and use of armed autonomous unmanned systems; machines should not be allowed to make the decision to kill people;
- Limitations on the range and weapons carried by “man in the loop” unmanned systems and on their deployment in postures threatening to other states;
- A ban on arming unmanned systems with nuclear weapons;
- The prohibition of the development, deployment and use of robot space weapons.”

The founding of the ICRAC generated considerable media interest. What kind of responses has the Committee received from the international community and fellow researchers?

From governments, not many up to now. But committee members are regularly invited to present their arguments at conferences, including ones organised by or for the military. Among the few other researchers worldwide who have written on the potential problems of armed uninhabited vehicles we feel general support. This includes robot ethicists. The vast community of robotics and artificial-intelligence researchers has mostly not yet really taken up the problem of killing robots. We hope that this will change with a new robot-ethics book which covers military uses in three chapters (Capurro/Nagenborg 2009), with our upcoming workshop and with related publications.

Where do you see the main challenges for the international community regarding the use of armed unmanned systems by the military? What are the specific challenges of autonomous systems as compared to current telerobotic systems?

The main challenge lies in deciding whether the present trend should continue and expand to many more countries and to many more types of armed uninhabited vehicles (in the air, on and under water, on the ground, also in outer space), or whether efforts should be made to constrain this arms race and limit the dangers connected to it. Here not only governments but also non-governmental organisations and the general public should become active.
Autonomous systems obviously would open many new possibilities for war by accident (possibly escalating up to nuclear war) and for violations of the international laws of warfare. A human decision in each single weapon use should be the minimum requirement.

Do you think the Missile Technology Control Regime could play a part in the non-proliferation of UAV technologies?

Yes, it does so already – its limitations concern UAVs (including cruise missiles) capable of carrying a payload of 500 kg over a range of 300 km. For UAV systems with autonomous flight control/navigation or beyond-visual-range remote control and aerosol-dispensing mechanisms, there is neither a payload nor a range threshold. These rules could be expanded beyond aerosol dispensing. However, supplier-side export-control regimes such as the MTCR do not encompass all developer, producer and exporter countries, and they do not limit the armaments of the regime members themselves. Truly effective would be export controls embedded in comprehensive prohibitions valid for all relevant countries, that is, in arms-control and disarmament treaties, as is the case with biological and chemical weapons. Limits on armed uninhabited vehicles will need to be more differentiated and pose some definitional issues, but with the understanding of states that such limits are in their enlightened national interest, the detailed rules could be worked out. Some general ideas have been published by members of our Committee (Altmann 2009, Sparrow 2009).

Regarding international humanitarian law, would you think there is a need for additional legislation concerning the deployment of unmanned systems?

The biggest problem is posed by autonomous attack decisions. In principle, the requirements of discrimination and proportionality would suffice to rule these out for one to two decades, because artificial intelligence will not reach the level of human reasoning – the standard of international humanitarian law – for at least that time. However, it is to be feared that military reasons and political motives will lead to autonomy in weapon use much earlier; thus I think an explicit legal requirement that a human make each single weapon-release decision is needed. For remotely controlled systems a self-destruct mechanism in case of communication failure should be mandatory. Further rules will probably be needed – this should be the subject of legal research. Legal research would also be helpful in finding out whether video images as the sole real-time information are sufficient for compliance with the laws of armed conflict, and whether specific rules are needed here.

In your work you have stressed the threats autonomous armed systems can pose to arms-control treaties and to international humanitarian law. What would be the most pressing problems at the moment?

Seen from today, with a detailed analysis still pending, armed uninhabited vehicles – autonomous or not – would undermine nuclear-reduction treaties (INF Treaty, START successor) if they were used as new nuclear-weapon carriers. The Treaty on Conventional Armed Forces in Europe would be endangered by armed ground vehicles outside of the Treaty definitions (of tanks or armoured combat vehicles) or by disagreement about which armed UAVs count as combat aircraft or attack helicopters (for some more information see Altmann 2009).
Most pressing are the issues of international humanitarian law. Already now, remote-control UAV attacks in Iraq, Afghanistan and Pakistan – directed from thousands of kilometres away, based only on images from a video camera – lead to many civilian deaths, so that compliance with the requirements of discrimination and proportionality is doubtful. With armed UAVs the only possible action is to shoot; soldiers on site would have more options – checking identities, searching for weapons, taking people into custody.
Even more problems would be created by autonomous attack – the delegation of the authority to select targets to computers. If such autonomous armed uninhabited vehicles are introduced within one or two decades, one can expect a marked increase in civilian casualties.
This could be prevented by a prohibition of autonomous attack. At least as important are efforts to reduce the likelihood of war in the first place – with respect to the issue at hand by preventive arms control for armed uninhabited vehicles, on a more general level by general limitations of weapons and armed forces, combined with political measures of reducing confrontation.

As you noted, the use of unmanned systems can affect the decision to go to war. Do you think, with the possibility to wage war without putting one’s own troops at risk, one of the principles of just war theory - war being the last resort (ultima ratio) - might be challenged?

This is not my area of expertise, but the thought suggests itself.

Apart from questions regarding the right to go to war (ius ad bellum), there is also the question of military necessity of actions in an armed conflict. Without the “man in the loop”, and even if it is ensured that the target is a legitimate one, do you think autonomous systems should or could ever be entrusted with decisions as how, when and even if to attack such a target?

From a purely scientific view one can argue that autonomous systems could only be entrusted with such decisions if and when they had proven that they can assess complex situations in war at a level comparable to that of a capable human commander. The slow pace of robotics/artificial-intelligence development over the last fifty years and the scepticism of credible roboticists about progress in the coming decades lead me to the conclusion that this requirement will likely not be fulfilled in the next one or two decades. This conclusion is corroborated by the time frame envisaged for the realisation of the “ultimate goal of the RoboCup Initiative”, namely a team of humanoid robot soccer players winning against the World-Cup winner: “mid-21st century”. If at some future time robotic systems consistently demonstrated better performance than humans, then one could argue that IHL and the ethics of war would even demand replacing humans.
However, robots/artificial intelligence at or beyond the human level would raise fundamental ethical questions far beyond war and could bring existential dangers. Consideration of the interests of humankind and the precautionary principle could well lead to a rational decision for a general prohibition of the development of such systems. Ensuring compliance with such wide-ranging rules – similar ones will probably also be required for some future developments in nanotechnology – may need a transformation of the international system: moving away from trying to provide security by national armed forces towards a system with a democratically controlled supranational authority holding a monopoly of legitimate violence. Otherwise, perceived military necessities and military resistance against far-reaching inspection rights could prevent nations from agreeing on strong limits on research and development, even though the highest human interests would demand them.

In the discussion of the NATO air strike near Kunduz in Afghanistan in September 2009, it has been argued that the use of UAVs might have helped to reduce the number of civilian casualties. Do you think the limited use of UAVs might actually increase the battlefield awareness of soldiers and eventually help to achieve proportionality and target discrimination at a higher level?

In principle it could. Unfortunately not all details of that attack are available. From media accounts it seems that the commanding officer consciously decided to have the two stolen fuel trucks bombed together with all people surrounding them, despite several offers of the bomber pilots to first overfly the scene to scare people away. So in this case the use of armed UAVs would probably not have made a difference.
Generally, having a weapon at hand where a UAV is observing could allow more precise targeting and reaction to short-term changes on site. But this could in principle also be provided by piloted aircraft. Video observation from very far away brings the possibility of misjudgement, as many incidents of killing the wrong persons in Afghanistan and Pakistan demonstrate. But pilots on board aircraft have limited sensory input, too.
A final problem is that this awareness is only guaranteed in a very asymmetric situation: when one side has UAVs available while the other does not. The “fog of war” would be much thicker if both sides possessed (armed) UAVs, jammed each other’s communication links, etc.

In the last years you also have worked on projects concerning non-lethal / less-lethal weapon systems (e.g. acoustic weapons, a millimetre-wave skin-heating weapon). Where do you see the potential and the challenges of these systems, especially if they are mounted on autonomous weapon platforms?

Acoustic weapons do not really exist. An existing long-distance loudspeaker system (the so-called Long Range Acoustic Device from the USA) can be turned to higher intensity, which would result in permanent hearing damage if unprotected persons are exposed at distances below, say, 50 m for longer than a few seconds (Altmann 2008). This demonstrates the main problem with acoustic weapons in the audio range: the transition from annoyance or ear pain to lasting damage is very fast. (Infrasound, on the other hand, has no relevant effect and is difficult to produce at high intensities.) So if real acoustic weapons were deployed on UAVs and used to attack a crowd, mass incidence of permanent hearing damage would be the probable outcome.
Concerning millimetre-wave weapons for producing pain by skin heating, the existing U.S. Active Denial System (with 500 to 700 m range, tested but not yet deployed) is very big, requiring a medium truck (Altmann 2008). Research is underway to develop an even stronger system to be carried on aircraft – it is doubtful whether that would be used without pilots and operators on board. If so, the general problems of applying force over a distance without being on the scene would be aggravated. The same would hold if other “non-lethal” weapons – say, tasers or, more traditionally, water cannons – were used from uninhabited (air or ground) vehicles.
With “non-lethal” weapons, much depends on the scenario of use (armed conflict? peace-keeping operation? a crowd? a few criminals?), on the context and on the general culture (democratic control of security forces?) in the respective society. One can suspect that putting them on uninhabited vehicles would increase, rather than decrease, the level of violence.


References

Altmann, J. (2001). Military Uses of Microsystem Technologies – Dangers and Preventive Arms Control. Münster: agenda.
Altmann, J. (2006). Military Nanotechnology: Potential Applications and Preventive Arms Control. Abingdon/New York: Routledge.
Altmann, J. (2008). Millimetre Waves, Lasers, Acoustics for Non-Lethal Weapons? Physics Analyses and Inferences. Forschung DSF No. 16. Osnabrück: Deutsche Stiftung Friedensforschung, http://www.bundesstiftung-friedensforschung.de/pdf-docs/berichtaltmann2.pdf.
Altmann, J. (2009). Preventive Arms Control for Uninhabited Military Vehicles. In Capurro/Nagenborg (2009), http://e3.physik.tu-dortmund.de/P&D/Pubs/0909_Ethics_and_Robotics_Altmann.pdf.
Capurro, R., Nagenborg, M. (Eds.) (2009). Ethics and Robotics. Heidelberg: AKA/IOS.
Sparrow, R. (2009). Predators or Plowshares? Arms Control of Robotic Weapons. IEEE Technology and Society, 28 (1): 25–29.