Autonomous military robots

In an earlier blog, ‘A robot in every home’, I commented on some of the issues related to the development of robotics software, in particular in relation to open standards and safety-criticality. That was in the context of domestic and industrial robots. Now it seems that the issue causing debate in the scientific community is the prospect of deploying autonomous decision-making military robots (‘Robot future poses hard questions’, BBC News).

There have been significant advances in the development of robotic systems for military applications in recent years, with the DARPA Grand Challenges in 2004 and 2005 proving successful in encouraging the advancement of autonomous systems for battlefield environments. Later this year, the DARPA Urban Challenge, as its name suggests, will present different technical challenges, as the autonomous vehicles will need to manoeuvre in a mock city environment, "executing simulated military supply missions while merging into moving traffic, navigating traffic circles, negotiating busy intersections, and avoiding obstacles". Some of the competitors have already developed advanced sensors for this challenge, and it’s claimed that, in addition to being used in autonomous military vehicles, these systems could provide benefits for civilian systems, including driver assistance. However, as I mentioned in my blog ‘Can Automotive learn from Avionics Safety?’, I think there’s an important distinction between passive driver assistance systems and autonomous systems, or those which take control away from the driver – the difference being a matter of safety.

In my view, there are two issues here:

  1. Is the software 100% reliable for the functions it is designed to perform (i.e. is it safe, and has it been tested properly)?
  2. What limits or safeguards can be placed on the system beyond its defined operating parameters? (A minimal sketch of what I mean follows below.)
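
To make that second point concrete, here is a minimal sketch in Python of the kind of safeguard I have in mind: an independent monitor that sits between the planner and the actuators and vetoes any command outside the defined operating envelope. The names, limits, and fallback behaviour here are all hypothetical; this illustrates a principle, not a real vehicle API.

```python
# A minimal sketch (not any real vehicle API) of a runtime "safety monitor"
# that wraps the autonomous controller and vetoes any command that falls
# outside the system's defined operating parameters. All names, limits, and
# the fallback behaviour are hypothetical.

from dataclasses import dataclass

@dataclass
class OperatingEnvelope:
    max_speed_mps: float      # hypothetical speed limit, metres/second
    max_steering_deg: float   # hypothetical steering-angle limit, degrees

@dataclass
class Command:
    speed_mps: float
    steering_deg: float

SAFE_STOP = Command(speed_mps=0.0, steering_deg=0.0)

def safety_monitor(cmd: Command, envelope: OperatingEnvelope) -> Command:
    """Pass the command through only if it stays inside the envelope;
    otherwise fall back to a known-safe state (here, a full stop)."""
    within_limits = (
        0.0 <= cmd.speed_mps <= envelope.max_speed_mps
        and abs(cmd.steering_deg) <= envelope.max_steering_deg
    )
    return cmd if within_limits else SAFE_STOP

# Usage: whatever the planner proposes, the monitor has the final say.
envelope = OperatingEnvelope(max_speed_mps=15.0, max_steering_deg=30.0)
proposed = Command(speed_mps=22.0, steering_deg=5.0)  # exceeds the speed limit
assert safety_monitor(proposed, envelope) == SAFE_STOP
```

The design point is that the monitor is deliberately simple: the simpler the final gatekeeper, the more feasible it is to test it exhaustively – which goes straight back to the first question above.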

These are important questions when you take a human being out of the loop, because despite all our flaws, we can still make decisions in unanticipated scenarios, whereas pre-programmed computers, by definition, cannot. In the case of an unmanned ground vehicle, an unanticipated scenario could result in a collision, and possibly even fatalities.

However, more worrying is the prospect of armed autonomous military robots which could make errors with catastrophic consequences. This is no longer in the realm of sci-fi: as the BBC story mentions, Samsung has developed the SGR-A1 armed robot (The Register). There’s an interesting promotional video on Techblog showing a soldier surrendering to an SGR-A1, which I find strangely reminiscent of the OCP board member surrendering to the ED-209 robot (wikipedia) in the movie Robocop, immortalized by the line "You have 20 seconds to comply…".

Let’s consider a few scenarios for a moment. These robots have advanced image-sensor systems, but can they reliably distinguish a friend from a foe, or an armed combatant from an unarmed soldier surrendering? Could they identify a child playing with a toy gun? We have a long way to go before robots achieve the reasoning capabilities of Star Trek’s Lieutenant Commander Data (wikipedia).
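
To put that worry in concrete terms, here is a deliberately simplified sketch in Python. Everything in it is hypothetical (the labels, the scores, the threshold); the point it illustrates is that a system forced to pick from pre-programmed categories will confidently misclassify exactly the scenarios nobody anticipated, and no confidence threshold changes that.

```python
# A deliberately simplified sketch of the friend-or-foe problem. Every label,
# score, and threshold here is hypothetical: the point is that a classifier
# forced to choose among pre-programmed categories will confidently misfile
# the scenarios nobody anticipated, and a confidence threshold cannot fix that.

KNOWN_CLASSES = {"friendly", "armed_combatant", "surrendering", "civilian"}
ENGAGE_THRESHOLD = 0.99  # hypothetical; no value makes misclassification impossible

def decide(label: str, confidence: float) -> str:
    """Map a classification to an action. The fixed label set is the crux:
    the system must answer with one of these categories, however
    unanticipated the actual scene is."""
    assert label in KNOWN_CLASSES
    if label == "armed_combatant" and confidence >= ENGAGE_THRESHOLD:
        return "engage"  # the fully autonomous step this post argues against
    return "hold_and_escalate_to_human"

# A child with a toy gun has no class of its own; visually the scene may map
# to "armed_combatant" with high confidence, so the threshold offers no
# protection at all.
print(decide("armed_combatant", 0.997))  # -> engage (catastrophically wrong)
print(decide("surrendering", 0.85))      # -> hold_and_escalate_to_human
```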

So, I am with the scientists on this one: I think we need an informed debate…