One of the most heated debates in AI concerns the development of so-called killer robots. Advocates of autonomous weapons argue that wars fought by machines instead of humans would be more humane. Human rights abuses committed by soldiers during conflict are certainly all too common, but could machines really do better? While many think this is an outrageous idea, others think not only that machines could but that they must. 'Humans are currently slaughtering other humans unjustly on the battlefield,' says Ronald Arkin, a robotics expert at the Georgia Institute of Technology in Atlanta. 'I can't sit idly by and do nothing. I believe technology can help.'

The development of lethal autonomous weapons systems, or killer robots, is accelerating, with many of the world's armies looking for ways to keep their soldiers out of the line of fire. Sending robots in place of human soldiers would save lives, particularly for the nation possessing such advanced technology. And unlike humans, robots will not break the rules.

The issue is also rising up the international agenda. In the past few years, the United Nations has repeatedly discussed lethal autonomous weapons systems, and with strident opposition coming from groups such as the Campaign to Stop Killer Robots, there are signs that the discussions are becoming more urgent. Nine nations have called for a ban on lethal autonomous weapons systems, and many others have stated that humans must retain ultimate control of robots.
Robots already play several roles on the battlefield. Some carry equipment, others dismantle bombs and still others provide surveillance. Remote-controlled drones let their operators control attacks on targets from thousands of kilometres away.

The latest machines, though, take drones to the next level. They are capable of selecting and engaging targets with little or no human intervention; sometimes the authorization to open fire is all that remains under human control. The US Navy's Phalanx anti-missile system on board its Aegis ships can perform its own 'kill assessment', weighing up the likelihood that a target can be successfully attacked. UK firm BAE is developing a crewless jet called Taranis. It can take off, fly to a given destination and identify objects of interest with little intervention from ground-based human operators. The jet is a prototype and carries no weapons, but it demonstrates the technical feasibility of such aircraft. Meanwhile, Russia's 'mobile robotic complex', a crewless tank-like vehicle that guards ballistic missile installations, and South Korea's Super Aegis II gun turret are reportedly able to detect and fire on moving targets without human supervision. The Super Aegis II can pinpoint an individual from 2.2 kilometres away.

Arms manufacturers dislike talking about the details; the specifics are generally classified. What is clear, though, is that technology is no longer the limiting factor. In the words of a spokesman for UK missile manufacturer MBDA, 'technology is not the likely restriction as to what is feasible in the future'. Instead, autonomous weapons will be constrained by policy, not capability.
Starfish killer
In 2016 robots started to shoot to kill, no questions asked. This wasn't a RoboCop remake (see Figure 4.3) but real life on Australia's Great Barrier Reef, where a killer robot is being deployed against coral-wrecking starfish. Called COTSbot, it is one of the world's most advanced autonomous weapons systems, capable of selecting targets and using lethal force without any human involvement.

A starfish-killing robot may not sound like an internationally significant development, but releasing it on to the reef would cross a Rubicon. COTSbot amply demonstrates that we now have the technology to build robots that can select their own targets and autonomously decide whether to kill them. The potential applications in human affairs, from warfare to law enforcement, are obvious.

Against this background, COTSbot is a good thing: a chance to test claims about autonomy, accuracy, safety, hackability and so on in a relatively benign environment. It also offers an opportunity to demonstrate that autonomous robots can do good as well as bad. But the real significance is that it shows RoboCop is getting ever closer to reality.
Rules of engagement
What, then, are the relevant rules of war? There are no laws specifically covering robots, but all weapons must comply with existing conventions. One key principle is that civilians and civilian property must not be intentionally targeted. Weapons must also be capable of discriminating between civilians and soldiers. And the use of force must be proportional: the expected military advantage of an attack must not be outweighed by the collateral damage it causes.