Robot Killers In The Sky?

The Case Against Robots With License to Kill

Battlefield drones and robots capable of choosing their targets and firing without any human oversight won’t arrive for a few decades, experts say. But a new Human Rights Watch report calls for an international ban on fully autonomous “killer robots” before they ever become a part of military arsenals around the world.

The thousands of drones and robots that the U.S. military already has deployed alongside troops are all controlled remotely by human operators, who can take responsibility if the machines accidentally injure or kill civilians. Fully autonomous robots capable of choosing targets and firing weapons on their own may come online within the next 20 or 30 years, if not sooner.

“Giving machines the power to decide who lives and dies on the battlefield would take technology too far,” said Steve Goose, the Arms Division director at Human Rights Watch. “Human control of robotic warfare is essential to minimizing civilian deaths and injuries.”

“Fully autonomous weapons” operating without oversight won’t have the artificial intelligence, human judgment or empathy necessary to distinguish between armed soldiers and cowering civilians in murky battlefield conditions, Human Rights Watch says. Its joint report with Harvard Law School’s International Human Rights Clinic argues robots could never follow rules of international humanitarian law.

The report, released on Nov. 19, recommends the following steps to prevent a “killer robots” future:

Ban development, production and use of fully autonomous weapons through an international agreement.
Adopt national laws to ban the creation and use of fully autonomous weapons.
Keep watch on technologies and components that could lead to fully autonomous weapons.
Make a professional code of conduct to oversee research and development of autonomous robotic weapons.

The report also highlights concerns about the possible use of fully autonomous robots by dictators to brutally suppress their civilian populations, and about the easier decision to go to war when leaders aren’t worried about troop casualties.

Robots may lack human empathy, but history already has shown that human soldiers are capable of committing the world’s worst atrocities despite their supposed humanity. Ronald Arkin, a robotics researcher at Georgia Tech, even has argued that fully autonomous robots could make the battlefield safer: They wouldn’t fall prey to the fatigue that can result in misidentifying targets, or to the anger that could lead to sadistic abuse of prisoners and civilians.

The U.S. military spends about $6 billion each year on developing and deploying thousands of drones and robots. Its huge arsenal includes ground robots rolling or walking along under direct human control, Reaper drones that can fly parts of their mission without human control, and robot boats capable of firing missiles.

Automatic defense weapons such as the U.S. Navy’s Phalanx turret can fire thousands of rounds at incoming missiles without a human order and with only the barest human supervision. Israel’s “Iron Dome” defense detects incoming threats and asks human operators to make a split-second decision on whether to give the command to fire missiles that can intercept enemy rockets and artillery shells.

Both Israel and South Korea also have deployed robot sentry turrets that could, in theory, operate on automatic mode.

This story was provided by TechNewsDaily, a sister site to LiveScience.

The “Lantirn” fire control system has the ability to acquire, identify, and engage targets using available weapons of its choosing, and will do so unless overridden by a human. It is (or was) used on aircraft. I wouldn’t call it autonomous, though, because it can’t discriminate between friend and foe; it assumes that anything you point it at is a foe. The system has been around for a long time, but I haven’t worked on it since the mid-’90s, so who knows what it’s morphed into. When it came out, it would have been a bit too large for a smaller land vehicle.


Gents, you shouldn’t have any illusions about that sort of “master weapon.” Flying drones are effective only against a weak and obsolete enemy air-defense system, as with Israel’s strikes against Hamas. Any modern missile is nothing like the crude home-made rockets that Hamas uses so helplessly, and “effective drones” are just another propaganda tale that military corporations use to pump up their billion-dollar contracts. Drones are effective only in a relatively narrow field of application. You can trust me as a retired air-defense officer: any modern anti-aircraft missile would finish off such a “robotic drone” with 95 percent probability, first of all because technical progress works for missiles first. :)

25 years ago, I was given a temporary assignment to Ireland’s Civil Service Commission as an Interview Board Supervisor. As part of my (typically minimal) training, I was assigned as an observer on an interview board assessing candidates for the post of Air Traffic Controller. One Board member, an Army Air Corps officer, had a standard question for candidates as to what they thought of totally automatic (computerised) air navigation for civilian aircraft. He got a variety of answers, but was notably non-committal as to his conclusions regarding the answers.

After a day or two, I asked him what, exactly, his purpose was in asking these questions. In reply, he said that he marked down anybody who suggested that wholly automatic/computerised navigation of civilian airliners was acceptable. “Mad,” he said, “you can trust machines a long way, but human input is always necessary to deal with their faults.” Several serious air accidents in the civilian sphere since then have proved his point.

When it comes to what might, broadly, be described as flying bombs, we are a very, very long way from achieving a level of dependability that would justify the use of fully automatic flying bombs. Furthermore, the history of “smart” aerial weapons does not suggest that they are very good at living up to the high claims for their accuracy, even where a high level of “human input” is involved in their operation. Yours from Under the Table, JR.