It happens quickly—more quickly than you, being human, can fully process.
A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there’s too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.
Your robot, the one you paid good money for, has chosen to kill you. Better that, its collision-response algorithms decided, than a high-speed, head-on collision with a smaller, non-robotic compact. There were two people in that car, to your one. The math couldn’t be simpler.
That's the beginning of a PopSci article discussing a recent opinion piece at Wired on one of the most disturbing questions in robot ethics: If a crash is unavoidable, should an autonomous car choose who it slams into?
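The "math" in that excerpt is a bare occupant count. As a purely illustrative sketch of the naive utilitarian rule the article describes (every name and number below is invented; no real vehicle stack is claimed to work this way):

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_deaths: int  # people put at mortal risk by this choice

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # Minimize expected deaths, with no extra weight for the
    # vehicle's own occupant -- the rule the excerpt imagines.
    return min(options, key=lambda m: m.expected_deaths)

# The opening scenario: veer left into a two-occupant compact,
# or steer right over the cliff with one occupant aboard.
options = [
    Maneuver("veer_left_head_on", expected_deaths=2),
    Maneuver("steer_right_off_cliff", expected_deaths=1),
]
print(choose_maneuver(options).name)  # -> steer_right_off_cliff
```

Everything a real system would have to grapple with, such as uncertain probabilities and whether the owner's life deserves extra weight, is exactly where the ethical debate begins.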
Here's a second robotic scenario, in combat:
A group of soldiers has wandered into the kill box. That's the GPS-designated area within which an autonomous military ground robot has been given clearance to engage any and all targets. The machine's sensors measure wind speed, humidity, and barometric pressure. Then it goes to work.
The shots land cleanly, for the most part. All of the targets are down.
But only one of them is in immediate mortal danger—instead of suffering a leg wound, like the rest, he took a round to the abdomen. Even a robot’s aim isn’t perfect.
As with the autonomous car crash scenario, everything hinges on that level of technological certainty. A human soldier or police officer isn’t legally or ethically expected to aim for a target’s leg. Accuracy, at any range or skill level, is never a sure thing for mere mortals, much less ones full of adrenaline.
But if that level of precision can be built into a machine, expectations will inevitably change. A manufacturer may be able to program systems to cripple targets instead of executing them. And if that becomes the clear standard, that robots should actively reduce human deaths, even among the enemy, wouldn't you also have to accept that your car kills you instead of two strangers?
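If machine precision really were that dependable, the "reduce deaths" expectation could be expressed the same way in both scenarios: prefer the least lethal option that still works. A hypothetical sketch, with the accuracy and lethality figures invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AimPoint:
    name: str
    hit_probability: float  # chance the shot lands where intended
    lethality: float        # chance a landed shot is fatal

def least_lethal_effective(points: list[AimPoint],
                           required_hit_probability: float) -> AimPoint:
    # Among aim points accurate enough to stop the target reliably,
    # pick the one least likely to kill; if none qualifies, fall
    # back to the most accurate option.
    effective = [p for p in points if p.hit_probability >= required_hit_probability]
    if not effective:
        return max(points, key=lambda p: p.hit_probability)
    return min(effective, key=lambda p: p.lethality)

points = [
    AimPoint("center_mass", hit_probability=0.99, lethality=0.60),
    AimPoint("leg", hit_probability=0.97, lethality=0.05),
]
# The premise: a machine can promise 0.97 on a leg shot; a human cannot.
print(least_lethal_effective(points, required_hit_probability=0.95).name)  # -> leg
```

The same minimization logic that spares the enemy soldier's life is the logic that sends your car over the cliff.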
The Department of Defense issued a directive in late 2012, "Autonomy in Weapon Systems" (DoD Directive 3000.09), another sign of how seriously the military is taking this. Among other things, it "Establishes guidelines designed to minimize [emphasis added] the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements." But the directive also requires that these systems "Function as anticipated in realistic operational environments against adaptive adversaries."
Coping with "adaptive adversaries" implies at least a degree of autonomy, and that is a discomforting notion for some.
DARPA, the Defense Advanced Research Projects Agency, has been actively funding robotics research for years, and this past summer it showcased "one of the most advanced humanoid robots ever built": a stocky 6' 2" behemoth of a bot named Atlas (video below). Its creators at Boston Dynamics (a company recently acquired by none other than Google, Inc.) say it is designed for disaster response, such as nuclear and chemical incidents. Atlas carries no weapons, but it's hard not to blink your eyes and imagine a potential military future.