Ethics of Military Robots February 10, 2010

Posted by emiliekopp in industry robot spotlight.

I recently became familiar with the DoD’s Unmanned Systems Integrated Roadmap, a document that forecasts the evolution and adoption of robot technologies in modern warfare. For a government document, it was actually a pretty interesting read.

Many people are timid when discussing military robots, and justifiably so. While most military robots are meant to perform tasks that are simply too dull, dirty, or dangerous to warrant the risk of human life (MULE robots, for instance, or robots that dispose of IEDs), the bulk of the mainstream media attention is directed at the robots with guns. And that’s when the references to Skynet come rolling in.

What happens when robots have guns? If something goes wrong, who is ultimately held responsible? The robot? The operator? The designer? The supplier of electromechanical parts? The chain of responsible parties could go on and on.

So we haven’t found the answer yet. But initiating the conversation is a good start.

P.W. Singer’s Wired for War has brought the conversation to the mainstream. The main point Singer addresses is that once you begin to move humans away from the battlefield (i.e., give the guns to robots), they become more willing to use force. So as you reduce the risk to human life on one side, you become more willing to shed human life on the other. Understandably scary.

Another resource I found incredibly interesting is a report prepared for the US Department of the Navy’s Office of Naval Research by California Polytechnic State University: Autonomous Military Robotics: Risk, Ethics, and Design. This report takes a more technical approach to understanding the ethics of robots on the battlefield. While it addresses many of the concerns Singer has brought to light, it also entertains the point that robots are “unaffected by the emotions, adrenaline, and stress that cause soldiers to overreact or deliberately overstep the Rules of Engagement and commit atrocities, that is to say, war crimes.” Of course, this assumes that humans are capable of programming robots to make ethically sound decisions on their own, which in turn warrants a walk down memory lane with Asimov’s 3 Laws of Robotics.

Bottom line: technology is a double-edged sword (thank you, Ray Kurzweil). There will be pros and cons to exponentially advancing technologies, especially battlefield robots. Yet we shouldn’t feel like we need to tiptoe around the issue. The more we talk about it, the better equipped we’ll be when decisions must be made.

3 Laws of Robotics: Part 2 of Series September 8, 2009

Posted by emiliekopp in robot fun.

Where NI employees take a stab at rewriting Asimov’s Three Laws of Robotics.

Click here for Part 1 of this series, where we learned that programming robots to keep Will Smith alive will prevent robotic apocalypse.

Here’s another entry that I particularly enjoyed, from Nick Hobbs, one of our Applications Engineering interns and a student at Olin College (I want to go to there).

  1. All robots made without a kill switch will kill you, shortly before killing themselves and months of your hard work.
  2. All robots are sufficiently self aware to know when they are participating in a live demo. They’re also spiteful enough to begin smoking the moment the demo begins.
  3. All engineers wish they were roboticists, if for no other reason than the fact that their parents would finally know what they do.

3 Laws of Robotics: Part 1 of Series August 20, 2009

Posted by emiliekopp in robot fun.

Thanks to a tweet from @RobotUprising, I figured this would be a timely post.

Recently, the NI Robotics Team had a social gathering (more like geek-fest) at the home of one of our LabVIEW developers (sweet pad, @brianhpowell). One of our fun robot-related activities was to try to recall Asimov’s Three Laws of Robotics, as close to word-for-word as possible. One guy was able to write down all 3, verbatim, plus the Zeroth Law. Impressive? Or just uber-dweeby? Not sure.

However, for most of us, it was a shot in the dark. I collected the responses and saw some pretty funny stuff. I thought I’d share one; I’ll share some more in future posts.

Here’s one of my favorite submissions, from Elben Shira, an R&D intern on the LabVIEW Core team:

  1. Thou shalt not develop artificial life that can develop organic life, for this might result in an infinite, recursive loop.
  2. Robots may not interfere in human disputes, for attempting to change human nature is a futile task.
  3. (and this is my favorite): Thou must keep Will Smith alive forever, even if it breaks Law 1 and/or Law 2; for with Will Smith alive, no robotic apocalypse will sustain.

Thanks for keeping the world safe, Hitch.