U.S. Department of Defense giving robots a coded conscience

It appears that the U.S. military's new wave of killer drones has made even the military itself a little wary. Now that we have drones capable of carrying out offensive strikes with little or no human control, the military, it seems, has finally decided that it might help everybody out if these killer robots had a sense of morality. Calling in philosophers, computer scientists and roboticists from all over the nation, the Department of Defense (DoD) has actually begun looking into giving robots morals.

The inclusion of philosophers in the project might strike you as odd, but given that humanity itself doesn't exactly have a unified concept of morality, you can probably see how a bit of debate before the coding begins might help. To decipher what it means to act in a moral manner, the group will conduct theoretical research and lab experiments. The ideal result of this research is to locate the part of the human brain that handles morality and translate its workings into code, basically creating a "cut and paste" conscience.

Equipped with the morality coding, robots would perform a quick ethics check before taking lethal or violent action. Even choices like stopping to help an injured soldier would be weighed against the importance of the mission at hand. It's territory that plenty of sci-fi stories have tackled, but strangely never quite in this manner. Usually, rule number one in robot ethics is to do no harm to living beings. With combat robots slated to be the first to get the DoD's ethics programming, there's really no telling how emotionally intelligent robots will evolve going forward.
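To make that "ethics check" idea a little more concrete, here's a minimal sketch of what such a gate might look like in code. Everything in it is a hypothetical illustration: the Action class, the ethics_check function and the numeric scores are invented for this example and don't reflect whatever criteria the DoD's researchers actually end up using.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action the robot is considering (illustrative only)."""
    description: str
    mission_value: float  # how much the action advances the mission, 0 to 1
    moral_cost: float     # estimated ethical cost (e.g. risk of harm), 0 to 1

def ethics_check(action: Action, margin: float = 0.0) -> bool:
    """Approve the action only if its mission value clearly outweighs its moral cost.

    A toy heuristic standing in for the kind of pre-action check described above.
    """
    return action.mission_value - action.moral_cost > margin

# Weighing a strike against a detour to help an injured soldier (made-up scores).
strike = Action("engage target", mission_value=0.7, moral_cost=0.9)
detour = Action("stop to help injured soldier", mission_value=0.3, moral_cost=0.1)

for candidate in (strike, detour):
    verdict = "proceed" if ethics_check(candidate) else "defer to human operator"
    print(f"{candidate.description}: {verdict}")
```

In this toy version the strike fails the check and gets kicked back to a human operator, while the detour passes; the real research question, of course, is where those scores and thresholds would come from in the first place.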

Let's just hope our shaky human morals don't trigger some sort of I, Robot-esque morality war between humanity and our constructs.

Via ExtremeTech
