One of the world's smartest computers recently took an intelligence test. The test was exactly the same sort that you or I might take to measure our IQ. The computer, a ConceptNet 4 A.I. system, did pretty darn well for a non-breathing form of intelligence: it scored as well as a four-year-old.
Great. What an interesting benchmark. But that's not the whole story — not by a long shot. After the study was complete, lead author Robert Sloan noticed some irregularities in the data:
"If a child had scores that varied this much, it might be a symptom that something was wrong."
The particular area of learning where the A.I. faltered was reasoning. It could factually describe a situation, but might not know why that situation had occurred in the first place. For instance: robot has a knife, human is dead, how this came to pass, it cannot say.
This sort of finding might be of particular interest to another group of scientists across the pond — those currently testing whether robots are evil. Whether robots do turn out to be evil or not, we're pretty sure that constructing a horde of super-strong, potentially mentally unbalanced four-year-olds is a bad idea no matter how you slice it.