Google's 16,000-processor neural network just wants to watch cat videos

The techno-wizards over at Google X, the company's R&D laboratory working on its self-driving cars and Project Glass, linked 16,000 processors together to form a neural network and then had it go forth and try to learn on its own. Turns out, massive digital networks are a lot like bored humans poking at iPads.

The pretty amazing takeaway here is that this 16,000-processor neural network, spread out over 1,000 linked computers, was never told to look for any one thing; entirely on its own, it discovered a recurring pattern centered on cat pictures.

This happened after Google presented the network with image stills from 10 million random YouTube videos. The images were small thumbnails, and Google's network sorted through them trying to learn what they had in common. What it found (and we have ourselves to blame for this) was a hell of a lot of cat faces.
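
For the curious, the mechanics behind "learning what they had in common" are worth a sketch. The published research behind this experiment used a sparse deep autoencoder, a network trained only to reconstruct its own input, so that useful features emerge as a byproduct. Below is a toy single-layer version in Python; the patch size, hidden-unit count, learning rate, and synthetic stand-in thumbnails are all illustrative assumptions, nowhere near Google's actual architecture or scale:

```python
# A minimal sketch of the unsupervised-learning idea, NOT Google's system:
# the real project trained a far larger sparse deep autoencoder across
# 1,000 machines. Patch size, hidden-unit count, learning rate, and the
# synthetic stand-in "thumbnails" below are all illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the unlabeled YouTube thumbnails.
images = rng.random((100, 32, 32))

def sample_patches(imgs, n=2000, size=8):
    """Crop random size x size patches; note that no labels are ever attached."""
    patches = np.empty((n, size * size))
    for i in range(n):
        img = imgs[rng.integers(len(imgs))]
        r = rng.integers(img.shape[0] - size)
        c = rng.integers(img.shape[1] - size)
        patches[i] = img[r:r + size, c:c + size].ravel()
    return patches - patches.mean(axis=1, keepdims=True)  # zero-mean each patch

X = sample_patches(images)

# One-hidden-layer autoencoder with tied weights: encode, decode, and nudge
# W to shrink reconstruction error. Nobody tells the model what to look for;
# feature detectors emerge as the rows of W (edge-like ones, on real images).
n_hidden, lr = 25, 0.01
W = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))

for epoch in range(5):
    for x in X:
        h = np.tanh(W @ x)             # encode
        x_hat = W.T @ h                # decode with the same (tied) weights
        err = x_hat - x                # reconstruction error
        dh = (W @ err) * (1 - h ** 2)  # backprop through tanh
        W -= lr * (np.outer(h, err) + np.outer(dh, x))

print("learned feature detectors:", W.shape)  # 25 features of 64 pixels each
```

Stack layers of this and feed in enough frames, and higher layers start responding to whole objects rather than patches.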

"We never told it during the training, 'This is a cat,'" Jeff Dean, a Google fellow working on the project, told the New York Times. "It basically invented the concept of a cat. We probably have other ones that are side views of cats."

The network itself does not know what a cat is the way you and I do. (It wouldn't, for instance, feel embarrassed being caught watching cat videos in the presence of other neural networks.) What it does grasp, however, is that the same thing keeps turning up across images, and if we gave it the word, it might well refer to that thing as "cat."

So, what's the big deal? Your computer at home is more than powerful enough to sort images. Where Google's neural network differs is that nobody told it what to sort for: it looked at these 10 million images, recognized the recurring pattern of cat faces, and pieced together the idea that it was looking at something specific and distinct. It had a digital thought.

Andrew Ng, a computer scientist at Stanford University who is co-leading the study with Dean, explained the appeal of a self-teaching neural network: "The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data." The size of the network matters, too, and the human brain is still "a million times larger in terms of the number of neurons and synapses" than Google X's simulated mind, according to the researchers.
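
To make Ng's contrast concrete, here's the kind of hand-engineered feature he's describing: a classic Sobel edge filter, designed by researchers rather than learned from data. The data-driven approach arrives at detectors like this (and, evidently, at cat faces) without anyone writing them down. Again a sketch, with a toy test image rather than real video frames:

```python
# The hand-engineered alternative Ng describes: an edge detector designed by
# researchers (here, the classic Sobel filter) rather than learned from data.
# Plain numpy; the test image below is a stand-in, not real video data.
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)  # responds to vertical edges

def filter2d(img, kernel):
    """Naive valid-mode 2-D filtering; fine for a 3x3 kernel."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(img[r:r + kh, c:c + kw] * kernel)
    return out

# A toy image: dark left half, bright right half -> one strong vertical edge.
img = np.zeros((16, 16))
img[:, 8:] = 1.0

edges = np.abs(filter2d(img, SOBEL_X))
print("strongest response at columns:", np.argmax(edges, axis=1))  # the boundary
```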

"It'd be fantastic if it turns out that all we need to do is take current algorithms and run them bigger," Ng added, "but my gut feeling is that we still don't quite have the right algorithm yet."

Via The New York Times
