What exactly is it? Well, essentially it’s a neural network (NN) on silicon. A NN seeks to imitate the neurons and synapses of the brain. To be effective, though, a NN needs to be trained: a data set of inputs and desired outputs is fed into the network, and the weights between neurons are adjusted to reduce the error between the NN’s output and the desired output, repeating this for each training data point. Once trained, the network can be used on data without a priori knowledge of the outputs.
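To make that training loop concrete, here’s a minimal sketch in Python: a single artificial neuron with a sigmoid activation learning the AND function by gradient descent. The data set, learning rate, and epoch count are all my own illustrative choices, not anything from the announcement.

```python
import math

# Training set for AND: (inputs, desired output) pairs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # synaptic weights
b = 0.0          # bias
lr = 0.5         # learning rate (illustrative choice)

def forward(x):
    """Weighted sum of inputs passed through a sigmoid activation."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Repeatedly adjust the weights to reduce the error between the
# neuron's output and the desired output, for each training point.
for epoch in range(5000):
    for x, target in data:
        err = forward(x) - target
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b    -= lr * err

# After training, the neuron reproduces AND on its inputs.
print([round(forward(x)) for x, _ in data])  # → [0, 0, 0, 1]
```

A real NN stacks many such neurons in layers and propagates the error backward through all of them, but the core idea is the same: nudge each weight in the direction that shrinks the output error.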
What’s interesting about this announcement (other than the fact that they put a NN on silicon) is that the team claims they’ve extended Moore’s Law a few years. Moore’s Law says that the number of transistors on a chip doubles roughly every two years, which historically has meant ever-faster processors. The problem is that computer chips can only get so thin. In this video clip, Michio Kaku predicts that chips will hit that physical limit in about 20 years, and that we won’t see computers on the level of human brains until 50 to 100 years from now (he also claims that current computers have the intelligence of a “retarded cockroach”).
Anyways, I’m far from an expert on this topic, but the fact that this team is claiming to push back Moore’s Law with this technology is pretty interesting stuff. Dharmendra Modha, the project leader of this effort, says,
“Everyone else is playing within the [Moore’s Law] system,” he argued. “We’re changing the game.”