Thursday, March 13, 2008

The Economist's look at Jeff Hawkins

The Economist is finally mentioning Jeff Hawkins's work, in its current Technology Quarterly.

Mr Hawkins's fascination with the brain began right after he graduated from Cornell University in 1979. While working at various technology firms, including Grid Computing, the maker of the first real laptop computer, he became interested in the use of pattern recognition to enable computers to recognise speech and text. In 1986 he enrolled at the University of California, Berkeley, in order to pursue his interest in machine intelligence. But when he submitted his thesis proposal, he was told that there were no labs on the campus doing the kind of work he wanted to do. Mr Hawkins ended up going back to Grid, where he developed the software for the GridPad, the first computer with a pen-based interface, which was launched in 1989.
Unfortunately, the piece is focused much more on the man than on Numenta's work.

And I, of course, couldn't resist commenting:
Hawkins is certainly right in his "grand vision", but he is also certain to stumble into three serious problems that will take decades to solve.

First, he believes that "pattern-recognition is a many-to-one mapping problem". That is simply wrong, as I pointed out in the journal "Artificial Intelligence" ages ago. If he is a rapid learner, he will backtrack from that mistake soon; otherwise he may spend ages on this classic error.
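
To make that concrete, here is a toy sketch in Python (the ambiguous glyph and the contexts are my own illustration, not an example from Hawkins or from my paper): the same raw input can demand different readings depending on its context, so perception cannot be a fixed function from input patterns to categories.

    # Toy illustration: the same ambiguous glyph reads as "B" among
    # letters and as "13" among numbers. No fixed many-to-one mapping
    # from the raw input to a category can capture this, because the
    # right answer depends on context, not on the input alone.

    AMBIGUOUS = "l3"  # a shape that could be a "B" or a "13"

    def perceive(glyph, context):
        if glyph != AMBIGUOUS:
            return glyph
        if all(c.isalpha() for c in context):
            return "B"   # read as a letter among letters
        if all(c.isdigit() for c in context):
            return "13"  # read as a number among numbers
        return "?"       # no clear context: genuinely ambiguous

    print(perceive(AMBIGUOUS, ["A", "C"]))    # prints: B
    print(perceive(AMBIGUOUS, ["12", "14"]))  # prints: 13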

Secondly, his HTM model currently rests on a statistical model built from numerous design decisions. That by itself would not be problematic, if not for the fact that ALL nodes (and here we are talking about gigantic numbers of them) follow precisely the same statistical rule. The problem with that approach is that the slightest, imperceptible error in a parameter setting or a design decision will propagate rapidly and amplify into utter gibberish.
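
A minimal numerical sketch of that amplification worry (the toy network, depth, and gain values are my own assumptions, not Numenta's actual parameters): when one shared rule is applied at every level of a deep hierarchy, a 1% error in a single shared parameter compounds level by level.

    import numpy as np

    rng = np.random.default_rng(0)
    depth, width = 50, 100

    # One weight matrix, reused at every level: ALL nodes follow
    # precisely the same statistical rule.
    W = rng.standard_normal((width, width)) / np.sqrt(width)

    def forward(x, gain):
        for _ in range(depth):        # the same rule at every level
            x = np.tanh(gain * W @ x)
        return x

    x = rng.standard_normal(width)
    a = forward(x, gain=1.500)        # the intended shared parameter
    b = forward(x, gain=1.515)        # the same parameter, off by 1%

    # The 1% error is baked into all 50 levels, so the two outputs
    # drift further apart at each level instead of staying 1% apart.
    print(np.linalg.norm(a - b) / np.linalg.norm(a))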

Finally, it is virtually impossible with current technology to "debug" Numenta's approach. We are talking about gigantic matrices filled with all kinds of numbers in every spot... how does one understand what the system is doing by looking at a few thousand cells (at most) at a time?
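
A back-of-the-envelope sketch of why (the cell count and the inspection rate are illustrative assumptions of mine, not Numenta's figures):

    # Illustrative numbers only: a brain-scale model with ten billion
    # cells, inspected by a heroic debugger who can eyeball 5,000
    # cells per hour, around the clock.
    cells = 10_000_000_000
    cells_per_hour = 5_000

    hours = cells / cells_per_hour
    years = hours / (24 * 365)
    print(f"{years:,.0f} years to glance at each cell once")  # ~228 years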

I have taught PhD courses on "cognitive technology", and I do believe that a new information-processing revolution will hatch, perhaps within a decade. However, we are dealing with much harder territory here than creating successful Silicon Valley startups. The tiniest error propagates throughout the network and is rapidly amplified. It is impossible to debug with current technology. And some of his philosophical perspectives are simply plain wrong.

While I do think Hawkins will push many advances, including by firing up youngsters and hackers leaving Web 2.0, there are others who are building on a much more promising base (google "Harry Foundalis", for instance).

2 comments:

Post-Psaikik said...

hey,

You said: "And some of his philosophical perspectives are simply plain wrong."

Could you elaborate more on that?

Thanx

Alexandre Linhares said...

Well, for starters, the view that a mind has to discover "causes" in the outside world. I love Hawkins and all, but he's moving at engineering speed here, bypassing the numerous struggles philosophers have (and have had) concerning causality.

The second point is that he mentions once or twice that perception is some kind of one-to-one or many-to-one mapping, and that is just far, far from the truth. I wrote at length about this precise issue in Artificial Intelligence, back in 2000.

In any case, I haven't met the guy, so maybe he's changed his views a bit. I'm certain that his is some of the best work in brain modeling. But his speed is not adequate to the task he faces. He seems to be moving on to classifying pictures of dogs versus pictures of cats. That's like Santos Dumont trying to go to the moon, in my opinion. Well, if he does pull that one off, he will certainly have nailed it.

My group is competing, but we have a much slower timeline of progress. We want to bridge two theories, and there are many levels needed to bridge them. We expect to work on two or three of these levels in 2009--2010, and see where we get. We expect to have serious results somewhere around 2015. Before huge advances are made, there will be no cat-versus-dog classification system on this planet, outside an animal's brain or a DNA machine.