There is today an immense flux of innovation on the web. Entrepreneurs are finding untold riches in all sorts of domains: from Skype to Google to YouTube to blogs to Buzzword to Facebook, things that were unimaginable 10 years ago have become part of everyday life.
But cognitive scientists are just not there. Not yet, I feel.
But I believe that the next huge wave of innovation will come from cognitive technologies. Bridging the gap from machine information-processing to human information-processing is something so large-scale that, as soon as the first big-hit cognitive engineering enterprise comes up, venture capitalists and scientists and engineers from all walks of life will start jumping on board.
We know a lot about the brain. We know a lot about perception. We know a lot about language and vision, and we have all sorts of psychological experiments detailing human behavior and cognition. But we are still at the stage of a pre-foundational science. There is widespread debate about, well, just about everything. Consider this:
- is logic essential?
- is neuroanatomy essential?
- is "a body" essential (as in the embodied paradigm)?
- is the mind modular?
- is the computer a good metaphor for the mind?
- is the mind a "dynamical system"?
- is syntax essential to understand language?
I believe that a good starting point is to study human intuition. I don't study logic, or the brain, or syntax. I'd like to understand, and build a computational model of, something as simple as Shane Frederick's bat-and-ball problem: "If a bat and a ball together cost 110, and the bat costs 100 more than the ball, how much does the ball cost?"
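For concreteness, here is the arithmetic the intuitive answer skips (a minimal sketch in Python, my choice of language since the post has no code of its own):

```python
# Bat-and-ball problem: bat + ball = 110, and bat = ball + 100.
# The intuitive answer, "ball = 10", satisfies only the first constraint:
intuitive_ball = 10
assert intuitive_ball + (intuitive_ball + 100) != 110  # 120 -- intuition fails

# Solving both equations: substituting gives 2*ball + 100 = 110, so ball = 5.
ball = (110 - 100) / 2
bat = ball + 100
assert bat + ball == 110 and bat - ball == 100
print(ball)  # 5.0
```

The interesting question for a cognitive model is not the algebra, of course, but why "10" leaps to mind first.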
I have built a computational model of human intuition in chess, the Capyblanca project. It is still short of a full theory of human thought and expertise, of course--and, to my regret, it has been rejected without review by two top journals, with the same reply: "we're not interested in chess, send it to some specialized journal". I replied to one editor that it was not really about chess, but about general cognition, abstract thought, meaning, the works--and that the model provided essential clues towards a general theory (he then said I should resubmit, "re-framing" the manuscript towards that view).
The human mind has not evolved to deal with chess, or to watch soap operas, or to learn to read this sentence: книга находится на таблице ("the book is on the table"). The human mind evolved to find meaning. It is an incredible meaning-extracting machine. And it evolved to grasp that meaning really fast, because grasping meaning fast is a life-or-death matter. When we do find apparently immediate meaning, that's intuition.
Sometimes intuition "fails", as in the bat-and-ball problem. But, as Christian pointed out the other day, "that's not a bug, it's a feature". Intuition is a way for us to restrict the space of possibilities really rapidly; it only "fails" because, without those mechanisms, we would all be "Shakey the robot" or "Deep Blue"--combinatorial monsters exploring huge spaces of possibility (that is, of course, exactly what economists think we are).
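A back-of-envelope sketch of what those combinatorial monsters face, assuming the common rough estimate of ~35 legal moves per chess position:

```python
# Why exhaustive search is a "combinatorial monster": with roughly 35 legal
# moves per position (a common rough estimate), the number of lines to
# examine grows as 35**depth.
branching = 35
for depth in (2, 4, 10):
    print(depth, branching ** depth)
# 10 plies already means about 2.76 quadrillion positions -- while a human
# master's intuition narrows the search to a handful of candidate moves.
```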
If we have a model of how intuition works, the next step up is to include learning, in the general sense. How did that intuition come about? That's what Jeff Hawkins is now trying to do. I have an enormous appreciation for his work, and the very same objective: to build a general-purpose cognitive architecture, suitable for chess, for vision, and one day, maybe during our lifetime, for watching soap operas. Hawkins is, I think, spot on about the importance of feedback, the issue of representation invariance (which is what Capyblanca is all about), and repeating hierarchical structure. On the other hand, I feel his emphasis on the brain is counter-productive, and I have some criticisms of his theory which I might put in writing someday.
But let's get back to cognitive scientists as entrepreneurs. We have been having wave after wave of revolutions in computing: from mainframes to command-line PCs to graphical interfaces to the web of today.
If you can connect people either to other people (Skype, Facebook), to information (Google, Wikipedia), or to things (Amazon, eBay) better than others do, you will find untold riches in that space. But current computer science, left alone, cannot provide some much-needed connections. And a huge empty space lies open for cognitive scientists of the computational sort.
As an example, imagine a web browser for the whole planet. You might be thinking your web browser can "go" to the whole planet. It can, but you can't. You can't go to a Nigerian website, then a Russian one, then an Iranian one, then a Brazilian one, and understand what is there. Machine translation sucks. And as entrepreneur Paul Graham puts it, your system has to pass the "does not suck" test. We don't need perfect translation. But it has to be better than the garbage we have today.
We are far from that goal. Current systems have a lot of computer science and hardly any cognitive science. One set of systems translates word for word (basically); another works statistically, having "seen" loads of previously translated texts and using some advanced techniques to guess what the output should look like. If you're translating a news text about Russian politics, you might get the idea that it is about a law, and that the law brings some tradeoffs, but you can't always get the exact feel of whether the article is for or against the law. Current systems can give you a vague idea of what a text is about. But what machine translation needs is to deal with concepts, meaning, experience, learning, culture, connotation explosions--all topics for cognitive scientists. All difficult, of course. But remember: it doesn't have to be perfect. It has to pass the "does not suck" test.
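To see why the word-for-word approach misses meaning, here is a toy illustration (the tiny lexicon and function name are hypothetical, invented just for this sketch):

```python
# Toy word-for-word "translator": each word maps fine, the sentence does not.
lexicon = {
    "книга": "book",
    "находится": "is-located",
    "на": "on",
    # Russian distinguishes стол (the furniture) from таблица (a table of
    # data); picking between "table" senses needs world knowledge that a
    # word-level mapping simply does not have.
    "таблице": "table/chart",
}

def word_for_word(sentence):
    # Translate token by token, passing unknown words through unchanged.
    return " ".join(lexicon.get(word, word) for word in sentence.split())

print(word_for_word("книга находится на таблице"))
# -> book is-located on table/chart
```

Every token gets an entry from the dictionary, yet nothing in the pipeline knows whether the book sits on furniture or on a spreadsheet--that disambiguation lives in concepts and experience, not in the lexicon.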
Translation is one example. There are many other crucial areas in which cognitive technologies could have an impact. And let's not forget the lesson of history: generally, the killer applications were not conceived when the technology was first introduced.
I could be wrong in my own vision of how to model cognition. In all probability I am wrong on some counts; who knows, maybe I am wrong in all philosophical and technical details. But someone out there is bound to be absolutely right in turning this Rubik's cube. And in the coming decades we should start to see cognitive scientists having a bold impact on technological innovation, far beyond the borders of our journals and conferences.