Thursday, May 10, 2007

What does free will have to do with decision-making?

A lot, perhaps. That's what I'm presenting at Eurocogsci 2007 this year. Here are the slides:

Sunday, May 6, 2007

More on the Psychology Today article

OK, Psychology Today is out with its cover story on intuition. One thing I've mentioned is that intuition is best viewed as an almost immediate situation understanding. Two qualifiers. Almost immediate, because there's a lot of processing going on behind the scenes, in our unconscious machinery; to us, of course, it merely appears immediate. Situation understanding, because intuition is always meaningful; it is always filled with meaning. This stands in stark contrast to those who define intuition as "immediate knowledge". The word knowledge does not begin to convey the meaning involved in an intuitive realization.

Where does that meaning come from? In short, from the explosion of connotations. We never think about a "triangle" in isolation. What does a "mental" triangle look like? Does it look like a triangle in your brain? Does it look like a triangular pattern of neuron firings? No, it doesn't. It's just a regular type of neuron firing. So how can we have an abstract visualization of something like a triangle (or an atom, a kangaroo, or a skyscraper)? The answer is that we activate a concept in our brain; but that activation is not the key to meaning; the key is what it does. We activate the concept triangle, and a host of connotations explodes, each with another concept and its own associated connotations: pointy things, the number three, polygons, line segments, closed figures, geometry, Pythagoras, angles, pyramids, the triumvirate, and so forth. These awakened, associated concepts are what create meaning. Not the word. Not the definition. Words and definitions are meaningless--that's why mathematics is hard.
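This "explosion of connotations" reads much like spreading activation over a semantic network. Here's a toy sketch of the idea; the network, weights, decay factor, and threshold below are all invented for illustration, not taken from any particular model:

```python
# Toy spreading-activation sketch: activating one concept awakens its
# neighbors, and their neighbors, with decaying strength. The "meaning"
# of the seed concept is the whole halo of awakened concepts.
NETWORK = {
    "triangle": ["three", "polygon", "angle", "pyramid", "Pythagoras"],
    "polygon": ["line segment", "closed figure", "geometry"],
    "pyramid": ["Egypt", "pointy things"],
    "three": ["triumvirate"],
}

def activate(concept, strength=1.0, decay=0.5, threshold=0.2, seen=None):
    """Return {concept: activation} for everything the seed awakens."""
    if seen is None:
        seen = {}
    # Stop when activation fades below threshold, or when this concept
    # was already reached with at least this much strength.
    if strength < threshold or seen.get(concept, 0.0) >= strength:
        return seen
    seen[concept] = strength
    for neighbor in NETWORK.get(concept, []):
        activate(neighbor, strength * decay, decay, threshold, seen)
    return seen

halo = activate("triangle")
print(sorted(halo, key=halo.get, reverse=True))
```

Activating "triangle" alone awakens a dozen concepts at decreasing strengths; the point is that the word by itself carries none of this--the meaning lives in the halo.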

The whole problem with the symbolic school of cognitive science and AI has been its faith in clear definitions: words (symbols) and definitions (procedural knowledge). Clear definitions are great in the proper context, but hey, that's not the way our minds work.

Whenever we think and reason, intuition is the guiding force. If reason is the captain of the mental ship, then intuition is the singing mermaid who lures us toward one information-processing trajectory (and not others). For very experienced people, that trajectory is usually a great one to follow, for they know where the bad mermaids sing, and instantly avoid them. The rest of us just go along for the ride and find out whether the pathway is a dead end. (If it is, we can expand our rolodex--what a great metaphor!)

Just something I haven't mentioned, but wish I had.


We've got two new journal papers under review. The first six billion people to ask get them for free! Limited supplies!

Well, sorry about that...

Here they are:

Cognitive Reflection: the ‘Premature Temperature Convergence’ Hypothesis

By J. Silva and A. Linhares

We present a new hypothesis concerning cognitive reflection and the relationship between System 1 and System 2, corresponding roughly to intuition and reason. The hypothesis postulates a tighter integration between the systems than the common framework of separate modules implies. If the systems are tightly coupled, as we propose here, an explanation of cognitive reflection may rest on the premature convergence of an ‘entropy’, or ‘temperature’, parameter.
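To give a feel for what "premature temperature convergence" might mean, here is a toy sketch using the classic bat-and-ball problem (a bat and a ball cost $1.10; the bat costs $1.00 more than the ball). Everything here--the candidate answers, the cooling schedule, the consistency check--is invented for illustration and is not the model from the paper:

```python
import random

# Toy sketch: at high temperature the system explores candidate answers;
# as temperature falls it commits. If temperature converges too fast,
# the system freezes on the fluent, intuitive first guess before the
# deliberate consistency check gets a chance to run.

def consistent(ball):
    """Deliberate (System 2) check of a candidate ball price."""
    bat = ball + 1.00
    return abs(bat + ball - 1.10) < 1e-9 and abs(bat - ball - 1.00) < 1e-9

def answer(cooling_rate, seed=0):
    rng = random.Random(seed)
    temperature = 1.0
    candidate = 0.10              # the fluent, intuitive first guess
    while temperature > 0.05:
        if rng.random() < temperature:       # still hot: keep exploring
            candidate = rng.choice([0.10, 0.05, 0.01])
        elif consistent(candidate):          # cool enough: check, commit
            return candidate
        temperature *= cooling_rate
    return candidate              # temperature converged: answer frozen
```

With a fast cooling rate the loop runs only a few times, so the intuitive 0.10 tends to get frozen in; slow cooling leaves time for exploration and for the check to settle on the consistent 0.05.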


On the nature of chess intuition: Manifesto for a renaissance

by A. Linhares

How could Kasparov compete? Perhaps the unasked question of the Deep Blue versus Kasparov debate was: how could Kasparov play at the same level as a massively parallel machine that coldly computed up to 330 million tree nodes per second? What is the information processing behind human intuition like? The purpose of this paper is to explore new ideas for a research agenda to model human chess intuition and intelligence. We first present results from psychology on chess players’ abstract perception of strategically similar scenarios, and analyze why current cognitive models may be unable to handle such perception. We then consider how scientific progress should be measured: is move quality still the standard to aim for, when the goal is not to construct a world champion, but to understand and model human intuition? Finally, we propose a proof-of-concept model, in the form of a computational architecture, which may account for several crucial features of human intuition: concentration of attention on relevant aspects, avoidance of the combinatorial explosion, perception of similarity at a strategic level, and a global understanding of how a scenario may evolve.