Wednesday, December 26, 2007

Is the feedback nucleus social?

In the baby-carriage auto-feedback post, we speculated that auto-feedback may originate in the reproduction of a social feedback nucleus. See the hilarious demonstration in the video below.

Sunday, December 16, 2007

Thursday, December 13, 2007

The Everests of Artificial Intelligence

[updated after a new suggestion]

Here are some of the Everests for Computational Cognitive Modeling. Some people call them AI-complete. That might not be the best term, as it extends the notion of NP-Completeness, which is a precise, formal, mathematical notion, into a very blurry territory.

Anyway, I've ordered them from easier to harder...

Here are my feelings when delving into theory... (hat tip to her).

Do you know of another problem that's missing from this list? I'd appreciate additions and suggestions in the comments.

The "Cloud" is looking ugly

Here are interesting articles about Numenta, from Wired and CNN Money. Very worthwhile.

I've been convinced by Drama 2.0 that Web 2.0 is now a bubble. Damn it; I'm now bearish.

However, here's my longer-term view. If history is any indication, a timeline of transformation in the information revolution can be nicely broken into decades.

  • 1945. There was, like, this war, somewhere, and some dudes, like, invented these big machines called computers. It was great because they could now kill other guys much more conveniently.
  • 1955. The rise of Business Computing. IBM builds fifty-six SAGE computers at the price of US$30 million each.
  • 1965(-1). IBM, with the System/360 mainframe, produced approximately 70% of all computers.
  • 1975. Popular Electronics shows off the Altair and wakes up Gates, Allen, and the two Steves. The microcomputer revolution is born.
  • 1985(-1). Apple Computer launches the Macintosh. The GUI, ease-of-use, and WYSIWYG revolution is born.
  • 1995. Netscape IPO. The Web revolution is born. Everybody but me becomes a billionaire.
  • 2005. YouTube is born, and it epitomizes Web 2.0. Also, on September 30th this "Tim O'Reilly" dude writes a piece summarizing his view of Web 2.0.
I'm now very bearish, and I believe a big market crunch is coming. It won't hurt Grandma, because she's not investing in stocks; there are no IPOs out there. It will hurt the VCs and the big players: Google, Facebook, MySpace, Yahoo, and so on. The crunch will suck capital dry, and only the VCs in the green will be left standing. If I could advise any of the big players, I'd suggest saving some billions now (Euros, not Dollars). They would thank me later.

But remember this: while everyone thought the web was a fad, quietly, Google was being born and growing profitable. Google's IPO awakened everyone, again.

While I think we're heading for a very nasty crash in 2008 or 2009, in the longer term, say around 2015, I think we will see serious progress. If history is any guide, perhaps a new revolution is on the way. Here's my guess.
  • 2015. A theory of meaning is complete. The cognitive information-processing revolution is triggered.
That's around 8 years from now. Who knows? With all due respect to all the smart folks working on these cognitive things, I think that perhaps we might turn out to be one of its drivers.

We have been turning that Rubik's cube, and we want to sneak into the next party.

Wednesday, December 12, 2007

Congratulations, Brazilian Chapter President Gurgulino!

Senior Brazilian international official honored by Portugal

On December 11, Professor Heitor Gurgulino de Souza will receive, from the hands of Ambassador Francisco Seixas da Costa, at the Residence of the Embassy of Portugal in Brasília, the insignia of Commander of the Ordem do Infante Dom Henrique, with which he has been distinguished by the Portuguese government.

Professor Gurgulino de Souza, who has had a brilliant Brazilian career in the areas of Science, Technology, and Education, has also held international positions of great prominence, in particular as Under-Secretary-General of the United Nations and Rector of the United Nations University, in Tokyo.

This distinction is meant to show Portugal's recognition of the contribution made by Professor Heitor Gurgulino de Souza to the cause of peace and friendship among peoples and, most especially, of his work on behalf of multilateralism and of strengthening the role of the United Nations in the international order.

Thursday, December 6, 2007

Largest island in a lake on an island in a lake on an island

Talk about some recursion.
Since chunking mechanisms use a lot of recursion, perhaps we may want to start a class on them by visiting the largest island in a lake on an island in a lake on an island.
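
As a toy aside (the code and its phrase-builder are entirely my own illustration, not from the original source), the nesting itself makes a nice little recursive data structure:

```python
def describe(chain):
    """Recursively build the phrase from the innermost feature outward.
    Islands sit *in* lakes; lakes sit *on* islands."""
    head, rest = chain[0], chain[1:]
    phrase = ("an " if head == "island" else "a ") + head
    if not rest:
        return phrase
    preposition = "in" if head == "island" else "on"
    return f"{phrase} {preposition} " + describe(rest)

# Five levels deep, like Vulcan Point:
print(describe(["island", "lake", "island", "lake", "island"]))
# -> "an island in a lake on an island in a lake on an island"
```

The base case is a single feature; every recursive step wraps one more container around it, which is exactly the flavor of recursion chunking mechanisms rely on.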

THIS IS THE ORIGINAL SOURCE, and kudos to them!

(All I've done is mash it up; the credit is all theirs.)

Largest island


Largest lake


Largest lake on an island
Nettilling Lake on Baffin Island (CAN)


Largest island in a lake
Manitoulin Island in Lake Huron (CAN)


Largest island in a lake on an island
Pulau Samosir in Danau Toba on Sumatera (INA)


Largest lake on an island in a lake
Lake Manitou on Manitoulin Island in Lake Huron (CAN)


Largest lake on an island in a lake on an island
Crater Lake on Volcano Island in Lake Taal on Luzon (PHI)


Largest island in a lake on an island in a lake on an island
Vulcan Point in Crater Lake on Volcano Island in Lake Taal on Luzon (PHI)


Tuesday, December 4, 2007

Stumbling upon something new

We are turning some good cranks on that Rubik's cube.

From the start of the PhD course we had wanted to publish everything: the slides and slidecasts of the whole thing. But at some point these last weeks a real dilemma came up. What we were talking about, and doing, and seeing run on the screen, was new. Something that most likely has not been done before, and that, if the underlying philosophical premises are correct, might have quite an impact on both computer science and cognitive science.

My mind goes like this: what to do with it? Publish the classes, as the original plan called for? Publish it as a series of papers? Get a PhD candidate to work on it and see what's up? Write up a patent? If we're correct, then it could potentially have wide applicability.

I think we made an advance on what we've called autoprogramming before. So I'm in Jekyll-and-Hyde mode on this one.

And the thing is... I think there's more. I think that there's another important idea taking shape... something like "concept-oriented programming"... or maybe "encapsulating object encapsulation". Just to give a glimpse of the idea: in language and cognition we use analogy all the time, of course. But how can we say that "that lawyer is a vampire", that "if independent, Quebec will become a small boat in a big storm", or, something I said the other day, "I really hope that Dr. 'dude' Lisi is a new Einstein; we really need a new Einstein"?

In object-oriented programming, objects have state and interfaces. But with human concepts, we apply the interfaces and properties and relations that belong to one class to almost anything else. A Canadian province becomes a boat, a lawyer becomes a fantastic figure dreamed up in novels, someone becomes an "Einstein".

How can we design classes and objects that reflect this? Even with polymorphism, inheritance and all that OO-goodness, it seems far-fetched. But I think we're stumbling on the answer. And it is beautiful. This week I'm designing the blueprint & requirements, and I hope to have a proof-of-concept (pun intended) by next week.

There's a parallel here with what Garrett Lisi says about his work: either our model will be extremely simple and elegant, or it will fail spectacularly. Until either Jekyll or Hyde wins the fight, we can't say much for now.

In the meantime, feel free to check out the possible theory of everything in the universe below.

Thursday, November 29, 2007

Pleasure, maybe. Avoid pain, thanks!

Today I couldn't go to the class; I was giving a seminar on text mining and artificial intelligence at Fluminense Federal University (UFF). So here goes my contribution for today.

Doing cognitive science is a hard job because we just can't look inside ourselves clearly. We can't ask our brain objective questions; we only have access to very high-level instructions. Thus, in the last class, I refactored my intuition about the hedonic guide of our lives, i.e., that a being tries to maximize pleasure and minimize pain. I realized there is a possibility that this distribution is not symmetric. Ungrateful as it sounds, I think one may have much more pain than pleasure. And pain is a strong word, the far left extreme; discomfort or distress may be better words. Thus, one may feel much discomfort in daily life. It might be what is necessary to make us move on, to act, to do something.

Imagine someone exposed to a situation that causes this feeling. The situation sets the hedonic function a little below the neutral point, and a short-circuit alert fires to bring back balance; one only finds peace once the short-circuit is released. My point is that this is somewhat similar to pleasure, but it is not the same: a pleasure is a move that has the neutral point as its origin. I suppose we have a stick that slips down a bit every time, and this movement is what makes life.

Monday, November 19, 2007

On the verge of a breakthrough

I think we're on the verge of a breakthrough. We are about to solve the shape sorting problem, the most important scientific problem that nobody cares about.

I would like to take this opportunity to thank the Swedish Academy.

Images from qotile.

And here's a live specimen in its habitat:

Saturday, November 10, 2007

NUMBO To-do list

Classes dealing with

  • Activations
  • Nodes & links
  • Slipnet
  • (Bare) Chunks, withdraw and insert chunks and objects
  • Numbo-specific chunks
  • Codelets
  • Temperature
and... a project wrap-up and it's done. Getting there.

Life on the easy street

Things to finish STILL in 2007!

  • NUMBO implementation
  • NUMBO documentation (framework)
  • NUMBO documentation (NUMBO domain-specific)
  • The PhD Seminar
  • Paper on the Capyblanca Project
  • Jarbas' PhD Project
  • Carla's PhD Project
  • Anna's PhD Project
  • Jarbas' paper
  • Brum's psychological experiment
  • Simone's MSc Thesis
  • Anne's MSc Thesis
  • Taisa's MSc Thesis
  • Nicholas' Thesis is for January, way way into the far future!

This list does not include any Club of Rome activity whatsoever, as you might have noticed.

What a piece of cake!

I hope I haven't forgotten anything.

Numbo's slipnet: nodes

I'll be posting here NUMBO's real slipnet, directly from Daniel Defays's code. Please disregard my previous guesses!

Node Numbers


Sum nodes

Multiplication nodes

Other nodes

I guess I'll be delving deeper into the Lisp code, to see what's the best way to design an OO framework to handle it.

Thank god for Daniel Defays.

Wednesday, November 7, 2007

Compile and run. Nothing has changed? GOOD!

We are, today, taking small, but significant steps towards our framework.

The MOST crucial thing now is to proceed in a manner that gives us total confidence in what has been done. If there is any bad design decision, it will be rapidly found and corrected.

This is the most important thing in building such complex mechanisms. We have a long way to go, but let's take every single step on rock-solid ground. This is a framework, not a program. It will be used for a long time. It has to be rock solid.

We can only move on to new functionality when we feel confident to say, as members of the Cosa Nostra are fond of saying: "forget 'booout it!"

Finally, here's an important tip: Compile and run. Nothing has changed? GOOD!


Tuesday, November 6, 2007

Is Microsoft still the power?

The first war was, at the beginning, Microsoft vs. Apple over the graphical user interface (GUI); Apple struck first, in 1984. In 1985, Microsoft showed the world Microsoft Windows, a GUI for any personal computer, not only Macs, and overtook Apple.

The second war, against Netscape, was for the Internet. Netscape struck first, in 1994, with 85% market share. In 1995, Microsoft used its Windows share and "gave" Internet Explorer to all Internet users, flattening Netscape to 1%.

The third war was for instant messaging; ICQ struck first, in 1996. In 1999, Microsoft launched MSN Messenger, using its OS monopoly again.

The fourth war was, and is, for the search market. Google struck first, in 1998, with its PageRank technology. The Internet was bigger than anyone could imagine, and Google holds 53.6% of it, a virtual monopoly. This time, Google vs. Microsoft looks like David vs. Goliath, and Microsoft has no quick jabs. Time goes so fast that today, 10 years later, nothing has settled yet. Google has expanded into Gmail, YouTube, Orkut, Earth, etc. A new tech empire. Everybody shouts David, David, David...

Is Microsoft still the power?

Today I saw MSN Video embedded in MSN Messenger, and at a glance I saw Microsoft again. How subtle and weak an empire might be. Incredibly, just when things sound stable, I see Facebook overtaking the established social-network phenomenon; look behind it, and Microsoft is in the fire. The brand-new Windows Vista comes with Microsoft Search built in as the default for everything.

Where are Google's fast moves? Now I prefer to think about Adobe Systems. While nobody used to pay attention, its engine is on 99.1% of computers around the world; more than Windows, Google, or Firefox. The browser style is going down, and Adobe AIR is coming.

Monday, November 5, 2007

Refactoring a method to the strategy pattern

An example from O'Reilly:

Refactoring to Strategy

The getRecommended() method presents several problems. First, it's long—long enough that comments have to explain its various parts. Short methods are easy to understand, seldom need explanation, and are usually preferable to long methods. In addition, the getRecommended() method chooses a strategy and then executes it; these are two different and separable functions. You can clean up this code by applying STRATEGY. To do so, you need to

  • Create an interface that defines the strategic operation

  • Implement the interface with classes that represent each strategy

  • Refactor the code to select and to use an instance of the right strategic class

Three steps involved.

Why is this important? Because, in turning this Rubik's cube, we will make mistakes. We do not know many parameters exactly, and many design decisions will have to be taken without them. Educated guesses inform all cognitive models on the market. So if we know that we will make mistakes, we should be able to undo those mistakes easily. So here is a design principle for our architecture:

Whenever a design decision is based on an educated guess, the implementation should not be done through class methods, but with the strategy pattern.
I don't know to what extent we will be able to follow this. That is precisely why we need to start getting used to the refactoring involved and master these three small steps.

We want to create a strategy pattern to deal with potentially different activation curves and potentially different decay curves. We are not sure which is the best (i.e., psychologically plausible) curve, hence we want this to be easy to change. The easier to change, the higher the productivity. The higher the productivity, the more mistakes we can make without major backtracking. And mistakes we will make, lots of mistakes. On that you can be sure.

But the strategy pattern also gives us something enormously valuable: the ability to change code on the fly. It gives us the ability, for example, to change behavior should a global signal be received; we know from experience that our memory of intense moments registers strongly, while expected, boring moments do not register as well. I used to believe that in the mind there was no space for global variables, or global events. I was wrong. A spike in adrenaline makes you alert, opens your eyes, raises your heart rate, raises your blood pressure, feeds your muscles with blood, preparing them for action, and halts numerous "background" processes, such as digestion. There are global signals, and behavior changes on the fly. (BTW, where in NUPIC does this happen?)

The strategy pattern gives us this ability, and this is something we do want. So we want to be masters at the craft of refactoring a class method into a strategy pattern. So here we go. Here's our current class definition for activations:

Tactivation = class
  current_state, level, increment: real;
  Signals_Received: real;
  procedure Recompute_Level;

  constructor Create;
  procedure increase(step: real);
  function Get_Level: real;
  function Get_CurrentState: real;
  function Get_Increment: real;
  procedure Reset_Increment;
  procedure DeployChange;
  procedure decay;
end;

Step 1. Creating an interface that defines the operation

(Java syntax is easier on the eyes than Delphi's, so translating this code should be a piece of cake.)

Here we go:

IRecompute_Activation = interface
  function Recompute_Activation(Current_state: Real): Real;
end;

Compile and run. Nothing should have changed yet in functionality. No errors should appear.

Step 2. Implement the interface with classes that represent each strategy.

Let's have, for starters, a sigmoid curve, and a linear one (i.e., level=current_state).

TRecompute_Activation_Sigmoid = class(TInterfacedObject, IRecompute_Activation)
  function Recompute_Activation(Current_state: Real): Real;
end;

TRecompute_Activation_Linear = class(TInterfacedObject, IRecompute_Activation)
  function Recompute_Activation(Current_state: Real): Real;
end;

Copy/Paste the previous method code (in the sigmoid case), without deleting the original.

function TRecompute_Activation_Sigmoid.Recompute_Activation(Current_State: Real): Real;
var
  Pyramid, Sum, t: Real;
  Counter: Integer;
begin
  Sum := 0;
  for Counter := 0 to Floor(Current_State * max_steps) do
  begin
    t := Counter / max_steps;
    if t <= 0.5 then Pyramid := t else Pyramid := 1 - t;
    Sum := (4 * (1 / max_steps) * Pyramid) + Sum;
  end;
  Result := Sum;
end;

function TRecompute_Activation_Linear.Recompute_Activation(Current_State: Real): Real;
begin
  Result := Current_State;
end;

Now compile and run. Nothing changed in functionality. Good.

Step 3. Refactor the code to select and to use an instance of the right strategic class.

NOW we're changing the original class. Five quick steps are involved here.

(3.1) First we need to include the strategy pattern object, named activation_strategy, then compile and run. No change. Good.

(3.2) Now we include methods set_activation_sigmoid and set_activation_linear; hence we include in the Tactivation class:

Activation_Strategy: IRecompute_Activation;
procedure set_activation_sigmoid;
procedure set_activation_linear;

...and the respective methods on the Activation class which call the constructor of the desired strategy:

procedure Tactivation.set_activation_sigmoid;
begin
  Activation_Strategy := TRecompute_Activation_Sigmoid.Create;
end;

procedure Tactivation.set_activation_linear;
begin
  Activation_Strategy := TRecompute_Activation_Linear.Create;
end;

Compile and run. Nothing changed in functionality. Good.

(3.3) Kill the previous code by commenting out the method and its declaration. Compile... and it doesn't run anymore! Great, because the compiler will point out all the previous calls made to the method, so just replace them with your strategy. In our example, we replace calls to Recompute_Level with Level := Activation_Strategy.Recompute_Activation(Current_State).

Compile and run, and functionality should be restored!

(3.4) Now, test whether the whole strategy is working by changing the pattern at runtime. In my case this means including the following piece of code at the end of the DeployChange method: if Level > 0.5 then set_activation_linear;

Compile and run. Now marvel at the runtime behavioral change.

(3.5) Finally, clean the code. Delete the (commented out) method calls and method implementation (and declaration).

Compile and run. Works like a charm. Congratulate yourself now.
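
For readers who don't speak Delphi, here's a minimal Python sketch of the same end state (the class names, the `MAX_STEPS` constant, and the default strategy are my own approximations of the code above, not a faithful port):

```python
import math

MAX_STEPS = 100  # assumed granularity; stands in for the Delphi max_steps constant

class SigmoidActivation:
    """Strategy 1: integrate a pyramid function, giving an S-shaped curve
    (a rough analogue of TRecompute_Activation_Sigmoid)."""
    def recompute(self, current_state: float) -> float:
        total = 0.0
        # Delphi's "for 0 to N" is inclusive, hence the +1 here.
        for counter in range(math.floor(current_state * MAX_STEPS) + 1):
            t = counter / MAX_STEPS
            pyramid = t if t <= 0.5 else 1 - t
            total += 4 * (1 / MAX_STEPS) * pyramid
        return total

class LinearActivation:
    """Strategy 2: the level simply equals the current state."""
    def recompute(self, current_state: float) -> float:
        return current_state

class Activation:
    """Holds a swappable strategy object instead of a hard-coded method."""
    def __init__(self) -> None:
        self.current_state = 0.0
        self.strategy = SigmoidActivation()  # assumed default

    def set_activation_linear(self) -> None:
        self.strategy = LinearActivation()

    def deploy_change(self) -> float:
        level = self.strategy.recompute(self.current_state)
        if level > 0.5:  # the runtime behavioral switch of step 3.4
            self.set_activation_linear()
        return level
```

The point survives translation: swapping `self.strategy` at runtime changes behavior without touching `Activation` itself.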

Here's to the strategy pattern. This one is truly important; everybody should master this technique. It MUST be trivial to do, anytime. Unless you are so overconfident as to believe that you'll never get paralyzed by a bad design decision taken ages ago.

Don't be.

How could we ever beat Numenta?

Let me start with an...

anecdote used by Jerry Jordan, president of the Federal Reserve Bank of Cleveland, in an article in the Cato Journal last summer. Jordan described a U.S. businessman visiting China a few years ago. The American came upon a team of 100 workers building a dam with shovels. Shovels.

He commented to a local official that, with an earth-moving machine, a single worker could build the dam in an afternoon. The official replied,

"Yes, but think of all the unemployment that would create."
"Oh," said the businessman, "I thought you were building a dam. If it's jobs you want to create, then take away their shovels and give them spoons."

End of quote. Numenta is doing great work, but perhaps they have made some choices that put them in quite difficult territory. I think that their strict reliance on biological plausibility and the low level at which they work are deeply problematic.

Not that they are wrong.

Of course a sound theory of the mind can be reduced to the level of neurons. Nothing wrong with that idea. But here are two questions: why stop there? And how will you deal with the bandwidth problem?

Why stop at the neuron level? Why not go all the way to atoms? Or to quantum physics, perhaps? It seems aesthetically pleasing to model the mind using neurons: the final encounter between biology, psychology, and computation. But there is a more important question here: is this the most productive way to work?

I don't think so. I think it is very unproductive.

As economists will tell you, there is another way to say that "productivity is low": you are limited in what you can achieve. I think Hawkins and Numenta are limited in what they can achieve, as they have made some design choices which might paint them into a corner for long stretches of time.

In building something so complex, your first questions should be:
  • what are the project's self-correcting mechanisms?
  • how long does it take to self-correct?

Without facing these issues head-on, right from the start, and finding the most productive way to build this thing, there is no money or intelligence that will pull it off alone.

Saturday, October 27, 2007

Live fast, love hard, die young

Some important links today.

UPDATE: Boy survives two-hour flight to Moscow hanging onto plane wing (digg it here, story here, here, and here)

On our science section we have a great piece by The Economist. How can women still complain?

The Capyblanca Prize for industrial design goes to 3M's specially designed self-adhesive hooks: "Sticky bear is REALLY HAPPY to see you".

On our beauty department we have the solution for you big-nosed people out there: Be a Cleopatra Nose!

Paul Graham now has a feed. Check it out.

Finally, for those who only want the truth and the real truth and nothing but the truth, regularly check out the news in the official North Kolea Blog.

Victoly to North Kolea!!!!!!

Thursday, October 25, 2007

Cognitive scientists as policy advisors

If there is a social science that sees advising on matters of policy as its birthright, it's Economics. From Cold War strategy to inflation fighting to abortion, economists have been advising policy for over a century.

They should better watch their backs now, for the cognitive scientists are coming.

In a way, economists are already cognitive scientists. They study people's (or animals'--but these are called ecologists) behavior in the aggregate. Two fundamental pillars of classical economics are the ideas of (i) incentive systems and (ii) utility.

Incentives assume that people will tend to do what they get incentives for, and will tend to inhibit their behavior if a negative incentive is there. Sticks and carrots, sticks and carrots. This insight, of course, is the behaviorist insight: treat the mind as a stimulus-response black box. Incentive systems work most of the time. Sometimes they backfire, as in "perverse incentives" which actually encourage people to do exactly the opposite of the intended policy.

I have come to believe that the mind has three distinct feedback systems, and incentives apply to just one of them, a hedonic system. Give a shock to a mouse, and the poor creature will learn not to do whatever it was doing. Stimulate its brain's pleasure center, and, like a heroin addict, a mouse will ignore food and receptive females, stimulating itself all the way to its death. Inhibitions act as if they were sending a global halt message to the brain's numerous activities, while hedonically good incentives bring about computer-loopy types of behavior.

This is a good insight, of course, but it is not sufficient to explain (and predict) behavior. More is needed. We need to look inside the black box.

Utility is related, but different. It concerns the different preferences that different people may have. I like nurses; maybe others don't. Different people, different preferences. Classical economics takes this into account, and it probably stems from the same cognitive mechanisms which have built different memories through experience. Genetic and anatomical factors may also play a large part in determining preferences.

Another thing involved in utility is that it does not grow linearly. A 30-minute massage is a better experience than a 30-second one, but a 30-day one would probably be worse than either. Now, if the hedonic feedback system helps determine your preferences, a different system, I think, acts here: an attentional feedback system. This one does not directly drive behavior; it decides what gets stored into memory. The first minute of the massage gets stored; the fifth hour is just plain boring (people in chronic pain notwithstanding, but even they should find diminishing returns).
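
As a toy illustration of that non-linearity (the inverted-U functional form and its one-hour peak are my own assumptions, not a model from the literature):

```python
import math

def massage_utility(minutes: float) -> float:
    """Toy inverted-U utility: rises quickly, peaks (here, arbitrarily, at
    one hour), then decays toward zero as the experience turns boring."""
    return minutes * math.exp(-minutes / 60.0)

# A 30-minute massage beats a 30-second one...
assert massage_utility(30) > massage_utility(0.5)
# ...but a 30-day massage is worse than either:
assert massage_utility(30 * 24 * 60) < massage_utility(0.5)
```

Any concave-then-declining curve would make the same point; the linear model is only a first approximation that breaks down at both extremes.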

Now, here's some cognitive science creeping under the economists' stage.

Herbert Simon, for one, pointed out that we just can't figure things out--or at least not as infinitely deeply as the rational-actor model suggests. The space of possibility is just too monstrously huge. Here is one of the goals of the Human Intuition Project: to study how intuition guides the choice-generating process, and the repercussions of this for economics. Intuition destroys the vastness of the space of possibility, presenting a tractable course of thought and action. (Perhaps a misguided one, of course, but if it's here, there are evolutionary reasons.)

Daniel Kahneman and Amos Tversky truly turbocharged Simon's work--eventually leading to the school of behavioral economics. They showed that framing affects decision-making, that the utility curve is susceptible to language, and that preferences reverse even within a single individual in a single instant of time. But this insight is something that economics should have embedded in its models since Thomas Schelling, another psychologically inclined economist. I love how he brings up our Jekyll-and-Hyde nature, and the deep, deep questions involved. Choice and Consequence, Micromotives and Macrobehavior, and The Strategy of Conflict are beautiful cognitive science, an enmeshing of philosophy, psychology, economics, and mathematics. (A Schelling point, for example, should be something studied by cognitive psychologists--though I've never seen a single textbook mention the term.)

Language and framing now seem to be on the agenda of cognitive scientists as policy advisors. George Lakoff, looking at language, tells how the Bush team used language to present policies that become impossible to attack. My favorite example is the term "tax relief". Only a monster can be against any kind of relief. Watch your language, sir. Beware if you want to argue against this policy.

Even Steve Pinker seems to be joining the boat. In his recent book, he shows how language indirection distorts, for example, game-theoretical models. Nobody, for instance, bribes a cop in direct language; corruption has an etiquette. Here in Brazil you can buy a cop with a "cervejinha" (i.e., a small beer). In China or Greece it would be called an "envelope", in Iraq a "good coffee", in Mexico a "refresco", in North Africa "un petit cadeau". Everyone knows the meaning of the message, but nobody uses the information-efficient terms: "Can I bribe you, officer?"

The euphemisms and language indirection introduce plausible deniability, thus distorting the game-theoretical scenario; as Pinker points out, diplomats have long known them to be "not a bug, but a feature" of language. Teenage kids rarely know that the fastest way to a girl's, ahem, "heart", is never the direct route.

There are some very important insights I feel should find their way, eventually, into economic models:

  • The distinction between hedonic feedback systems and attentional feedback systems;
  • Hofstadter's fluid concepts model of cognition;
  • The choice generating process studied by Gary Klein, Gigerenzer, Barry Schwartz, and many others.
I had plans that Bia, a mathematician who joined my research group for the Ph.D., could make a great contribution here. But I guess Ἄτροπος had other plans.

The ideas live on, though.

In some years, we're going to start seeing undergraduate courses in cognitive science flourish. MIT has one. But where will the thousands of students go after graduation? From MIT's website, it seems that most careers would need a PhD:

After Graduation

The majority of people who major in Brain and Cognitive Sciences attend graduate school, in fields such as medicine, neuroscience, psychology, cognitive science, or computer science. Some attend law or business school. With or without advanced degrees, majors work in a diverse array of careers, as researchers and professors, in telecommunications, financial advising, human resources and human relations, counseling, teaching K through 12, ergonomics, environmental design, robotics, AI.

I think that's not enough. Most undergraduates want a job after school, and undergraduate-level cognitive scientists should play great roles as policy designers and advisers and, of course, in entrepreneurship.

Monday, October 22, 2007

Mirror Perception

One of the trickiest experiences for a human to learn. This video gives some hints for understanding that phenomenon.

Saturday, October 20, 2007

Saturday, October 13, 2007

Essay on the fetish with nurses

The other day I was mentioning a case in which a nurse responds incredibly rapidly to a furiously serious situation in a neonatal intensive care unit. Then this guy comes up with this:

"You really have a fetish with nurses, huh?"

To which I replied: "Only when their name isn't OLGA."

Why study these cases in a business school? What is the relevance of that? Why should a decision-making course actually start with the case of a radar operator, and also look at, for instance, chess players or firefighters? (No fetish here, thanks for asking--but remember: not all firefighters are equal.)

What can business students get from studying this?

Superficially, people such as nurses, doctors, firefighters, radar operators, chess players, et cetera, do tasks which are extremely distinct from what a manager does. But look closer, and you'll start to see deep, deep similarities in their cognitive processes.

Most white-collar work is, of course, like this: reading email, downloading attachments and working on them and sending them back, deleting those cheap v!agr@ emails, talking to people over the phone, not falling asleep in meetings and trying to sound intelligent, and making "exciting, enthusiastic", presentations.

What ties managers and chess players together is that their job consists, mostly, of separating what's important in a situation from what's irrelevant.

Imagine the immense amount of paper and phone calls trying to reach, for instance, Larry Ellison this coming week. It will be vast. Most of it will be filtered by secretaries and managers with that specific job in mind. But he'll still have to deal personally with a large load of "incoming" information. Two documents sit on his desk, waiting for a signature. What's important, and what's not? How to separate what's important from what's irrelevant? It's extremely tricky, and there's not a single isolated piece of information that's up to the task.

Sometimes, a single comma can cost you a million Canadian dollars.

I believe something like 70% of my own email is marked "urgent". Hardly any of it is, of course. So a "high-priority" or "urgent" mark is not a good source of information. Neither is the sender: it could be someone extremely important, and yet the message could still be rather unimportant. There's not a single isolated piece of information that will tell us whether something is relevant or not.

It's in the whole scenario. Importance is spread over the whole chessboard, the whole health history of the baby turning blue, the whole situation around a strange fire that's just too hot to handle (though it looks, to the inexperienced, like it should be easy to handle).

It's all in the struggle between one's expectations and one's perception. If you've acquired precise expectations about a situation, then you'll know what to expect. This is one of Jeff Hawkins's crucial points. Did you know that the brain is "saturated with feedback connections"? In some parts of the cortex, there seems to be 10 times more information going from the brain to the senses (e.g., from your brain to your eyes) than coming from the senses to the brain. Why is there such high bandwidth going in the "wrong" direction? The answer seems to be that the brain is telling the senses what to expect: "only report back to me if something is different from what I'm telling you". That's what Hawkins calls the memory-prediction framework, and it's close in philosophy to what the folks over at Overcoming Bias call cached thoughts.
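The "only report back surprises" idea can be sketched in a few lines (an illustrative toy of my own, not Hawkins's actual algorithm):

```python
# Toy sketch of the memory-prediction idea (illustrative only, not
# Hawkins's algorithm): the brain sends expectations down to the senses,
# and the senses report back only what departs from those expectations.

def surprises(expected, actual):
    """Return (position, value) pairs where reality defies the prediction."""
    return [(i, a) for i, (e, a) in enumerate(zip(expected, actual)) if e != a]

expected_scene = ["desk", "lamp", "chair", "window"]
actual_scene   = ["desk", "lamp", "cat",   "window"]

# Almost everything matched the prediction, so almost nothing flows upward;
# attention is grabbed only by the single mismatch.
print(surprises(expected_scene, actual_scene))  # → [(2, 'cat')]
```

Most of the bandwidth goes downward, carrying the prediction; only the mismatch travels up.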

This can only be done through experience, of course. So an international master reconstructs a chess position after a mere 5-second presentation, and we can't do it.

When something departs from expectations, your attention is rapidly grabbed because of this high-bandwidth information the brain is sending your eyes. If you have experience, you know what to expect. Two good questions to ask every time you're studying decision-making or intuition or judgment are: how would an inexperienced person deal with this situation? And, of course, the classic: how could a machine do this? What are the information-processing mechanisms going on here?

How do we cache those thoughts? What are the precise cognitive operations involved? FARG theory has, in my opinion, solved the problem of how we classify things into categories in a satisfactory manner. So now the issue is: how do these categories and concepts form in the first place? Harry Foundalis has the best thesis on the subject. If this problem is nailed in the coming years, then we'll be in rich, rich, unexplored territory.

And the nurses? Aren't they incredible? These creatures exist for the sole purpose of making you feel better.

¡Dios mío!
Isn't that awesome?

Saturday, October 6, 2007

A modest (billion-dollar) proposal

Imagine the following scenario. A secretive meeting, years ago, when Apple's Steve Jobs, the benevolent dictator, put in place a strategy to get into the music business. It included not only a gadget, but also an online store, iTunes. I have no idea how that meeting went, but one thing is for sure: many people afterwards must have been back-stabbing Jobs, and muttering "the music business? We're going to sell music? This guy has totally lost it."

Fact of the matter was, technology had forever changed the economics of the music business, and Jobs could see it.

Having said that, I'd like to make a modest, billion-dollar proposal to the likes of Adobe, Yahoo, Apple, IBM, Microsoft, and whoever else might be up to the task.

Cui Bono?

Think about science publishing. I publish papers for a living. My first paper came out in Biological Cybernetics, a journal which cost, in 1998, over US$2000 for a one-year subscription. I live scared to death of Profa. Deborah, who reviews my scientific output. And there are others like me in this world. Oh yes, many others.

The economics of science publishing is completely crazy for this day and age. Authors put enormous effort into bringing their work to light; editors and journal and conference referees also put in enormous effort. All of that is unpaid, of course (or at best indirectly paid, in the hopes of tenure and/or prestige). But then our masterpieces go to a journal, which obliges us to transfer copyright to the likes of Elsevier, or Springer, or someone else. Then some money starts to show up! According to Wikipedia, Springer had sales exceeding €900 million in 2006, while Elsevier upped the ante with a pre-tax profit (per the Reed annual report) of a staggering €1 billion (on €7.9 billion turnover). But those who produce the scientific results, those who bring the content, and those who do the fact-checking (referees and editors) all go unpaid. The money goes to those who typeset the work, store it on a server, print it out, and mail it to libraries worldwide. And let's not forget those who actually pay for the research: the public, as most research is government-financed. In the words of Michael Geist, a law professor:

Cancer patients seeking information on new treatments or parents searching for the latest on childhood development issues were often denied access to the research they indirectly fund through their taxes
How did we get here? A better question is: how could it have been otherwise? In the last few decades, how could a different industrial organization have appeared? Cui bono?

Lowly (and busy) professors and universities were obviously not up to the risky and costly task of printing and mailing thousands of journals worldwide, every month. A few societies emerged and, mostly funded by their memberships, were up to the task. So, in time, the business of science publishing emerged and eventually consolidated in the hands of a few players. And these few players could focus on typesetting, printing, and mailing much better than the equation-loving professors or the prestige-and-money-seeking universities.

The other day I tried to download my own paper published in the journal Artificial Intelligence, and I was asked to pay US$30.00 for it. That's the price of a book, and I was the author of the thing in the first place!

Now, if you ask me, technology has forever changed the economics of the scientific publishing business, and it's high time for someone like Jobs to step forward.

Adobe Buzzword is especially suited to do this. Most scientific publishers (Elsevier, Springer) and societies (IEEE, ACM, APA, APS, INFORMS) have just one or two typesetting styles for papers. I imagine a version of Buzzword which carries the particular typesetting style(s) of the final published document, so that researchers would prepare manuscripts already ready for publication (there are glitches today, of course, like high-quality images and tables and equations--but hey, we're talking about Adobe here!). A submit button would send the paper out for evaluation, either to a journal or a conference. Referees could make comments and annotations on the electronic manuscript itself, or even suggest minor rewrites of a part here and there. The process would be much smoother than even the most modern of online submission systems. And, since Adobe has Flash, they're especially positioned to bring up future papers with movies, sounds, screencasts, and whole simulations embedded. Wouldn't that be rich? Doesn't that fit beautifully with what's stated on their page?

Adobe revolutionizes how the world engages with ideas and information.

But Buzzword is just my favorite option (because it enables beautiful typesetting, is backed by a large, credible player, works on any platform, and enables worldwide collaboration between authors, editors, and referees). Other options could be desktop processors (MS Word, Pages, OpenOffice, etc.). There would be a productivity gain in using something like Buzzword, but using desktop processors wouldn't affect the overall idea.

Now, why would the people at Adobe, Yahoo, SUN, IBM, Microsoft, Google, or elsewhere actually want to do a thing like that?

There are two reasons. The first one is goodwill, the second one is money.


I recently had a paper outright rejected by the IBM Systems Journal. In retrospect, I now see that it was a very bad call to submit there. I had mentioned that choice to the editor of a very prestigious scientific journal, and he responded by saying: "They're going to hate it. They haven't been in the business of publishing great original science for a long time now. It's just a marketing thing; they're in the business of trying to impress customers." I responded that I thought they'd be open-minded; that the journal had had some great contributions in the past and I thought it was just great. I was, of course, wrong. They didn't even look at the thing; they didn't even bother to send back a message. After a quick check, I felt enormously stupid: all papers, or maybe not all but something way above 90%, come from IBM authors. The IBM Systems Journal, it seems to me, is now a branch of IBM's marketing department. And while it may impress less sophisticated customers, it's definitely a huge loss for IBM.

The Systems Journal (and their R&D journal) used to be a fountain of goodwill for IBM. Scientists took pride in publishing there, and hordes of researchers (not customers) browsed it and studied it carefully. It was a fountain of goodwill--with a direct route to IBM's bottom line: it attracted the best scientists to IBM. Now that it's in the hands of marketing, you can hardly find any serious scientist considering it as a potential outlet. If I were at IBM, I'd be fighting to change things around. But I'm not there, I can speak the truth as I see it, and I can just submit somewhere else. The Bell Labs Technical Journal also seems to be meeting the same "marketing department" fate. Don't expect to see Nobel Prizes coming from these journals any time soon.

When these journals didn't belong to marketing, they brought, at least to this observer, a huge amount of goodwill and good publicity to their respective companies. The HR departments must have loved choosing among the best PhDs dying to get into IBM. Sad to say, I doubt that the best PhDs are begging to work at these companies anymore.

Yet, IBM could change things around. As could Adobe, SUN, Apple, Microsoft, Google, Yahoo, and many others. What I feel they should do is establish a platform for online paper submission, review, and publication. This platform should be made openly available for all scientific societies, for free. From the prestigious journal "Cognitive Science" to the Asia-Pacific Medical Education Conference, this platform should be free (to societies, journals, and conferences) and the papers published online should be freely accessible to all, no login, no paywall, nothing in the way. Copyright should remain in the hands of authors. Gradually, one after another, journals and conferences would jump ship, as the platform gained credibility and respectability.

Now here's the kicker. It's not only about goodwill. There's money to be made.


One crucial point is for the platform to be freely accessible to all. But you can do that and still block the googlebot, the yahoobot, and all other bots but your own. Let's say, for instance, that Microsoft does something of the sort. In a few years' time, not only does it get the goodwill of graduate students who are studying papers published by it (as opposed to hey-sucker-pay-thirty-bucks-for-your-own-paper Elsevier), but the way to search for such information would be only through that website. As we all know, advertising is moving online: according to a recent study, last year saw "$24 billion spent on internet advertising and $450 billion spent on all advertising". Soon we'll reach US$100 billion/year in advertising on the web. And imagine having a privileged position in front of the eyeballs of graduate-educated people, from medicine to science to economics to business to engineering to history.

I hope someone will pull something like this off. Maybe for the goodwill. Or maybe for the money.

Many companies could pull it off, but some seem especially suited to the task. My favorite would be Adobe--with Buzzword and AIR and Flash and PDFs, that's definitely my choice. Google might want to do it just to preempt some other company from blocking the googlebot's access to valuable scientific research. Microsoft, the Dracula of the day, certainly needs the goodwill, and it could help it hang on to the MS Word lock-in. Maybe Amazon would find this interesting--it fits nicely with their web storage and search dreams. Yahoo would have the same reasons as Google.

I don't see Apple doing it. I think it could actually hurt their market value, as investors might think they were over-stretching, ever expanding into new markets.

I don't see IBM or SUN doing it either; in fact, if anyone in a board meeting ever proposed this, I can only see the exact same back-stabbing that must have gone on, years ago, at Apple: "Science publishing? This guy has totally lost it. This is IBM, and that's not the business we're in." They're too busy handling their own internal office politics: who's getting promotions and pay packages. Innovation is hardly coming from there (though both have been embracing open source to a certain degree).

One thing is sure. The open-access movement is gaining momentum every day. It's time to sell that Elsevier stock.

Just a final note. If any player is willing to do this, use a .org domain name. Don't name it "Microsoft Science"; that won't work with intelligent, independent scientists. Name it "Open Science", "World of Science", anything... but please don't try to push your brand too far. Let it grow slowly.

And just in case someone wants to pull this off, and is actually wondering... I'm right here.

News from the Spanish front

Maybe I should mention something about the Club of Rome meeting last week; some positive things happened for our growing Brazilian chapter. The first of those was due to Claudia's immense efforts: we now have a beautiful copy of Limits to Growth: The 30-Year Update in Portuguese. We'll be working on that launch soon.

I had a little setback, which I plan to write about later on. But I learned a lesson from the Samurai: good shoes, a good bed, and a great job. More soon.

Another thing I'm glad about is the deal with my good friends Rolando and Sebastian, who brought one of the first production models of the very cool US$100 laptop, to execute Digital World 2008 in Brazil. Also in the picture are Raoul Weiler (Belgium) and Yolanda Rueda (Spain). I'm not sure I should publish much additional information here, but some of our partners are the World Wide Web Consortium, the "Comitê para Democratização da Informática", and of course NETMATRIX. We hope to bring Prof. Negroponte next year.

We finally had a chance for a meeting of the Brazilian Chapter, at a dinner. Here are (clockwise from center-left) Profa. Eda Coutinho, Prof. Heitor Gurgulino (now a vice-president of The Club of Rome), Claudia Santiago, me, Mrs. Lilian (Prof. Gurgulino's wife), and Prof. José Aristodemo Pinnoti.

Oh, and Don Juan Carlos I, The King of Spain, such a nice fellow.

Thursday, October 4, 2007

Breaking into categories: a way of consuming the world

The essence of human cognition is to divide the world into categories, so that one can handle parts of it. These parts can then be combined to build new parts. This is the essence of manipulating a language, where language is a form of knowledge representation.

Let's look at an example concerning human biological senses. It seems to me that there are seven of them.

1. Sight
2. Hearing
3. Taste
4. Smell
5. Touch
6. Kinetics
7. Temperature

But Kinetics may be seen as a kind of internal touch, and Temperature as a kind of micro-touch. Well, now we have only the (classic) five senses. But hearing might also be a physical micro-touch. And smell has something in common with taste: some kind of chemical interaction. Finally, in the limit, sight might be a collision between light packets and the retina, making it a kind of touch as well.

Categorization is, in this way, our choice to put some things together and not others. We choose a similarity criterion and go. That choice shows our way of seeing life; it directs our perception and the way we consume the world.

Wednesday, October 3, 2007

Listen to the Samurai!

Really, I really mean it. Listen to the Samurai!


And remember: boys don't cry.


I was trained as an operations researcher, both in my PhD with Horacio Yanasse (PhD, MIT OR center) and in my MSc with José Ricardo de Almeida Torreão (PhD, Brown Univ Physics).

An operations researcher is a decision scientist: a mixture of an economist with an engineer, or a mixture of a computer scientist with a business administrator. The basic idea is to find a business problem, build a mathematical model of it, then solve it, as in obtaining the lowest-cost solution, or the highest-profit one. A model usually looks like this (a rather simple one):
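For illustration, here is a generic, textbook-style minimum-cost formulation of my own (a sketch standing in for the typical model):

```latex
\begin{aligned}
\min_{x}\quad & \sum_{j=1}^{n} c_j x_j && \text{(total cost)}\\
\text{s.t.}\quad & \sum_{j=1}^{n} a_{ij} x_j \ge b_i, && i = 1,\dots,m \quad \text{(business requirements)}\\
& x_j \ge 0, && j = 1,\dots,n
\end{aligned}
```

The $x_j$ are the decisions to be made, the $c_j$ their unit costs, and the constraints encode whatever the business demands.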

During all those years, of course, I made a great many friends working in operations research.

Can you imagine what operations researchers talk about when they're not doing OR? When they're having dinner or a cup of coffee?

It usually goes like this:

"God, I still don't get that."


"This thing, man, it's so depressing."


"You know, the fact that industry practically ignores what we do. We keep on here doing amazing work which could save millions, perhaps billions of dollars, and we're practically ignored by industry. It's so hard to see a successful application in the real world. Why? How can this be? Isn't it unbelievable?"

The conclusion we always reached was that it was the fault of "the others". Businesspeople are just stupid. They can't grasp this. Or maybe that classic: "They'll spend 10 million in advertising to make 11 million, but they won't spend 1 million to save 10 million. Stupid, stupid people".

At first I thought it was basically a Brazilian issue. The Brazilian OR community is strong; there are truly world-class people in it. But OR is hardly applied to industry around our jungles.

But then...

It's a worldwide phenomenon. Americans, Japanese, and Europeans share the same complaints.

So after many years I have come to a different conclusion. It's not that businesspeople are stupid. In fact, quite the contrary (Hopefully none of my friends still in the field will read this--but it's true).

OR isn't applied because of the nature of the work.

An OR model can indeed save billions of dollars--as some industries, such as airlines, have found out. But the problem lies in the static nature of models versus the dynamic nature of business. A static model doesn't reflect what the real world is like.

Let's say you've spent some years and developed a really groundbreaking model to solve, for example, fleet assignment. Airlines have numerous types of planes, each with particular carrying capacities, fuel consumption, flight range, and maintenance restrictions. How do you assign each one of your aircraft to each one of your flight legs while minimizing costs? That's a mathematical problem with a huge number of possibilities, an NP-Hard problem, which demands enormous computational effort and can only be solved to optimality if the dataset is small.
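To see the combinatorics concretely, here is a toy brute-force version with made-up numbers (hypothetical costs; real systems use integer programming, not enumeration): with n aircraft and n legs, there are n! assignments to check.

```python
# Toy fleet-assignment sketch (hypothetical data): try every assignment of
# aircraft to flight legs and keep the cheapest feasible one. The point is
# the combinatorics: n aircraft on n legs means n! candidate assignments,
# which is why real instances need sophisticated (and brittle) math models.
from itertools import permutations

# cost[aircraft][leg]: operating cost, or None where the pairing is
# infeasible (range too short, maintenance window, etc.)
cost = {
    "737": {"GRU-JFK": None, "GRU-GIG": 3,  "GIG-MIA": None},
    "767": {"GRU-JFK": 90,   "GRU-GIG": 7,  "GIG-MIA": 60},
    "777": {"GRU-JFK": 80,   "GRU-GIG": 12, "GIG-MIA": 70},
}
legs = ["GRU-JFK", "GRU-GIG", "GIG-MIA"]

def cheapest_assignment(cost, legs):
    best, best_total = None, float("inf")
    for order in permutations(cost):            # n! candidate assignments
        pairs = list(zip(order, legs))
        if any(cost[a][l] is None for a, l in pairs):
            continue                            # skip infeasible pairings
        total = sum(cost[a][l] for a, l in pairs)
        if total < best_total:
            best, best_total = pairs, total
    return best, best_total

print(cheapest_assignment(cost, legs))
```

And notice how brittle this is: add one leg, one aircraft type, or one new maintenance rule, and the data (and often the whole model) must be rebuilt.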

After you have a working system, the problem becomes clear. If and when the rules of the game change, your math model no longer reflects the new reality. It either has to have more restrictions added, or, in the most usual of cases, it has to be rebuilt from scratch, with whole new dynamics. Models are cast in stone, and business life shifts almost as rapidly as a politician's reputation. Airlines have been able to use models, as have other industries, but in real life the music is always changing, and models can't dance according to the tune.

I wrote about machine translation as an avenue for computational cognitive scientists to make an impact in technology. Here's another one.

For years I've called it "autoprogramming", and it is, I guess, a long-lost dream of computer scientists. Imagine a model which is able to self-destruct automatically when the context has changed; a model which is able to construct itself according to the new tune of the moment. This requires an immense amount of perception, learning from feedback, flexible adaptation, a high-level, abstract view of what's going on, and other stuff which shows up, for example, in the Copycat project, but is far, far away from current OR/management science.

This type of self-reorganizing model should, in principle, exhibit a whole spectrum of cognitive abilities. It should understand what's going on. As of today, it is pure science fiction. But it can be done, especially if one starts from restricted domains which can change only within small boundaries.

There's a lot of research going on to make solution algorithms more flexible and adaptable, on meta-heuristics and on meta-meta-heuristics; however, it's one thing to have flexible solution methods, and another thing entirely to have a flexible diagnosis/model/solution system. The fact that the models and problems change practically weekly makes it extremely unlikely that industry will ever adopt them in a truly large-scale manner.

This is largely unexplored territory, and cognitive technologies are especially suited to explore it. If a nurse can go through the diagnosis/model/solution cycle in the furiously fast-changing scenario of a baby turning blue, then we know that it's possible, in information-processing terms, to do it. For the time being, "autoprogramming" is used in the ridiculously simple task of re-programming an RF tuning device after a power failure.

Meanwhile, the real thing I'm daydreaming here remains the stuff of science fiction.

Monday, October 1, 2007

Cognitive scientists: the next wave of entrepreneurs

There is today an immense flux of innovation on the web. Entrepreneurs are finding untold riches in all sorts of domains: from Skype to Google to YouTube to blogs to Buzzword to Facebook, things which were unimaginable 10 years ago have become part of everyday life.

But cognitive scientists are just not there. Not yet, I feel.

But I believe that the next huge wave of innovation will come from cognitive technologies. Bridging the gap from machine information-processing to human information-processing is something so large-scale that, as soon as the first big-hit cognitive engineering enterprise comes up, venture capitalists and scientists and engineers from all walks of life will start jumping ship.

We know a lot about the brain. We know a lot about perception. We know a lot about language and vision, and we have all sorts of psychological experiments detailing human behavior and cognition. But we are still at the stage of a pre-foundational science. There is widespread debate about, well, just about everything. Consider this:

  • is logic essential?
  • is neuroanatomy essential?
  • is "a body" essential (as in the embodied paradigm)?
  • is the mind modular?
  • is the computer a good metaphor for the mind?
  • is the mind a "dynamical system"?
  • is syntax essential to understand language?
These are just some of the issues over which one can find divisive battles in the literature and in our conferences. I didn't consult anything to bring up this list, and I'm sure it could grow to pages if one really wanted to make a point. Our science is still in a foundational stage. We still need a lot of philosophy and a lot of new metaphors before settling into a set of common concepts, unified theories, and, of course, computational models.

I believe that a good starting point is studying human intuition. I don't study logic, or the brain, or syntax. I'd like to understand, and build a computational model of, something as simple as Shane Frederick's bat-and-ball problem: "If a bat and a ball cost $1.10, and the bat costs $1.00 more than the ball, how much does the ball cost?"
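The arithmetic that deliberate thought has to do while intuition shouts "ten cents" is just two lines:

```python
# Bat-and-ball, in cents: bat + ball = 110 and bat = ball + 100.
total, difference = 110, 100
ball = (total - difference) // 2   # (110 - 100) / 2
bat = ball + difference
print(ball, bat)                   # → 5 105 (not the intuitive 10!)
```

The ball costs five cents; the interesting question is why nearly everyone's first answer is ten.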

I have built a computational model of human intuition in chess, the Capyblanca project. It still falls short of a full theory of human thought and expertise, of course--and, to my regret, it has been rejected without review by two top journals, with the same reply: "we're not interested in chess, send it to some specialized journal". I replied to one editor that it was not really about chess, but about general cognition, abstract thought, meaning, the works--and that the model provided essential clues towards a general theory (he then said I should resubmit, "re-framing" the manuscript towards that view).

The human mind did not evolve to deal with chess, or to watch soap operas, or to learn to read this sentence: книга находится на таблице. The human mind evolved to find meaning. It is an incredible meaning-extracting machine. And it evolved to grasp that meaning really fast, because meaning is a life-or-death matter. When we find apparently immediate meaning, that's intuition.

Sometimes intuition "fails", as in the bat-and-ball problem. But, as Christian pointed out the other day, "that's not a bug, it's a feature". Intuition is a way for us to restrict the space of possibilities really rapidly; it only "fails" because if the mechanisms weren't there, we would all be Shakey the Robot or Deep Blue: combinatorial monsters exploring huge spaces of possibilities (which is, of course, exactly what economists think we are).

If we have a model of how intuition works, the next step up is to include learning, in the general sense. How did that intuition come about? That's what Jeff Hawkins is now trying to do. I have an enormous appreciation for his work, and the very same objective: to build a general-purpose cognitive architecture, suitable for chess, vision, and one day, maybe during our lifetime, watching soap operas. Hawkins is, I think, spot on about the importance of feedback, about representation invariance (which is what Capyblanca is all about), and about repeating hierarchical structure. On the other hand, I feel the emphasis on the brain is counter-productive, and I have some criticisms of his theory which I might put in writing someday.

But let's get back to cognitive scientists as entrepreneurs. We have been having wave after wave of revolutions in computing: from mainframes to command-line PCs to Apple's (Xerox-bred) graphical interface, to the internet, and now this whole Web 2.0 thing. Each of these waves brought forth great innovation, raised economic productivity, and had a particular industrial organization. Each one of them established a platform for either doing business or connecting people. And as entrepreneurs swarm over the latest Web 2.0 platform and it consolidates, as it is consolidating right now, the business space left open will be in the hands of computational cognitive modelers.

If you can connect people to other people (Skype, Facebook), to information (Google, Wikipedia), or to things (Amazon, eBay) better than others do, you will find untold riches in that space. But current computer science, left alone, cannot provide some much-needed connections. And a huge empty space lies open for cognitive scientists of the computational sort.

As an example, imagine a web browser for the whole planet. You might be thinking your web browser can "go" to the whole planet. It can, but you can't. You can't go to a Nigerian website, then a Russian one, then an Iranian one, then a Brazilian one, and understand what is there. Machine translation sucks. And as entrepreneur Paul Graham puts it, your system has to pass the "does not suck" test. We don't need perfect translation. But it has to be better than the garbage we have today.

We are far from that goal. Current systems have a lot of computer science, and hardly any cognitive science. One set of systems goes word for word (basically); another set works statistically, having "seen" loads of previously translated texts and using some advanced techniques to guess what the output should look like. If you're translating a news text about Russian politics, you might get the idea that it is about a law, and that the law brings some tradeoffs, but you can't always get the exact feel of whether the article is pro or contra the law. Current systems can give you a vague idea of what a text is about. But machine translation needs to deal with concepts, meaning, experience, learning, culture, connotation explosions--all topics for cognitive scientists. All difficult, of course. But remember: it doesn't have to be perfect. It has to pass the "does not suck" test.
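A caricature of the first (word-for-word) family, using a hypothetical four-word dictionary of my own, shows why it fails the test: each word gets one fixed gloss, so context and idiom are lost.

```python
# A toy word-for-word "translator" (hypothetical mini-dictionary), in the
# spirit of the first family of systems: one fixed gloss per word, no
# notion of context, concepts, or meaning.
glosses = {"o": "the", "tempo": "time", "está": "is", "fechado": "closed"}

def word_for_word(sentence):
    return " ".join(glosses.get(w, w) for w in sentence.lower().split())

# "O tempo está fechado" means "the weather is overcast" in Brazilian
# Portuguese, but with one gloss per word the idiom is destroyed:
print(word_for_word("O tempo está fechado"))  # → the time is closed
```

Statistical systems do better by mimicking previously seen translations, but neither family deals with meaning, which is exactly where the cognitive scientists come in.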

Translation is one example. There are many other crucial areas in which cognitive technologies could have an impact. And let's not forget the lesson of history: generally, the killer applications were not conceived when the technology was first introduced.

I could be wrong in my own vision of how to model cognition. In all probability I am wrong on some counts; who knows? Maybe I am wrong in all philosophical and technical details. But someone out there is bound to be absolutely right in turning this Rubik's cube. And in the coming decades we should start to see these cognitive scientists having a bold impact on technological innovation, far beyond the borders of our journals and conferences.

Microsoft Word has passed away. Time of death: 12:23 AM.

Adobe has acquired Buzzword. Having one of the largest, and by the way, coolest, companies behind it will be the kiss of death to Microsoft Word.

With Buzzword, you get almost all the word-processing functionality that really matters in your browser--with the addition of online collaboration. I was one of their first beta testers, receiving my invitation from Roberto Mateu on July 19th. I have Masters and PhD students writing up papers and theses with it, in real-time collaboration, dispersed all around: Taisa is in the heart of the Amazon, Anne is at Harvard University, and Analize is in Curitiba, in the Brazilian South.

They have been adding features every month. It's one of those things that, after you have it, you wonder how you could ever live without it. Soon it will have the host of Adobe TrueType fonts, PDF support, offline functionality, etcetera. Yesterday it was alpha-geek only; now it will spread like wildfire.

Adobe isn't disclosing the financial part of the deal, but something like this just wouldn't go for less than 100 million, perhaps some multiple of that. After this huge incentive, expect similar start-ups to jump on the Adobe AIR bandwagon and, twelve months from now, spreadsheets and more sophisticated presentation programs. In the future, expect everything from charting to equation editors. Microsoft Office "Ultimate" (¿?) goes for US$679.00; the "student edition" goes for US$149.00. (By the way, Apple should just give up this space and bundle iWork into new machines.) Scoble mentions that MS will still have some "office" revenue stream, yet: "There is blood in the water even if only the early sharks can smell it."

I've just replied to Tad Staley's email, congratulating these folks.

As I wrote before, I'm almost feeling a little bit sorry for Mr. Gates. But only almost. And only a little.

But, hey, maybe that cool Zune will make up for these lost sales?

Chase and Simon (1973), Perception in chess, Cognitive Psychology 4:55–81. A scientific blunder.

Blogging on Peer-Reviewed Research

Here's an email I sent some months ago to a number of very bright people.

The 1,000-dollar offer holds until the end of this year.

Imagine if two famous biologists published a study, over 30 years ago, with two parts: in the first part, they unequivocally showed that sharks and dolphins had a strikingly different nature. In the second part, however, they tried to explain that difference by looking at the habitat of a dolphin and the habitat of a shark (i.e., the same data). Imagine that this paper would be cited by hundreds of people, for decades.

Now imagine that Chase and Simon, in a study entitled "Perception in chess" (Cognitive Psychology 4, pp. 55–81, 1973), divided it into two parts. The first part (pp. 55–61) showed that when chess masters looked at a board for 5 seconds, they could reproduce it with enormous accuracy, while beginners could reproduce no more than a few pieces. This difference could not be explained by masters' greater memory, for, on randomized positions, the effect disappeared, with masters and beginners alike able to reproduce only a few pieces of the board. Sharks and dolphins, it was clear, were different.

Now, what was the nature of the chunks? The second part of the paper devised two tasks, a 'perception task' and a 'memory task'. These tasks looked at masters' and beginners' 'inter-piece interval times' (within glances at the board, and in between glances) while reconstructing the boards. The results were unequivocal: the data were exactly the same for masters and beginners (Figs. 3 and 4). The authors pointed this out clearly:

[Perception task, p.65] "The first thing to notice is that the data are quite similar for all subjects. The latencies show the same systematic trends, and, for the probabilities, the product moment correlation between subjects are quite high: Master vs Class A=.93; Master vs Class B=.95, and Class A vs Class B =.92. The same is true for the between glance data… Thus, the same kinds and degrees of relatedness between successive pieces holds for subjects of very different skills."

[Memory task] "Again the pattern of latencies and probabilities look the same for all subjects, and the correlations are about the same as in the perception of data: Master vs Class A=.91, Master vs. Class B=.95, and Class A vs. Class B=.95".

The obvious conclusion is, of course, that whatever difference exists between Masters and Class B players, it cannot be recovered from this dataset. Nothing about the "nature of the chess chunk" can ever be obtained here.

Yet, with that dataset at hand, the authors proceeded to study the nature of the chess chunk: "These probabilities are informative about the underlying structures that the subjects are perceiving" (p. 68). How can they be, if a Master perceives the global meaning of the position and a Class B player perceives nothing?

"Our data gives us an operational method of characterizing chunks, which we will apply to the middle-game memory experiments of subject M[=Master]" (p. 78). One wonders: why bother? Send the master home. They could have gathered all they needed from a Class B subject, or from a Yanomami, after that non sequitur.

Chase and Simon (1973) explained the difference between sharks and dolphins by looking at their habitats, and the whole world bought it. At the risk of running into utter humiliation, I will PayPal one thousand dollars to the first person on this list who proves me wrong. The deadline for your thousand shiny dollars is 24 hours before the submission deadline for CogSci in Nashville, when I will go on and commit scientific suicide.

Any takers?