Saturday, October 27, 2007

Live fast, love hard, die young

Some important links today.

UPDATE: Boy survives two-hour flight to Moscow hanging onto plane wing (digg it here, story here, here, and here)

In our science section we have a great piece by The Economist. How can women still complain?

The Capyblanca Prize for industrial design goes to 3M's specially designed self-adhesive hooks: "Sticky bear is REALLY HAPPY to see you".

In our beauty department we have the solution for you big-nosed people out there: Be a Cleopatra Nose!

Paul Graham now has a feed. Check it out.

Finally, for those who only want the truth and the real truth and nothing but the truth, regularly check out the news in the official North Kolea Blog.

Victoly to North Kolea!!!!!!

Thursday, October 25, 2007

Cognitive scientists as policy advisors

If there is a social science that sees advising on matters of policy as its birthright, it's Economics. From Cold War strategy to inflation-fighting to abortion, economists have been advising policy for over a century.

They had better watch their backs now, for the cognitive scientists are coming.

In a way, economists are already cognitive scientists. They study people's (or animals'--but these are called ecologists) behavior in the aggregate. Two fundamental pillars of classical economics are the ideas of (i) incentive systems and (ii) utility.

Incentive systems assume that people will tend to do what they are given incentives for, and will tend to inhibit behavior when a negative incentive is present. Sticks and carrots, sticks and carrots. This insight, of course, is the behaviorist insight: treat the mind as a stimulus-response black box. Incentive systems work most of the time. Sometimes they backfire, as with "perverse incentives", which actually encourage people to do exactly the opposite of what the policy intended.

I have come to believe that the mind has three distinct feedback systems, and incentives apply to just one of them, a hedonic system. Give a shock to a mouse, and the poor creature will learn not to do whatever it was doing. Stimulate its brain's pleasure center and, like a heroin addict, it will ignore food and sexy females, stimulating itself all the way to its death. Inhibitions act as if they were sending a global halt message to the brain's numerous activities, while hedonically good incentives can lock behavior into something like an infinite loop.

This is a good insight, of course, but it is not sufficient to explain (and predict) behavior. More is needed. We need to look inside the black box.

Utility is related, but different. It concerns the different preferences that different people may have. I like nurses; maybe others don't. Different people, different preferences. Classical economics takes this into account, and it probably stems from the same cognitive mechanisms that build different memories through experience. Genetics and anatomy may also play a large part in determining preferences.

Another thing involved in utility is that it does not grow linearly. A 30-minute massage is a better experience than a 30-second one. But a 30-day massage would probably be worse, not better. Now, if the hedonic feedback system helps to determine your preferences, a different system, I think, acts here: an attentional feedback system. It does not directly drive behavior, but it decides what gets stored into memory. The first minute of the massage gets stored; the fifth hour is just plain boring (people in chronic pain notwithstanding, but even they should find diminishing returns).
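A throwaway sketch of that shape, in Python. The functional form and the numbers are assumptions of mine, purely for illustration; the point is only a utility that rises with diminishing returns and eventually falls.

```python
# Illustration only: an assumed utility curve for massage duration that rises
# quickly, flattens out, and eventually declines for absurdly long massages.
import math

def massage_utility(minutes: float) -> float:
    return minutes * math.exp(-minutes / 60.0)  # peaks around one hour, then falls

for minutes in [0.5, 30, 60, 300, 43_200]:  # 30 s, 30 min, 1 h, 5 h, 30 days
    print(f"{minutes:>9} min -> utility {massage_utility(minutes):8.3f}")
```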

Now, here's some cognitive science creeping under the economists' stage.

Herbert Simon, for one, pointed out that we just can't figure things out--or at least not as infinitely deeply as the rational-actor model suggests. The space of possibility is just too monstrously huge. Here is one of the goals of the Human Intuition Project: to study how intuition guides the choice-generating process, and the repercussions of this for economics. Intuition destroys the vastness of the space of possibility, presenting a tractable course of thought and action. (Perhaps a misguided one, of course, but if it's here, there are evolutionary reasons for it.)
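To get a feel for how monstrous, here is a back-of-the-envelope calculation, assuming the commonly cited average of roughly 35 legal moves per chess position; the numbers are illustrative, not taken from any particular model.

```python
# Rough size of the lookahead space in a chess-like game with ~35 moves per position.
branching = 35
for plies in (2, 4, 6, 8, 10):
    print(f"{plies:2d} plies ahead: roughly {branching ** plies:,} lines to consider")
```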

Daniel Kahneman and Amos Tversky truly turbocharged Simon's work--eventually leading to the school of behavioral economics. They showed that framing affects decision-making, that the utility curve is susceptible to language, and that preferences can reverse even within a single individual at a single instant in time. But this is something economics should have embedded in its models ever since Thomas Schelling, another psychologically inclined economist. I love how he brings up our Jekyll-and-Hyde nature, and the deep, deep questions involved. Choice and Consequence, Micromotives and Macrobehavior, and The Strategy of Conflict are beautiful cognitive science, an enmeshing of philosophy, psychology, economics, and mathematics. (A Schelling point, for example, should be something studied by cognitive psychologists--though I've never seen a single textbook mention the term.)

Language and framing now seem to be on the agenda of cognitive scientists as policy advisors. George Lakoff, looking at language, tells how the Bush team used language to present policies that become impossible to attack. My favorite example is the term "tax relief". Only a monster can be against any kind of relief. Watch your language, sir. Beware if you want to argue against this policy.

Even Steve Pinker seems to be coming aboard. In his recent book, he shows how linguistic indirection distorts, for example, game-theoretical models. Nobody bribes a cop in direct language; corruption has an etiquette. Here in Brazil you can buy a cop for a "cervejinha" (i.e., a small beer). In China or Greece it would be called an "envelope", in Iraq a "good coffee", in Mexico a "refresco", in North Africa "un petit cadeau". Everyone knows the meaning of the message, but nobody uses the information-efficient terms: "Can I bribe you, officer?"

The euphemisms and linguistic indirection introduce plausible deniability, thus distorting the game-theoretical scenario, as Pinker points out: diplomats have long known them to be "not a bug, but a feature" of language. Teenage kids rarely realize that the fastest way to a girl's, ahem, "heart" is never the direct route.

There are some very important insights I feel should find their way, eventually, into economic models:

  • The distinction between hedonic feedback systems and attentional feedback systems;
  • Hofstadter's fluid concepts model of cognition;
  • The choice-generating process studied by Gary Klein, Gerd Gigerenzer, Barry Schwartz, and many others.
I had plans that Bia, a mathematician who joined my research group for the Ph.D., could make a great contribution here. But I guess Ἄτροπος had other plans.

The ideas live on, though.

In some years, we're going to start seeing undergraduate courses on cognitive science flourish. MIT has one. But what will the thousands of students do after graduation? From MIT's website, it seems that most careers require a PhD:

After Graduation

The majority of people who major in Brain and Cognitive Sciences attend graduate school, in fields such as medicine, neuroscience, psychology, cognitive science, or computer science. Some attend law or business school. With or without advanced degrees, majors work in a diverse array of careers, as researchers and professors, in telecommunications, financial advising, human resources and human relations, counseling, teaching K through 12, ergonomics, environmental design, robotics, AI.

I think that's not enough. Most undergraduates want a job after school, and undergraduate-level cognitive scientists should play great roles as policy designers and advisers--and, of course, in entrepreneurship.

Monday, October 22, 2007

Mirror Perception

Mirror perception is one of the trickiest experiences for a human to learn. This video gives some hints for understanding the phenomenon.



Saturday, October 20, 2007

Saturday, October 13, 2007

Essay on the fetish with nurses

The other day I was mentioning a case in which a nurse responds incredibly rapidly to a furiously serious situation in a neonatal intensive care unit. Then this guy comes up with this:

"You really have a fetish with nurses, huh?"

To which I reply: "Only when their name isn't OLGA."

Why study these cases in a business school? What is the relevance? Why should a decision-making course actually start with the case of a radar operator, and also look at, for instance, chess players or firefighters? (No fetish here, thanks for asking--but remember: not all firefighters are equal.)

What can business students get from studying this?

Superficially, people such as nurses, doctors, firefighters, radar operators, and chess players do tasks which are extremely distinct from what a manager does. But look closer, and you'll start to see deep, deep similarities in their cognitive processes.

Most white-collar work is, of course, like this: reading email, downloading attachments and working on them and sending them back, deleting those cheap v!agr@ emails, talking to people over the phone, not falling asleep in meetings while trying to sound intelligent, and making "exciting, enthusiastic" presentations.

What ties managers and chess players together is that their job consists, mostly, of separating what's important in a situation from what's irrelevant in it.

Imagine the immense amount of paper and phone calls trying to reach, for instance, Larry Ellison this coming week. It will be vast. Most of it will be filtered by secretaries and managers with that specific job in mind. But he'll still have to deal personally with a large load of "incoming" information. Two documents sit on his desk, waiting for a signature. What's important, and what's not? How to separate what's important from what's irrelevant? It's extremely tricky, and there's not a single isolated piece of information that's up to the task.

Sometimes, a single comma can cost you a million Canadian dollars.

I believe something like 70% of my own email is marked "urgent". Hardly any of it is, of course. So a "high-priority" or "urgent" mark is not a good source of information. Neither is the sender: it could be someone extremely important, and yet the message may still be rather unimportant. There's not a single isolated piece of information that will tell us whether something is relevant or not.

It's in the whole scenario. Importance is spread over the whole chessboard, the whole health history of the baby turning blue, the whole situation around a strange fire that's just too hot to handle (though it looks, to the inexperienced, as if it should be easy to handle).

It's all in the struggle between one's expectations and one's perception. If you've acquired precise expectations about a situation, then you'll know what to expect. This is one of Jeff Hawkins's crucial points. Did you know that the brain is "saturated with feedback connections"? In some parts of the cortex, there seem to be ten times more connections going from the brain to the senses (e.g., from your brain to your eyes) than coming from the senses to the brain. Why is there so much bandwidth going in the "wrong" direction? The answer seems to be that the brain is telling the senses what to expect--"and only report back to me if something is different from what I'm telling you". That's what Hawkins calls the memory-prediction framework, and it is close in philosophy to what the folks over at Overcoming Bias call cached thoughts.
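Here is a minimal sketch, in Python, of that "only report the surprises" traffic pattern. The readings, tolerances, and the neonatal flavor are invented for illustration; this is just the information flow being described, not Hawkins's actual model.

```python
# A minimal sketch: the "higher level" tells the "senses" what to expect,
# and only departures from expectation travel back up.
EXPECTED = {"heart_rate": 120, "color": "pink", "breathing": "regular"}
TOLERANCE = {"heart_rate": 15}   # how much a numeric reading may drift unnoticed

def report_surprises(observation: dict) -> dict:
    """Return only the readings that depart from expectation."""
    surprises = {}
    for key, value in observation.items():
        expected = EXPECTED.get(key)
        if isinstance(value, (int, float)) and isinstance(expected, (int, float)):
            if abs(value - expected) > TOLERANCE.get(key, 0):
                surprises[key] = (expected, value)
        elif value != expected:
            surprises[key] = (expected, value)
    return surprises

print(report_surprises({"heart_rate": 122, "color": "pink", "breathing": "regular"}))
# {}  -> everything as predicted, nothing travels "up"
print(report_surprises({"heart_rate": 70, "color": "blue", "breathing": "labored"}))
# {'heart_rate': (120, 70), 'color': ('pink', 'blue'), 'breathing': ('regular', 'labored')}
```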

This can only be done through experience, of course. So an international master reconstructs a chess position after a mere five-second presentation, and we can't do it.

When something departs from expectations, your attention is rapidly grabbed, because of this high-bandwidth information the brain is sending to your eyes. If you have experience, you know what to expect. Two good questions to ask every time you're studying decision-making, intuition, or judgment are: how would an inexperienced person deal with this situation? And, of course, the classic: how could a machine do this? What are the information-processing mechanisms going on here?

How do we cache those thoughts? What are the precise cognitive operations involved? FARG theory has, in my opinion, solved the problem of how we classify things into categories in a satisfactory manner. So now the issue is: how do these categories and concepts form in the first place? Harry Foundalis has the best thesis on the subject. If this problem is nailed in the coming years, then we'll be in rich, rich, unexplored territory.

And the nurses? Aren't they incredible? These creatures exist for the sole purpose of making you feel better.

Dios mio! Isn't that awesome?

Saturday, October 6, 2007

A modest (billion-dollar) proposal

Imagine the following scenario: a secretive meeting, years ago, when Apple's Steve Jobs, the benevolent dictator, put in place a strategy to get into the music business. It included not only a gadget, but also an online store, iTunes. I have no idea how that meeting went, but one thing is for sure: many people afterwards must have been back-stabbing Jobs, muttering "The music business? We're going to sell music? This guy has totally lost it."

Fact of the matter was, technology had forever changed the economics of the music business, and Jobs could see it.

Having said that, I'd like to make a modest, billion-dollar proposal to the likes of Adobe, Yahoo, Apple, IBM, Microsoft, and whoever else might be up to the task.

Cui Bono?

Think about science publishing. I publish papers for a living. My first paper came out in Biological Cybernetics, a journal which cost, in 1998, over US$2000 for a one-year subscription. I live scared to death of Profa. Deborah, who reviews my scientific output. And there are others like me in this world. Oh yes, many others.

The economics of science publishing is completely crazy for this day and age. Authors put enormous effort into bringing their work to light; editors and journal and conference referees also put in enormous effort. All of that is unpaid, of course (or at best indirectly paid, in the hopes of tenure and/or prestige). But then our masterpieces go to a journal, which obliges us to transfer copyright to the likes of Elsevier, or Springer, or someone else. Then some money starts to show up! According to Wikipedia, Springer had sales exceeding €900 million in 2006, while Elsevier upped the ante to a pre-tax profit (in the REED annual report) of a staggering €1 billion (on €7.9 billion turnover). But for those who brought out the scientific results, those who bring the content and the fact-checking by referees and editors, all that work goes unpaid. The money goes to those who typeset it, store it on a server, print it out, and mail it to libraries worldwide. And let's not forget those who actually pay for the research, the public, as most research is government-financed. In the words of Michael Geist, a law professor:

Cancer patients seeking information on new treatments or parents searching for the latest on childhood development issues were often denied access to the research they indirectly fund through their taxes
How did we get here? A better question is: how could it have been otherwise? In the last decades, how could a different industrial organization have appeared? Cui Bono?

Lowly (and busy) professors and universities were obviously not up to the risky and costly task of printing and mailing thousands of journals worldwide, every month. A few societies emerged and, mostly funded by their membership, were up to the task. So, in time, the business of science publishing emerged and eventually consolidated in the hands of a few players. And these few players could focus on typesetting, printing, and mailing much better than the equation-loving professors or the prestige-and-money-seeking universities.

The other day I tried to download my own paper published in the journal Artificial Intelligence, and I was asked to pay US$30.00 for it. That's the price of a book, and I was the author of the thing in the first place!

Now, if you ask me, technology has forever changed the economics of the scientific publishing business, and it's high time for someone like Jobs to step forward.

Adobe Buzzword is especially suited to do this. Most scientific publishers (Elsevier, Springer) and societies (IEEE, ACM, APA, APS, INFORMS) have just one or two typesetting styles for papers. I imagine a version of Buzzword which carries only the particular typesetting style(s) of the final published document, so researchers would prepare manuscripts already ready for publication (there are glitches today, of course, like high-quality images and tables and equations--but hey, we're talking about Adobe here!). A submit button would send the paper for evaluation, either to a journal or a conference. Referees could make comments and annotations on the electronic manuscript itself, or even suggest minor rewritings of a part here and there. The process would be much smoother than even the most modern of online submission systems. And, since Adobe has Flash, they're especially positioned to bring up future papers with movies, sounds, screencasts, and whole simulations embedded. Wouldn't that be rich? Doesn't that fit beautifully with what's stated on their page?

Adobe revolutionizes how the world engages with ideas and information.

But Buzzword is just my favorite option (because it enables beautiful typesetting, is backed by a large, credible player, works on any platform, and enables worldwide collaboration between authors, editors, and referees). Other options could be desktop processors (MS Word, Pages, OpenOffice, etc.). There would be a productivity gain from using something like Buzzword, but using desktop processors wouldn't affect the overall idea.

Now, why would the people in Adobe, Yahoo, SUN, IBM, Microsoft, Google, or others actually want to do a thing like that?

There are two reasons. The first one is goodwill, the second one is money.

Goodwill

I recently had a paper outright rejected by the IBM Systems Journal. In retrospect, I now see that it was a very bad call to submit there. I had mentioned that choice to the editor of a very prestigious scientific journal, and he responded by saying: "They're going to hate it. They haven't been in the business of publishing great original science for a long time now. It's just a marketing thing; they're in the business of trying to impress customers." I responded that I thought they'd be open-minded, that the journal had had some great contributions in the past, and that I thought it was just great. I was, of course, wrong. They didn't even look at the thing; they didn't even bother to send back a message. After a quick check, I felt enormously stupid: all papers, or maybe not all but something way above 90%, come from IBM authors. The IBM Systems Journal, it seems to me, is now a branch of IBM's marketing department. And while it may impress less sophisticated customers, it's definitely a huge loss for IBM.

The Systems Journal (and their R&D journal) used to be a fountain of goodwill for IBM. Scientists took pride in publishing there, and hordes of researchers (not customers) browsed it and studied it carefully. It was a fountain of goodwill--with a direct route to IBM's bottom line: it attracted the best scientists to IBM. Now that it's in the hands of marketing, you can hardly find any serious scientist considering it as a potential outlet. If I were at IBM, I'd be fighting to change things around. But I'm not there, so I can speak the truth as I see it, and I can just submit somewhere else. The Bell Labs Technical Journal also seems to be meeting the same "marketing department" fate. Don't expect to see Nobel Prizes coming from these journals any time soon.

When these journals didn't belong to marketing, they brought, at least to this observer, a huge amount of goodwill and good publicity to their respective companies. The HR department must have loved choosing among the best PhDs dying to get into IBM. Sad to say, I doubt that the best PhDs are begging to work at these companies anymore.

Yet IBM could change things around. As could Adobe, SUN, Apple, Microsoft, Google, Yahoo, and many others. What I feel they should do is establish a platform for online paper submission, review, and publication. This platform should be made openly available to all scientific societies, for free. From the prestigious journal Cognitive Science to the Asia-Pacific Medical Education Conference, the platform should be free (to societies, journals, and conferences), and the papers published online should be freely accessible to all--no login, no paywall, nothing in the way. Copyright should remain in the hands of authors. Gradually, one after another, journals and conferences would jump ship, as the platform gained credibility and respectability.

Now here's the kicker. It's not only about goodwill. There's money to be made.

Money

One crucial point is for the platform to be freely accessible to all. But you can do that and still block the googlebot, the yahoobot, and every other bot but your own. Let's say, for instance, that Microsoft does something of the sort. In a few years' time, not only does it get the goodwill of graduate students studying papers published on science.microsoft.org (as opposed to hey-sucker-pay-thirty-bucks-for-your-own-paper Elsevier), but that website also becomes the only way to search for such information. As we all know, advertising is moving online: according to a recent study, last year saw "$24 billion spent on internet advertising and $450 billion spent on all advertising". Soon we'll reach US$100 billion a year in advertising on the web. And imagine having a privileged position in front of the eyeballs of graduate-educated people, from medicine to science to economics to business to engineering to history.
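Mechanically, the blocking part is just a robots.txt policy. A minimal sketch, checked with Python's standard urllib.robotparser; the bot names and the paper path are hypothetical, and a real deployment would obviously need more than this.

```python
from urllib.robotparser import RobotFileParser

# Allow only the house crawler; every other bot is shut out of the archive.
robots_txt = """
User-agent: msnbot
Disallow:

User-agent: *
Disallow: /
""".strip().splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

print(parser.can_fetch("msnbot", "/papers/some-paper.pdf"))     # True: house bot may index
print(parser.can_fetch("Googlebot", "/papers/some-paper.pdf"))  # False: everyone else blocked
```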

I hope someone will pull something like this off. Maybe for the goodwill. Or maybe for the money.

Many companies could pull it off, but some seem especially suited to the task. My favorite would be Adobe--with Buzzword and AIR and Flash and PDFs, that's definitely my choice. Google might want to do it just to preempt some other company from doing it first and blocking the googlebot from valuable scientific research. Microsoft, the Dracula of the day, certainly needs the goodwill, and it could help it hang on to the MS-Word lock-in. Maybe Amazon would find this interesting--it fits nicely with their web storage and search dreams. Yahoo would have the same reasons as Google.

I don't see Apple doing it. I think it could actually hurt their market value, as investors might think they were overstretching, expanding into yet another new market.

I don't see IBM or SUN doing it either; in fact, if anyone in a board meeting ever proposed this, I can only see the exact same back-stabbing that must have gone on, years ago, at Apple: "Science publishing? This guy has totally lost it. This is IBM, and that's not the business we're in." They're too busy handling their own internal office politics--who's getting promotions and pay packages. Innovation is hardly coming from there (though both have been embracing open source to a certain degree).

One thing is sure. The open-access movement is gaining momentum every day. It's time to sell that Elsevier stock.

Just a final note. If any player is willing to do this, use a .org domain name. Don't name it "Microsoft Science"; that won't work with intelligent, independent scientists. Use a domain name such as science.yahoo.org or science.adobe.org, and call it "Open Science", "World of Science", anything... but please don't try to push your name too far. Let it grow slowly.

And just in case someone wants to pull this off, and is actually wondering... I'm right here.

News from the Spanish front

Maybe I should mention something about the Club of Rome meeting last week; some positive things happened for our growing Brazilian chapter. The first of them was due to Claudia's immense efforts: we now have a beautiful copy of Limits to Growth: The 30-Year Update in Portuguese. We'll be working on that launch soon.

I had a little setback, which I plan to write about later on. But I learned a lesson from the Samurai: good shoes, a good bed, and a great job. More soon.



Another thing I'm glad about is the deal with my good friends Rolando and Sebastian--who brought along one of the first production models of the very cool US$100 laptop--to execute Digital World 2008 in Brazil. Also in the picture are Raoul Weiler (Belgium) and Yolanda Rueda (Spain). I'm not sure I should publish much additional information here, but some of our partners are the World Wide Web Consortium, the "Comitê para Democratização da Informática", and of course NETMATRIX. We hope to bring Prof. Negroponte next year.



We finally had a chance for a meeting of the Brazilian chapter, over dinner. Here are (clockwise from center-left) Profa. Eda Coutinho, Prof. Heitor Gurgulino (now a vice-president of the Club of Rome), Claudia Santiago, me, Mrs. Lilian (Prof. Gurgulino's wife), and Prof. José Aristodemo Pinnoti.


Oh, and Don Juan Carlos I, the King of Spain--such a nice fellow.

Thursday, October 4, 2007

Breaking into categories: a way of consuming the world




The essence of human cognition is to divide the world into categories so that one can handle parts of it. These parts can then be combined to build new parts. This is the essence of manipulating a language, where language is a form of knowledge representation.


Let's look at an example: comprehending human biological sensors. It seems to me that there are seven of them.


1. Sight
2. Hearing
3. Taste
4. Smell
5. Touch
6. Kinetics
7. Temperature


But Kinetics may be seen as an internal form of touch, and Temperature as a kind of micro-touch. Well, now we are back to only the classic senses. But hearing might also be a physical micro-touch. And smell has something in common with taste--some kind of chemical interaction. In the limit, even sight might be a collision between light packets and the retina, making it a kind of touch as well.


Categorization is, in this way, our choice to put some things together and keep others apart. We choose a similarity criterion and go. This choice shows our way of seeing life. It directs our perception and the way we consume the world.

Wednesday, October 3, 2007

Listen to the Samurai!

Really, I really mean it. Listen to the Samurai!

LISTEN TO THE SAMURAI!

And remember: boys don't cry.


Autoprogramming

I was trained as an operations researcher, both in my PhD with Horacio Yanasse (PhD, MIT OR center) and in my MSc with José Ricardo de Almeida Torreão (PhD, Brown Univ Physics).


An operations researcher is a decision scientist: a mixture of an economist with an engineer, or a mixture of a computer scientist with a business administrator. The basic idea is to find a business problem, build a mathematical model of it, and then solve it, as in obtaining the lowest-cost solution or the highest-profit one. A (rather simple) model usually looks like this:
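Something along these lines, say--a toy production-mix linear program (my own made-up illustration, not the figure from the original post):

```latex
% A toy linear program: choose quantities x_1, x_2 to maximize profit
% subject to capacity limits (all numbers invented for illustration).
\begin{align*}
\max_{x_1,\,x_2} \quad & 3x_1 + 5x_2          && \text{(profit)}\\
\text{s.t.} \quad      & 2x_1 + 4x_2 \le 100  && \text{(machine hours)}\\
                       & 3x_1 + x_2  \le 80   && \text{(labor hours)}\\
                       & x_1,\ x_2 \ge 0.
\end{align*}
```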


During all those years, of course, I made a lot of friends working in operations research.

Can you imagine what operations researchers talk about when they're not doing OR? When they're having dinner or a cup of coffee?

It usually goes like this:

"God, I still don't get that."

"What?"

"This thing, man, it's so depressing."

"What?"

"You know, the fact that industry practically ignores what we do. We keep on here doing amazing work which could save millions, perhaps billions of dollars, and we're practically ignored by industry. It's so hard to see a successful application in the real world. Why? How can this be? Isn't it unbelievable?"

All the conclusions we reached in the past put the blame on "the others". Businesspeople are just stupid. They can't grasp this. Or maybe that classic: "They'll spend 10 million in advertising to make 11 million, but they won't spend 1 million to save 10 million. Stupid, stupid people."

At first I thought it was basically a Brazilian issue. The Brazilian OR community is strong; there are truly world-class people in it. But OR is hardly applied to industry around our jungles.

But then...

It's a worldwide phenomenon. Americans, Japanese, and Europeans share the same complaints.

So after many years I have come to a different conclusion. It's not that businesspeople are stupid. In fact, quite the contrary (Hopefully none of my friends still in the field will read this--but it's true).

OR isn't applied because of the nature of the work.

An OR model can indeed save billions of dollars--as some industries, such as airlines, have found out. But the problem lies in the static nature of models versus the dynamic nature of things. It doesn't reflect what the real world is like.

Let's say you've spent some years and developed a really groundbreaking model to solve, for example, fleet assignment. Airlines have numerous types of planes, each with particular carrying capacities, fuel consumption, flight range, and maintenance restrictions. How do you assign each one of your aircraft to each one of your flight legs while minimizing costs? That's a mathematical problem with a huge number of possibilities--an NP-hard problem--which demands enormous computational effort and can only be solved to optimality if the dataset is small.
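To make the structure concrete, here is a toy sketch using the open-source PuLP modeling library (my choice of tool); the fleets, legs, costs, and the crude capacity constraint are all invented for illustration--real fleet-assignment models are enormously larger and messier.

```python
# Toy fleet assignment: cover each leg with exactly one fleet type, at minimum cost.
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum, value

fleets = {"B737": 2, "A320": 3}            # aircraft available per fleet type
legs = ["GRU_GIG", "GRU_BSB", "GIG_CNF"]   # flight legs to be covered
cost = {                                   # cost of flying leg l with fleet f
    ("B737", "GRU_GIG"): 10, ("B737", "GRU_BSB"): 14, ("B737", "GIG_CNF"): 9,
    ("A320", "GRU_GIG"): 12, ("A320", "GRU_BSB"): 11, ("A320", "GIG_CNF"): 13,
}

prob = LpProblem("toy_fleet_assignment", LpMinimize)
x = {(f, l): LpVariable(f"x_{f}_{l}", cat=LpBinary) for f in fleets for l in legs}

# Objective: minimize total assignment cost.
prob += lpSum(cost[f, l] * x[f, l] for f in fleets for l in legs)

# Every leg must be covered by exactly one fleet type.
for l in legs:
    prob += lpSum(x[f, l] for f in fleets) == 1

# A fleet type cannot cover more legs than it has aircraft (a crude capacity proxy).
for f in fleets:
    prob += lpSum(x[f, l] for l in legs) <= fleets[f]

prob.solve()
for (f, l), var in x.items():
    if var.value() == 1:
        print(f"{l} flown by {f}")
print("total cost:", value(prob.objective))
```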

After you have a working system, the problem becomes clear. If and when the rules of the game change, your math model no longer reflects the new reality. It either has to take on more restrictions or, in the most usual of cases, it has to be rebuilt from scratch, with whole new dynamics. Models are cast in stone, and business life shifts almost as rapidly as a politician's reputation. Airlines have been able to use models, as have other industries, but mostly, in real life, the music is always changing and the models can't dance to the tune.

I wrote about machine translation as an avenue for computational cognitive scientists to make an impact in technology. Here's another one.

For years I've called it "autoprogramming", and it is, I guess, a long-lost dream of computer scientists. Imagine a model which is able to self-destruct automatically when the context changes, and to reconstruct itself according to the new tune of the moment. This requires an immense amount of perception, learning from feedback, flexible adaptation, a high-level, abstract view of what's going on, and other stuff which shows up, for example, in the Copycat project, but is far, far away from current OR/management science.

This type of self-reorganizing model should, in principle, exhibit a whole spectrum of cognitive abilities. It should understand what's going on. As of today, it is pure science fiction. But it can be done, especially if one starts from restricted domains which can change only within some small boundaries.

There's a lot of research going on to make solution algorithms more flexible and adaptable--on meta-heuristics and on meta-meta-heuristics; however, it's one thing to have flexible solution methods, and another thing entirely to have a flexible diagnosis/model/solution system. The fact that the models and problems change practically weekly makes it extremely unlikely that industry will ever adopt them in a truly large-scale manner.

This is largely unexplored territory, and cognitive technologies are especially suited to explore it. If a nurse can go through the diagnosis/model/solution cycle in the furiously fast-changing scenario of a baby turning blue, then we know that it's possible, in information-processing terms, to do it. For the time being, "autoprogramming" is used for the ridiculously simple task of re-programming an RF tuning device after a power failure.

Meanwhile, the real thing I'm daydreaming here remains the stuff of science fiction.

Monday, October 1, 2007

Cognitive scientists: the next wave of entrepreneurs

There is today an immense flow of innovation going on on the web. Entrepreneurs are finding untold riches in all sorts of domains: from skype to google to youtube to blogs to buzzword to facebook, things which were unimaginable 10 years ago have become part of everyday life.

But cognitive scientists are just not there. Not yet, I feel.

But I believe that the next huge wave of innovation will come from cognitive technologies. Bridging the gap from machine information-processing to human information-processing is something so large-scale that, as soon as the first big-hit cognitive engineering enterprise comes up, venture capitalists and scientists and engineers from all walks of life will start jumping ship.

We know a lot about the brain. We know a lot about perception. We know a lot about language, vision, we have all sorts of psychological experiments detailing human behavior and cognition. But we are still in a stage of a pre-foundational science. There is widespread debate about, well, just about everything. Consider this:

  • is logic essential?
  • is neuroanatomy essential?
  • is "a body" essential (as in the embodied paradigm)?
  • is the mind modular?
  • is the computer a good metaphor for the mind?
  • is the mind a "dynamical system"?
  • is syntax essential to understand language?
These are just some of the issues over which one can find divisive battles in the literature and at our conferences. I didn't consult anything to bring this list up, and I'm sure it could grow to pages if one really wanted to make a point. Our science is still in a foundational stage. We still need a lot of philosophy and a lot of new metaphors before settling into a set of common concepts, unified theories, and, of course, computational models.

I believe that a good starting point is the study of human intuition. I don't study logic, or the brain, or syntax. I'd like to understand, and build a computational model of, something as simple as Shane Frederick's bat-and-ball problem: "If a bat and a ball cost 110, and the bat costs 100 more than the ball, how much is the ball?"
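The answer that intuition screams is 10; writing the constraint out shows why that is wrong:

```latex
% The bat-and-ball arithmetic, written out.
\begin{align*}
\text{ball} + \text{bat} = 110, \qquad \text{bat} &= \text{ball} + 100\\
\text{ball} + (\text{ball} + 100) &= 110\\
2\,\text{ball} = 10 \quad\Rightarrow\quad \text{ball} &= 5, \ \text{not the intuitive } 10.
\end{align*}
```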

I have built a computational model of human intuition in chess, the Capyblanca project. It still falls short of a full theory of human thought and expertise, of course--and, to my regret, it has been rejected without review by two top journals, with the same reply: "we're not interested in chess, send it to some specialized journal". I replied to an editor that it was not really about chess, but about general cognition, abstract thought, meaning, the works--and that the model provided essential clues towards a general theory (he then said I should resubmit, "re-framing" the manuscript towards that view).

The human mind did not evolve to deal with chess, or to watch soap operas, or to learn to read this sentence: книга находится на таблице. The human mind evolved to find meaning. It is an incredible meaning-extracting machine. And it evolved to grasp that meaning really fast, because that is a life-or-death matter. When we find apparently immediate meaning, that's intuition.

Sometimes intuition "fails", as in the bat-and-ball problem. But, as Christian pointed out the other day, "that's not a bug, it's a feature". Intuition is a way for us to restrict the space of possibilities really rapidly, so it only "fails" because, if the mechanisms weren't there, we would all be Shakey the robot or Deep Blue--combinatorial monsters exploring huge spaces of possibility (which is, of course, exactly what economists think we are).

If we have a model of how intuition works, the next step up is to include learning, in the general sense. How did that intuition come about? That's what Jeff Hawkins is now trying to do. I have an enormous appreciation for his work, and the very same objective: to build a general-purpose cognitive architecture, suitable for chess, vision, and one day, maybe during our lifetime, watching soap operas. Hawkins is, I think, spot on about the importance of feedback, the issue of representation invariance (which is what Capyblanca is all about), and repeating hierarchical structure. On the other hand, I feel the emphasis on the brain is counter-productive, and I have some criticisms of his theory which I might put in writing someday.

But let's get back to cognitive scientists as entrepreneurs. We have been having wave after wave of revolutions in computing: from mainframes to command-line PCs to Apple's (well, Xerox's) graphical interface, to the internet, and now this whole web2.0 thing. Each of these waves brought forth great innovation, raised economic productivity, and had a particular industrial organization. Each one of them established a platform for either doing business or connecting people. And as the entrepreneurs swarm over the latest web2.0 platform and it consolidates, as it is consolidating right now, the business space left open will be in the hands of computational cognitive modelers.

If you can connect people to either other people (skype, facebook), to information (google, wikipedia), or to things (amazon, ebay), better than others, you will find untold riches in that space. But current computer science, left alone, cannot provide some much-needed connections. And a huge empty space lies open for cognitive scientists of the computational sort.

As an example, imagine a web browser for the whole planet. You might be thinking your web browser can "go" to the whole planet. It can, but you can't. You can't go to a Nigerian website, then a Russian one, then an Iranian one, then a Brazilian one, and understand what is there. Machine translation sucks. And as entrepreneur Paul Graham puts it, your system has to pass the "does not suck" test. We don't need perfect translation. But it has to be better than the garbage we have today.

We are far from that goal. Current systems have a lot of computer science and hardly any cognitive science. One set of systems goes word for word (basically); another set works in a statistical manner, having "seen" loads of previously translated texts and using some advanced techniques to guess what the output should look like. If you're translating a news text about Russian politics, you might get the idea that it is about a law, and that there are some tradeoffs the law brings, but you can't always get the exact feel of whether the article is for or against the law. Current systems can give you a vague idea of what a text is about. But what machine translation needs is to deal with concepts, meaning, experience, learning, culture, connotation explosions--all topics for cognitive scientists. All difficult, of course. But remember: it doesn't have to be perfect. It has to pass the "does not suck" test.
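For the flavor of the first family, here is a deliberately naive word-for-word "translator" in Python; the four-entry lexicon is my own toy illustration, with rough glosses.

```python
# Word-for-word gloss: no word order, no articles, no sense disambiguation.
lexicon = {
    "книга": "book",
    "находится": "is located",
    "на": "on",
    "таблице": "table",
}

def word_for_word(sentence: str) -> str:
    return " ".join(lexicon.get(word, f"<{word}?>") for word in sentence.split())

print(word_for_word("книга находится на таблице"))
# -> "book is located on table": barely readable, and anything longer or more
#    idiomatic quickly degrades into word salad.
```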

Translation is one example. There are many other crucial areas in which cognitive technologies could have an impact. And let's not forget the lesson of history: generally, the killer applications were not conceived when the technology was first introduced.

I could be wrong in my own vision of how to model cognition. In all probability I am wrong on some counts; who knows, maybe I am wrong on all philosophical and technical details. But someone out there is bound to be absolutely right in turning this Rubik's cube. And in the coming decades we should start to see these cognitive scientists having a bold impact on technological innovation, far beyond the borders of our journals and conferences.

Microsoft Word has passed away. Time of death: 12.23AM.

Adobe has acquired Buzzword. Having one of the largest (and, by the way, coolest) companies behind it will be the kiss of death for Microsoft Word.

With Buzzword, you get almost all the word-processing functionality that really matters in your browser--with the addition of online collaboration. I was one of their first beta testers, receiving my invitation from Roberto Mateu on July 19th. I have Master's and PhD students writing up papers and theses with it, in real-time collaboration, dispersed all around: Taisa is in the heart of the Amazon, Anne is at Harvard University, and Analize is in Curitiba, in the Brazilian south.

They have been adding features every month. It's one of those things that, after you have it, you wonder how you ever lived without it. Soon it will have the host of Adobe TrueType fonts, PDF support, offline functionality, etcetera. Yesterday it was alpha-geek only, but now it will spread like wildfire.

Adobe isn't disclosing the financial part of the deal, but something like this just wouldn't go for less than 100 million, perhaps some multiple of that. After this huge incentive, expect similar start-ups to jump on the Adobe AIR bandwagon and, twelve months from now, spreadsheets and more sophisticated presentation programs. In the future, expect everything from charting to equation editors. Microsoft Office "Ultimate" (?) goes for US$679.00; the "student edition" goes for US$149.00. (By the way, Apple should just give up this space and bundle iWork into new machines.) Scoble mentions that MS will still have some "office" revenue stream, yet: "There is blood in the water even if only the early sharks can smell it."

I've just replied to Tad Staley's email, congratulating these folks.

As I wrote before, I'm almost feeling a little bit sorry for Mr. Gates. But only almost. And only a little.

But, hey, maybe that cool Zune will make up for these lost sales?

Chase and Simon (1973), "Perception in chess", Cognitive Psychology 4:55-81. A scientific blunder.

Blogging on Peer-Reviewed Research

Here's an email I sent some months ago to a number of very bright people.

The 1000-dollar offer holds until the end of this year.

Imagine if two famous biologists had published a study, over 30 years ago, with two parts: in the first part, they unequivocally showed that sharks and dolphins had a strikingly different nature. In the second part, however, they tried to explain that difference by looking at the habitat of the dolphin and the habitat of the shark (i.e., at the same data for both). Imagine that that paper would then be cited by hundreds of people, for decades.

Now consider that Chase and Simon, in a study entitled "Perception in chess" (Cognitive Psychology 4, p. 55-81, 1973), divided it into two parts. The first part (p. 55-61) showed that when chess masters looked at a board for 5 seconds, they could reproduce it with enormous accuracy, while beginners could not reproduce more than a few pieces. This difference could not be explained by masters' greater memory, for, in randomized positions, the effect disappeared, with masters and beginners alike able to reproduce only a few pieces of the board. Sharks and dolphins, it was clear, were different.

Now, what was the nature of these chunks? The second part of the paper devised two tasks, a "perception task" and a "memory task". These tasks looked at masters' and beginners' inter-piece interval times (within glances at the board, and between glances) while reconstructing the boards. The results were unequivocal: the data were exactly the same for masters and beginners (Figs. 3 and 4). The authors pointed this out clearly:

[Perception task, p.65] "The first thing to notice is that the data are quite similar for all subjects. The latencies show the same systematic trends, and, for the probabilities, the product moment correlation between subjects are quite high: Master vs Class A=.93; Master vs Class B=.95, and Class A vs Class B =.92. The same is true for the between glance data… Thus, the same kinds and degrees of relatedness between successive pieces holds for subjects of very different skills."

[Memory task] "Again the pattern of latencies and probabilities look the same for all subjects, and the correlations are about the same as in the perception of data: Master vs Class A=.91, Master vs. Class B=.95, and Class A vs. Class B=.95".

The obvious conclusion is, of course, that whatever difference exists between Masters and Class B players, it cannot be obtained from this dataset. Nothing about the "nature of the chess chunk" can ever be obtained here.

Yet, with that dataset at hand, the authors proceeded to study the nature of the chess chunk: "These probabilities are informative about the underlying structures that the subjects are perceiving" (p. 68). How can they be, if a master subject perceives the global meaning of the position and a Class B player perceives nothing?

"Our data gives us an operational method of characterizing chunks, which we will apply to the middle-game memory experiments of subject M [=Master]" (p. 78). One wonders: why bother? Send the master home. They could have gathered all they needed from a Class B subject, or from a Yanomami, after that non sequitur.

Chase and Simon (1973) explained the difference between sharks and dolphins by looking at their habitats, and the whole world bought it. At the risk of utter humiliation, I will PayPal one thousand dollars to the first person on this list who proves me wrong. The deadline for your thousand shiny dollars is 24 hours before the submission deadline for CogSci in Nashville, when I will go ahead and commit scientific suicide.

Any takers?