Thursday, August 30, 2007

Hofstadter's advances (i): Emergence & Priming

Here's a slidecast of class #2. In this class we discuss two of Hofstadter's groundbreaking ideas: emergence and priming.

Even if today most people don't associate Hofstadter with the idea that the mind is the emergent product of micro-interactions, and even though connectionists seem to think it has always been an obvious idea, it's worth pointing out that that's not true. He deserves a large share of the credit for it. GEB paved the way, as Margaret Boden wrote; and a chapter in Metamagical Themas, "Subcognition as Computation", spelled it out--back in 1985, well before the PDP bible.

Another incredibly important idea concerns priming. I show a bunch of Derren Brown's TV-style experiments and talk about one or two serious experiments in priming--those interested should take a look at John Bargh's recent, incredible work. I have never seen Hofstadter mention the word priming, but I feel he has provided the best computational model of it.
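For a taste of what that model looks like: in Copycat, concepts live in a network called the Slipnet, and priming falls out of activation spreading between related nodes. Below is a minimal sketch in Python; the node names, link weights, and update rule are toy assumptions of mine, not Hofstadter's actual code.

```python
# Toy spreading-activation network, loosely inspired by Copycat's Slipnet.
# Node names, weights, and the update rule are illustrative assumptions.

class Node:
    def __init__(self, name):
        self.name = name
        self.activation = 0.0
        self.neighbors = []  # list of (node, link_weight) pairs

def link(a, b, weight):
    a.neighbors.append((b, weight))
    b.neighbors.append((a, weight))

def spread(nodes, decay=0.1, rate=0.5):
    """One time step: nodes leak activation to neighbors, then decay."""
    inflow = {n: 0.0 for n in nodes}
    for n in nodes:
        for m, w in n.neighbors:
            inflow[m] += rate * w * n.activation
    for n in nodes:
        n.activation = min(1.0, (1 - decay) * n.activation + inflow[n])

doctor, nurse, bread = Node("doctor"), Node("nurse"), Node("bread")
link(doctor, nurse, 0.8)   # strongly associated concepts
link(doctor, bread, 0.05)  # barely associated concepts
doctor.activation = 1.0    # the prime

for _ in range(3):
    spread([doctor, nurse, bread])

print(nurse.activation > bread.activation)  # True: "nurse" got primed
```

Activate "doctor" and, a few time steps later, "nurse" sits at a higher activation than "bread"--which is, in miniature, what priming experiments measure as faster responses to related words.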

The next class will be dedicated to, well, understanding understanding.

Hopefully we'll also be able to post Jarbas' talk soon. The videos used in the class are embedded below, as are links to the screencasts. [Someday, who knows, maybe this course will even become a podcast in English... But where will we find the time for that one?]



ScreenCast 1 (1 species of ants, 1 food source)

ScreenCast 2 (3 species of ants, 3 food sources)

Video 1 (Priming a specific word through a cell phone)


Video 2 (Priming yes/no responses)


Video 3 (Mentally conducting an orchestra)


Video 4 (Subliminal advertising)

Wednesday, August 29, 2007

Who's the boss around here?

Remember that old joke?

"Everything has changed since our manager died."
"I can imagine. How are you guys holding up?"
"Productivity has risen 130%".
Now consider the idea of emergence.

How do ants find food? Who's the boss? Here are some screencasts from Rennard's ant foraging applet. From his website:
An ant is quite a simple animal. Its behavioral repertory is limited to ten to forty elementary behaviors. Yet, anthills are very complex. One can find nurseries, warehouses, or kitchen gardens. Some individuals forage, others take care of the eggs, repair the nest, or protect the anthill against miscellaneous threats. What is the secret? How can such mindless animals achieve such complex organization?


How do they forage, for example?

At the beginning, a number of ants are walking, more or less randomly, outside the nest. They are looking for food. All along their way, they deposit a light trail of pheromones. When an ant finds some food, it returns home, depositing a stronger trail (the intensity of the trail possibly depends on the richness of the discovered resource). Since ants have trail-following behavior, a growing number of individuals will tend to follow it and to reach the food. When they return, they reinforce the trail. Positive feedback (self-amplification) therefore occurs. More individuals reinforce the trail, attracting new individuals, who in their turn reinforce the trail... In this example, the ants don't communicate directly. Information is exchanged through modifications of the environment (here local gradients of pheromones).

At any point in time, an ant moves randomly, with a slightly higher-than-average probability of following (or continuing to follow) a pheromone trail. There are some points I'd like to emphasize (a minimal simulation sketch follows the list):
  1. Ants are myopic: they have no global vision of what's going on;
  2. Ants are selfish: they go on without asking for help or for permission;
  3. Ants communicate through modifications of their environment.
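To make the mechanism concrete, here is a deliberately tiny "double bridge" foraging model in Python--in the spirit of ant-colony-optimization toys, not a port of Rennard's applet, and with all constants chosen arbitrarily. Two branches lead from nest to food; each ant picks one with probability proportional to its pheromone, and shorter trips lay stronger trails per unit of time:

```python
import random

# Two branches from nest to food; ants choose by smell at the junction.
LENGTHS = {"short": 5, "long": 15}        # steps to cross each branch
pheromone = {"short": 1.0, "long": 1.0}   # small initial attraction
EVAPORATION, ANTS = 0.98, 500

choices = []
for _ in range(ANTS):
    total = pheromone["short"] + pheromone["long"]
    # myopic & selfish: the ant senses only the junction, asks no one
    branch = "short" if random.random() < pheromone["short"] / total else "long"
    for b in pheromone:                   # trails evaporate over time
        pheromone[b] *= EVAPORATION
    # communication through the environment: reinforce the chosen trail,
    # more strongly per trip for the shorter branch (quicker round trips)
    pheromone[branch] += 10.0 / LENGTHS[branch]
    choices.append(branch)

print(choices[-20:])   # by the end, almost every ant takes the short branch
print(pheromone)       # the short trail dominates: positive feedback at work
```

No ant sees the map and no ant gives orders; the colony's "decision" for the short branch lives in the pheromone field, not in any ant's head.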
Here are some screencasts, first, of ants foraging in a simple scenario: one food source.





In a second example, we have 3 species competing for 3 resources, with different rates of consumption for each resource. Now, what does this have to do with cognition? Subcognitive processes are like ants, and cognition is the emergent product of their tiny acts. Hofstadter uses the term codelet: a small program or procedure that acts on a small part of a representation and doesn't know or care what the others are doing. Codelets are very much like ants:
  1. They are myopic, with no global vision of what's going on;
  2. They are selfish, going on without asking for help or for permission;
  3. They communicate through modifications of their environment--in this case, a representation of the situation being faced by the cognitive agent.
So the first thing to understand about Hofstadter's cognitive architectures is this: codelets run the show. They do a small bit of work here or there, then die. They can trigger other codelets. They often bump into each other, raising "confusion" in the system's mind--and for confusion we can use the term "Temperature". Finally, since codelets act in parallel and are constantly being triggered and dying, there has to be a "place" where they wait to be executed; this is what's called the coderack--by analogy with a coatrack, from which coats are taken at random. The coderack holds a bunch of codelets prior to their execution.
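Here is a minimal coderack sketch in Python, just to fix the vocabulary. The class names, the urgency-weighting formula, and the way Temperature enters are illustrative assumptions of mine, not the actual FARG code:

```python
import random

# Minimal coderack sketch: codelets wait in the rack with an "urgency";
# one is picked stochastically, does a tiny bit of work on the shared
# workspace, may post successors, and then dies.

class Codelet:
    def __init__(self, name, urgency, action):
        self.name, self.urgency, self.action = name, urgency, action

class Coderack:
    def __init__(self):
        self.codelets = []

    def post(self, codelet):
        self.codelets.append(codelet)

    def step(self, workspace, temperature):
        if not self.codelets:
            return
        # High Temperature ("confusion") flattens urgencies, making the
        # choice more random; low Temperature makes it more deterministic.
        weights = [c.urgency ** (1.0 / max(temperature, 0.1))
                   for c in self.codelets]
        chosen = random.choices(self.codelets, weights=weights)[0]
        self.codelets.remove(chosen)
        chosen.action(workspace, self)   # run once, then die

# Two myopic codelets that only ever touch the workspace:
def scout(workspace, rack):
    workspace["bonds"] = workspace.get("bonds", 0)
    # a scout that likes what it sees posts a builder -- it asks no one
    rack.post(Codelet("builder", urgency=5.0, action=builder))

def builder(workspace, rack):
    workspace["bonds"] += 1

rack, workspace = Coderack(), {}
rack.post(Codelet("scout", urgency=1.0, action=scout))
rack.post(Codelet("scout", urgency=1.0, action=scout))
for _ in range(10):
    rack.step(workspace, temperature=0.5)
print(workspace)   # e.g. {'bonds': 2}
```

The design point to notice is that selection is stochastic: there is no central executive deciding what runs next; urgencies and Temperature merely tilt the dice.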

Our Jekyll and Hyde-ness is modeled by having these subcognitive processes cooperating with and competing against each other. As The Economist once put it, neurons are cells that like to boss other cells around. However, there is no neuron boss to be found in the brain. And there is no codelet boss to be found in the mind.

So this is the first of the ideas that Hofstadter brings to the table (and implements in a computational model, which is much harder): cognition emerges from subcognitive processes--hence "subcognition as computation", in stark contrast to Simon's view that cognition equals computation.

The other two ideas that I find fascinating--and where I think Doug's models are right on track--are priming and understanding.

--
Note #1. Ponder Stibbons, a fantastic blogger, asks whether "subcognitive imply subconscious (or other way around)?" Tricky, tricky issue; and I hope to reply to her soon.

Note #2. Screencasts made with the amazing Jing project.

Wonderful, wonderful, tech world!

Let's just bridge the digital divide.

Monday, August 27, 2007

The strange case of Dr Jekyll and Mr Hyde

This Thursday I have a class about Hofstadter's cognitive theories, and, since I'm so deeply buried in that work, it's hard to know where to start. So here I'm thinking: why do I feel it has so much promise? Why do Ariston and others immediately agree with me that, somehow, that's "exactly" how our minds work? Why do I feel it is the best route towards the "cortical" algorithm? Why am I so impressed with this work?

~~~~

My good friend Eric Nichols and I are operating under the rules of almighty Alli. When you're under Alli, you know better than to feast in places such as these. A part of you wants to live a normal life and eat like everyone else; another says "remember Dostoyevsky; don't touch that poison". As Schelling would say, Alli helps to "enforce rules on oneself".

~~~~

I once went skydiving in Chicago. Awesome experience. It's all very fun at the beginning: pages and pages of legal papers signing away any and all of your rights, and, when the plane goes up, everyone is all smiles.

Not that the feeling will last for long.

When the door opens up, the smiles are gone. The chilling wind, and god!, the people disappearing at the plane's door, really make your heart beat--but not in a happy way. A part of you has planned to do this, has driven for almost an hour from downtown, has paid good money for it, and "hopes" that you'll jump. Another part of you, however, doesn't agree. This other part is screaming out: "are you f***ing out of your mind?", "You're gonna jump from a plane at 10,000 feet?", "No way I'll let you do it!"

Thankfully, you gave up your rights beforehand. All the guys have to do now is throw you from the plane. Even if you become a human-flavored pizza 10,000 feet below, that will be no problem--for them, at least. So they throw you from the plane.

Let me tell you, that's what they do.


~~~~


In this blog I have mentioned the case of the would-be cigarette quitter before, and I have yet to apply Schelling's strategic commitment to my presence in The Club of Rome. The idea that, by reducing one's options, one can actually enforce behaviors decided on rationally beforehand is not only an intriguing part of human nature, but also has important implications for economics and strategic behavior.

But let me focus on the psychology here. It certainly feels to me that I have these subcognitive urges competing among themselves. What are these processes like? Selfish and myopic--they do not care at all about the big picture. They do not care that there may be other urges fooling around, saying "no!", or saying "go!" In your brain, each of them selfishly seeks all your attention. And they ignore any and all long-term consequences.

When "you" are halfway awake and halfway asleep, if "you" open your eyes for a brief instant, to close them for "just a little minute", only to wake up hours after "you" had originally desired, let me ask: who are the real you? The one that "briefly, but consciously" had closed the eyes, or are you the one that woke up only to regret those lost hours? The answer, strange as it may seem, is: you are "both", and you are "neither".

"You" are "both", in the sense that what you do, think, and desire, is a fruit of the fight of all of these subcognitive processes--the competition between those that say "awake now" and those saying "just a sec". At the same time, "you" are "neither" of those individual processes--as neither your behavior nor your thoughts can be traced to any single one of them. Like the strange case of Dr Jekyll and Mr Hyde, "you" are not one nor the other, "you" are the emergent product of many.

This is, in my opinion, the first groundbreaking proposal from Hofstadter: the mind is the emergent product of a number of subcognitive processes.

(Not that Doug originated the idea, but, as Margaret Boden has said in her history of Cognitive Science, in GEB he paved the way so completely that it would finally have to be taken seriously. The other two ideas where he has gone way beyond others, in my opinion, are his models of priming and understanding.)

The Copycat Project: some screencasts

Here are three screencasts from Scott Bolland's Java implementation of The Copycat Project. Bolland also has a good Copycat Tutorial online.

ABC -> ABD : IJK -> ?

ABC -> ABD : IIIJJJKKK -> ?

ABC -> ABD : XYZ -> ?
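To appreciate what Copycat is up against, consider the most literal-minded solver one could write for these three puzzles--a toy of mine, not Copycat itself. It reads ABC -> ABD as the rigid rule "replace the last letter with its alphabetic successor":

```python
# Apply the rigid rule "replace the last letter with its alphabetic
# successor" literally to each target string.

def successor(ch):
    if ch == "z":
        raise ValueError("'z' has no successor")
    return chr(ord(ch) + 1)

def literal_rule(target):
    return target[:-1] + successor(target[-1])

for target in ["ijk", "iiijjjkkk", "xyz"]:
    try:
        print(target, "->", literal_rule(target))
    except ValueError as err:
        print(target, "-> snag:", err)
```

The rigid rule happens to agree with Copycat's favorite answer on IJK (ijl), misses the prettier iiijjjlll on IIIJJJKKK, and hits a snag on XYZ--where Copycat can fluidly "slip" concepts (successor to predecessor, last to first) and answer wyz.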

Sunday, August 26, 2007

Additional info on class #1

Here's some additional information concerning class #1 of our PhD Seminar.

Websites mentioned...

Geni (valuation... around US$100 Million)
Facebook (valuation... around US$7 Billion?)

Jeff Hawkins at TED


Here's Hawkins at the Emerging Technologies Conference, MIT


And Hawkins at UC Berkeley

Thursday, August 23, 2007

A declaration of war!

Mr Hawkins, can we be your worst enemies, please?

Wednesday, August 22, 2007

How many daddies do I have?

Imagine the problem facing a newborn baby. The baby has a strong bond with the mother, and is, from very early on, used to her presence, touch, voice, smell, and body.

But not the father.

Each day, "daddy" appears in a new, very different form. One day he's bearded and smelling like the beer he had 20 minutes ago. Another day he's clean-shaved and with a completely distinct smell. His voice changes from day to day, from loud singing and playing and laughing, to a "normal tone", to low whispers. Now, the task for a baby--or better, for a baby's brain, is to answer "how many daddies do I have"?

Let me rephrase: given the immense variety of incoming stimuli, how can a baby find out that this whole shifting mass of information originates from the same stable thing in the cosmos named "daddy"?

Welcome to the representation invariance problem.

Every single moment of your life you are bombarded with a huge amount of information. Your eyes alone carry enough bandwidth to account for an enormous number of simultaneous phone calls: about a million nerve cells feed visual information to your brain, while some 30 thousand provide the input for "hearing". And here's the thing: once these initial cells send their signals into the brain, everything changes. The image your eyes receive rapidly ceases to be an image inside the brain and becomes a pattern of neural firings so complex that no neuroscientist has mapped it in detail. After a few processing steps, the pattern of firings can no longer be correlated with the originally projected image--your brain is working on something entirely different: a representation, a guess of what's out there in the world.

The brain--or better, the mind--sees the world. But it is dark, completely, absolutely, dark, inside your brain. The representation does not feel dark, but it is made in utter darkness.

Seeing is creating an interpretation of what's in the world. What kind of stable things out there could be generating this particular input image? This is what the brain does: under complete darkness, a pattern of firings becomes a lively mental image. Maybe it's "daddy", maybe it's "mommy", maybe it's a toy, maybe something else entirely.

And the original image is so huge, so immense, that in all probability you will never, in your whole lifetime, see the exact same image projected onto the exact same retinal cells. So your brain receives a gigantic flux of constantly changing information, and it has to provide, for your understanding of the world, a representation of what is out there. The staggering thing is: this representation does not vary easily. It is an invariant representation. You see the same person under a new light, or from a new angle, and you never think that it's suddenly another person--for the representation is invariant. Now here's a quote from "On Intelligence":

The problem of understanding how your cortex forms invariant representations remains one of the biggest mysteries in all of science. How difficult, you ask? So much that no one, not even using the most powerful computers in the world, has been able to solve it. And it isn't for lack of trying. (Hawkins, 2004, p.78)
I think Mr Hawkins isn't accurate on this one. We do have a solution, "available, today"--but very few people know about it. This is the ace up our sleeve; our upper hand in the race to find the organizing principles of cognitive technology.
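And just to see, in miniature, why invariance is so hard, here's a toy illustration in Python (the numbers are arbitrary toy data of mine, not Hawkins's): shift the "same thing" by one retinal cell and every raw input changes; a summary that ignores position is trivially invariant, but then it can no longer tell daddy from a scramble. The brain's trick is an invariance that preserves both identity and structure:

```python
# A "retinal image" of daddy on 8 cells, and the same daddy shifted one
# cell to the right (wrapping around). The numbers are arbitrary toy data.
image_a = [1, 3, 5, 7, 2, 4, 6, 8]
image_b = image_a[-1:] + image_a[:-1]        # circular shift by one cell

changed = sum(a != b for a, b in zip(image_a, image_b))
print(f"{changed} of {len(image_a)} raw inputs changed")   # 8 of 8

# A trivially shift-invariant summary: which intensities occur, ignoring where.
print(sorted(image_a) == sorted(image_b))    # True: the summary is stable

# But it is *too* invariant: a scrambled non-daddy gives the same summary.
scrambled = [8, 6, 4, 2, 7, 5, 3, 1]
print(sorted(scrambled) == sorted(image_a))  # True as well -- identity lost
```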

Tuesday, August 21, 2007

So close, yet so far...

Have you ever solved a Rubik's cube? Some 5 or 6 moves before arriving at the solution, it still looks like a mess.

I think--though perhaps this is a thought shared by all cognitive scientists--that we're some 5 or 6 steps away from solving the big problem in Cognitive Science. Like the Rubik's cube, it all looks like a mess now, even though we have huge amounts of knowledge in the neurosciences, linguistics, computer science, philosophy, and psychology. What's still missing are central organizing principles that can cogently explain how humans use language, imagine impossible events (or anything beyond one's light cone), play chess or go, have both low-level vision and high-level conceptual imagery, or even conceive of something like continental drift or a light cone in the first place. How can a brain do so much that's so varied?

A term that many brain scientists like to use is "functional areas", to demarcate that here goes vision, there goes grammar, in that other area we find touch, and so on. This perspective, perhaps, has been an obstacle to progress. Other obstacles may have been the view that the brain is a computer, the empirical dependence on static patterns, the separation between low-level perception and high-level conceptual research, the encapsulation of analogy as a tangential issue, and the lack of attention to feedback mechanisms. These are the topics of the PhD seminar starting this week. For some of them we have, I think, a satisfactory solution--as Steve Jobs would say, "available, today".

But for others we have only glimpses and glances of what the solutions look like. If the scientific challenges are met, the technological and business opportunities will be huge. I see the scene today as like the pre-TCP/IP days, in which a bunch of guys were daydreaming about a network in which the physical medium would not matter at all, and any point in the network--any point--could be disrupted without the network being affected. A basic breakthrough like TCP/IP brings forth hordes of technological advances, and this is the role Cognitive Science may play in the coming decades.

The operating system has been the major platform in the computer industry, at least up until Sun came up with its vision of (pre-web2.0) network ideas. Now we're shifting to a "new operating system", based not on multi-threading or task scheduling or windowing, but on an architecture of community interaction: interoperability between people, computer systems, and devices (like your cell phone). Blah blah blah; this is history, right? But what's about to come may be even bigger. Now that we have the whole world connected and hordes of information available, if a true cognitive technology comes forth, huge sweeping changes will come with it.

I feel that we're in the pre-TCP/IP era of cognitive technology. The net, of course, turbocharges cognitive technology, just as it turbocharges globalization and many other things. But if we turn this Rubik's cube the right way for a few more moves, we may just stumble onto something really groundbreaking. Discover the missing gaps in our understanding of the mind, and you'll have the ultimate operating system in your hands. I'm betting my career on it, and I plan to do it in all ways: by teaching and writing scientific papers; by programming and designing and striving to see both the minute technical details and the big picture; by discussing, within the Club of Rome's forums, the importance and magnitude of what's happening; by building other, perhaps for-profit, organizations to develop these models; by declaring "friendly war" on Mr. Hawkins; and by having entrepreneurs randomly jumping into my classes to see what they're all about.

I probably won't touch these untold riches; it probably won't be me who turns the Rubik's cube the right way; and maybe I won't even live to see it happen. But in any case I'll be glad to help those with enough energy and intelligence to jump into one of the deepest, most world-altering questions science and technology have ever faced.

Thursday, August 16, 2007

Quick link

Today I'm flying to the Amazon for a few days' work. As I prepare for the trip, The Economist is putting out a "correspondent's diary" about the Amazon. Like all of their "diaries", it's enormously rich--and entertaining.

Sunday, August 12, 2007

Women, men, and the teaching of horrible classes

I had an epiphany the other day. Sometimes I teach about horrible, horrible stuff, such as the details of WWII, or where the nuclear arms race has led us, or babies suffering from pneumopericardium. One thing that has always amazed me is that, when the topic is something like the Second World War, women and men respond differently. Men tend to be much more interested and to think--or at least to react as if they thought--"now this class is going somewhere. This is something real, something important." Women, on the other hand, look at WWII's details and respond, sometimes verbally: "That is so horrible! Can we move on to the next topic or to another example, please?"

Here comes my epiphany: the behavior switches exactly when the horrible topic is a baby turning blue in a neonatal intensive care unit. The baby turns blue and is about to die within minutes. It is a real story with documented evidence, so it's not popcorn-and-movie talk. But now the women are paying attention with wide eyes and asking questions, while the men are clearly either not paying attention at all or missing the point entirely.

Or going:

"That's just horrible, horrible stuff! Don't you have anything better to say? Move on, dude."

Saturday, August 11, 2007

Course info

PhD Course: Computational Modelling of Human Intuition

ALL INFO ABOUT THE COURSE WILL BE AVAILABLE AT http://groups.google.com/group/FARG. Please subscribe to follow the latest info.

Course Schedule

If you use an electronic calendar (Outlook, Apple's Calendar, Google Calendar, etc.), you may want to subscribe to the course calendar:

OR view the course calendar right here in this page:



Thursday, August 9, 2007

Hiring is obsolete

As I write this I'm thinking of a bright student who left my research group.

Jair Koiller, a mathematician with the sharpest, sharpest mind, mentioned the other day that "so many students just want to wear a suit and get a job downtown". So true, so sad, and so anachronistic. The world has changed, and is changing, so much, so rapidly, and people just don't seem to see it. Perhaps I'm unable to convince people, but this is an excellent, excellent article from Paul Graham. Here are the opening paragraphs:

The three big powers on the Internet now are Yahoo, Google, and Microsoft. Average age of their founders: 24. So it is pretty well established now that grad students can start successful companies. And if grad students can do it, why not undergrads?

Like everything else in technology, the cost of starting a startup has decreased dramatically. Now it's so low that it has disappeared into the noise. The main cost of starting a Web-based startup is food and rent. Which means it doesn't cost much more to start a company than to be a total slacker. You can probably start a startup on ten thousand dollars of seed funding, if you're prepared to live on ramen.

The less it costs to start a company, the less you need the permission of investors to do it. So a lot of people will be able to start companies now who never could have before.

The most interesting subset may be those in their early twenties. I'm not so excited about founders who have everything investors want except intelligence, or everything except energy. The most promising group to be liberated by the new, lower threshold are those who have everything investors want except experience.


Read it in full; you won't regret it.