Saturday, June 30, 2007

On the asymmetry of associations

Associations are not symmetrical. If A primes B, it does not follow that B primes A. Some examples:

"Polar bear" primes "white" to a higher extent than "white" primes "polar bear".

"Niels Bohr" primes "physics" to a higher extent than "physics" primes "Niels Bohr".

"Monica Lewinsky" primes "impeachment" to a higher extent than "impeachment" primes "Monica Lewinsky".

222 primes 2 to a higher extent than 2 primes 222 (if it does, actually).

My favorite example comes from research on concepts showing that people tend to think that numbers such as 22846221 are "more even" than numbers such as 13. It's crazy but natural: those numbers prime "even-ness" much more than 13 could ever aspire to.

With that in mind, we may proceed to develop NUMBO's slipnet.

Brainstorming over NUMBO's slipnet

Ok, now that nodes, links & activations seem to be working, perhaps the moment of truth hath arriveth. Maybe it's time to build NUMBO's slipnet. What should it contain? Here's what Scott Boland uses in his thesis (from p. 105):

"there are three main types of concepts in memory: Numbers (salient numbers between 1 and 150), addition_instances and multiplication_instances. Addition and subtraction [NOTE: I think he means multiplication here] instance nodes are connected to numbers through three types of links, op1, op2, and result, representing the two initial numbers that are combined and the subsequent result. Example facts stored in the network include facts that 1+2=3, 2+3=5, 2*2=4 and 3*4=12. The current implementation has a collection of 45 salient numbers, 30 addition facts, and 32 multiplication facts stored explicitly in the network, although this number can be readily extended." (p. 105)
So, brainstorming a little, we get...

  • numbers: 150+ nodes (from 0?, 1, 2, 3, ..., 150)
  • multiplication instances: 32
  • addition instances: 30

  • salient numbers: 45 (I'm thinking the 12 squares, plus the 15 multiples of 10, and god Boland knows what else)
  • addition links: 30*3 = 90 (op1, op2, result)
  • multiplication links: 32*3 = 96
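For concreteness, here's a back-of-the-envelope sketch of those counts, in Python for brevity (the particular choices of salient numbers and arithmetic facts below are my own guesses, not Boland's actual lists):

```python
# A rough sketch of NUMBO's slipnet contents, following Boland's scheme:
# salient numbers, addition/multiplication instance nodes, and the
# op1/op2/result links tying each instance node to its numbers.

def build_slipnet():
    # Guess at the salient numbers: 1..20, the squares up to 144,
    # and the multiples of 10 up to 150.
    salient = sorted(set(range(1, 21))
                     | {n * n for n in range(1, 13)}
                     | {10 * n for n in range(1, 16)})
    # Illustrative small fact tables (the real network has 30 and 32).
    additions = [(a, b, a + b) for a in range(1, 6) for b in range(a, 7)]
    multiplications = [(a, b, a * b) for a in range(2, 8) for b in range(a, 9)]
    links = []
    for kind, facts in (("add", additions), ("mul", multiplications)):
        for op1, op2, result in facts:
            instance = (kind, op1, op2, result)
            links += [(instance, "op1", op1),
                      (instance, "op2", op2),
                      (instance, "result", result)]
    return salient, additions, multiplications, links

salient, adds, muls, links = build_slipnet()
# Every instance node contributes exactly 3 links (op1, op2, result).
assert len(links) == 3 * (len(adds) + len(muls))
```

The point of the exercise is just the bookkeeping: each instance node brings exactly three links, which is where the 30*3 and 32*3 come from.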

Perhaps in the future we may want to include nodes for mathematical operations, such as (n1+n2=n3) linking to things such as (n2=n3-n1), and so on. But not at this stage. Let's stick to NUMBO for now.

Nodes and Links, decoupled

One constraint in this problem was to use the observer pattern, so that activation spreading is easy to implement and new associations can be created during a run. (Though this is something that good old FARG architectures such as Copycat, Tabletop, or Metacat don't do, it should be built in, for the sake of learning.) The other constraint was that the system should be decoupled: if class A uses class B, then class B cannot use class A in any way.

The first implementation of activation spreading had nodes as observers and nodes as observable subjects. Whenever a node was spreading activation, it simply notified its registered observers. So if you wanted to add functionality by bringing in a link class with different types of links, different distances, and so on, that class would depend on the node class. The node class, on the other hand, would also depend on the link class, since whenever activation spreading took place, information about the link type, link distance, etc., would have to be brought in. That certainly wouldn't respect the Hollywood principle.

A better design is to decouple by making TLink (the link class) the observer of TNode. Whenever a TNode notifies changes in activation, instead of other nodes receiving these messages, the links receive them, and each link holds a TNode as its "destination" node. So now, to create a link, the main program has to create two nodes, create a link with the destination node, and register that link in the origin node. The design is decoupled, in the sense that any node can have associations assigned and deleted during runtime. Moreover, to change node functionality, the class TNode is changed (or subclassed), and to change link functionality, TLink is changed, with neither class depending on the other's implementation details.

So here's what the new design looks like:

TActivationObserverClass = class (TInterfacedObject, IObserverActivation)
  {Observer interface here}
  procedure Update(Received_Activation: TActivation); virtual; abstract;
end;

TNode = class (TInterfacedObject, ISubjectActivation)
  associations: TList;
  previous_level: real;
  activation: TActivation;
  constructor Create;

  {Observable (subject) interface here}
  procedure RegisterObserver(const Observer: TObject);
  procedure UnRegisterObserver(const Observer: TObject);
  procedure Notify(Sent_Activation: TActivation);
end;

TLink = class (TActivationObserverClass)
  link_drag: real;
  Dest_Node: TNode;

  constructor Create(destination: TNode; drag: real);

  {Observer interface here}
  procedure Update(Received_Activation: TActivation); override;
end;
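The same wiring can be sketched in Python (the names mirror the Delphi classes above, but the spreading arithmetic is a placeholder of my own): the main program creates two nodes, creates a link pointing at the destination, and registers that link as an observer of the origin.

```python
# Sketch of the decoupled design: links observe nodes. A node only
# knows it has observers; only TLink knows about drag and about the
# destination node.

class TNode:
    def __init__(self):
        self.activation = 0.0
        self.observers = []          # links registered at runtime

    def register_observer(self, link):
        self.observers.append(link)

    def unregister_observer(self, link):
        self.observers.remove(link)

    def notify(self, sent_activation):
        for link in self.observers:
            link.update(sent_activation)

class TLink:
    def __init__(self, destination, drag):
        self.dest_node = destination
        self.link_drag = drag        # how much the link attenuates

    def update(self, received_activation):
        # The link, not the node, decides how activation crosses it.
        self.dest_node.activation += received_activation * (1 - self.link_drag)

# Wiring: two nodes, a link to the destination, registered at the origin.
origin, dest = TNode(), TNode()
link = TLink(dest, drag=0.4)
origin.register_observer(link)
origin.notify(1.0)
```

The point is that TNode runs with no knowledge of TLink's internals: it only calls update on whatever registered itself, so link behavior can change freely.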

Links and Nodes: here's a coupling problem

Here's a coupling problem. Consider a class TLINK, which should, of course, link two nodes, providing an association or connotation between them. It should obviously have something like:

TLINK.Create_Link (Node1, Node2, Link_Type, Link_Distance, etc.)

but here's a problem I'm currently brainstorming. This class uses the node class, right? Ok. Now, the node class also uses this (link) one, because, when spreading activation (through observer-pattern behavior: nodes observe other nodes' activation levels), one needs to know at least the link_distance. Whether or not link_distance is constant throughout a run, it is needed in the activation-spreading function. So we end up with the following situation:

Class A (Links) needs objects from Class B (Nodes)


Class B (Nodes) needs info from Class A (Link_Distances) in order to function.

It would be just fine to strongly couple them for now. But this is a class library, and in the future, when someone wants to create an entirely new application that can't be foreseen now, strong coupling just won't work.

Remember the Hollywood principle: "Don't call us, we'll call you".

So I'm brainstorming on this one for now...

Friday, June 29, 2007

More on the TACTIVATION class

Here's an update on the TACTIVATION class, and what it looks like at this stage.

TActivation = class
  number_discrete_steps: integer;
  current_state, level, increment: real;
  procedure Recompute_Level;

  constructor Create(steps: integer);
  procedure increase(steps: integer);
  function GetNumSteps: integer;
  function Get_Level: real;
  function Get_CurrentState: integer;
  function Get_Increment: real;
  procedure Reset_Increment;
  procedure decay;
end;

Only two methods are actually longer than 3 lines: Recompute_Level and increase. Recompute_Level is private and computes the sigmoid as a function of current_state. I'll talk a little about increase here, because it's also used for spreading activation. I'll be uploading the source to the Google group. Here are descriptions of the public functions:

Constructor Create(steps: integer);
Gets the stage ready.

Function GetNumSteps: integer;
Function Get_Level:real;
Function Get_CurrentState:integer;
Function Get_Increment: real;
These are "Getters", so that nobody should mess around with the data directly.

Procedure Reset_Increment;
Sets the variable increment to zero.

procedure decay;

This one is obvious: decay (currently) simply brings current_state one step down and recomputes the activation function.

One thing that's not really needed now, but for the future: both decay and Recompute_Level should eventually be generalized via the strategy pattern. After all, Copycat implemented one activation function, Tabletop another, Phaeaco yet another, and, while I do believe there must be a single one that best fits psychological plausibility, the only way I can envision that discovery is to have modularized versions compared against each other.

That's not a concern for now, but someday in the near future it really should be done.
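For the record, here's what that strategy-pattern version might look like, sketched in Python (the particular curves below are stand-ins of my own, not the real Copycat/Tabletop/Phaeaco functions):

```python
import math

# Strategy-pattern sketch: the activation curve is a pluggable object,
# so different FARG activation functions can be swapped and compared.

class SigmoidStrategy:
    def level(self, state, max_steps):
        # Logistic curve centered at the middle of the discrete range.
        return 1.0 / (1.0 + math.exp(-10.0 * (state / max_steps - 0.5)))

class LinearStrategy:
    def level(self, state, max_steps):
        return state / max_steps

class TActivation:
    def __init__(self, steps, strategy):
        self.max_steps = steps
        self.current_state = 0
        self.strategy = strategy     # injected, not hardcoded

    def increase(self, steps):
        self.current_state = min(self.current_state + steps, self.max_steps)

    def decay(self):
        self.current_state = max(self.current_state - 1, 0)

    def get_level(self):
        return self.strategy.level(self.current_state, self.max_steps)

# Same discrete state, two different curves, ready to be compared.
a = TActivation(100, SigmoidStrategy())
b = TActivation(100, LinearStrategy())
a.increase(50); b.increase(50)
```

Swapping the activation function then becomes a one-line change in the constructor call, with TActivation itself untouched.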

procedure increase (steps: integer); {excerpt; elided bodies marked "..."}
begin
  if (increment < (steps/number_discrete_steps)) then
    ...
  if (Increment > 0) then
    ...
  if (Current_State > 1) then Current_State := 1;
end;

The objective of this method is to increase activation as a function of the number of steps (passed as a discrete parameter). It does have that weird "increment" variable thrown in, though. That's for activation spreading to other nodes, more on this later on.

So this method will be called by some codelets that want to increase activation of something; and also by nodes spreading activation to a new node. (NOTE that TACTIVATION is not a node, it is just a property of a node, we'll get to TNODES soon). Our implementation of activation is different from Copycat and from Tabletop, but maintains the same general spirit.

If this method is called by some codelet for the first time, then increment will be just the number of steps passed to it, divided by the maximum number possible (remember Harry's work: "steps" here on the x axis are discrete, and we're using 100 maximum steps). However, if the method has already received some increment from a node or codelet or something, it will add that increment to the incoming activation, in order to "correctly" spread activation to forthcoming nodes.

You might ask (that is, of course, if you are geek enough to be interested in these types of things): "why not simply update with steps and be done with it? Why use increment? And increment just keeps on adding up and up and up; when should it go down?"

First, why use increment instead of steps? Increment's real contribution is to boost activation when it comes from multiple sources; it is then and there that increment is way better than plain steps. Next: when should the bloody thing go down? It goes down (to zero) whenever Reset_Increment is called.
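A toy Python sketch of the difference (the numbers and the update rule are illustrative, not the actual implementation): with increment, two sources arriving in the same spreading round compound before the node spreads onward.

```python
# Toy sketch: "increment" accumulates activation received from several
# sources in one spreading round, and is zeroed once the owning node
# has finished spreading onward (Reset_Increment).

class Activation:
    def __init__(self, number_discrete_steps=100):
        self.n = number_discrete_steps
        self.current_state = 0
        self.increment = 0.0

    def increase(self, steps):
        # Each caller adds its contribution; multiple sources pile up.
        self.increment += steps / self.n
        self.current_state = min(self.current_state + steps, self.n)

    def reset_increment(self):
        # Called when the owning node finishes spreading activation.
        self.increment = 0.0

a = Activation()
a.increase(10)        # from a codelet
a.increase(10)        # from a neighboring node, same round
# Two sources compound: the outgoing increment reflects both.
assert abs(a.increment - 0.2) < 1e-12
a.reset_increment()   # node finished spreading; wait for a new round
```

With plain steps, each arrival would be handled in isolation and the "multiple sources" effect would be lost.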

Ask not when Reset_Increment is called; ask who can call it. Here we go into a basic philosophical principle of software design: the Hollywood principle.

The Hollywood principle: don't call us, we'll call you!

Just like in Hollywood, you have dreams of becoming a movie star, only to hear, in studio after studio, "don't call us, we'll call you". The basic idea behind the Hollywood principle in OO design is that you shouldn't have class A calling class B calling class C calling class A, and so forth. Why is that? Well, whenever you need to understand how the whole thing fits together in order to change something, this intermingling of classes will drive you nuts, especially in large, complex systems. So the idea is that a class like TACTIVATION should have Reset_Increment called whenever a node X that's spreading activation to its neighbors finishes that job. Then and there increment goes to zero, and a new round of activation must be received before the node spreads again.

More on nodes & spreading activation soon.

Wednesday, June 20, 2007

Connotation explosions and activation spreading

Concepts never get activated alone. An active concept always calls up its associations, or connotations. I have written a little about connotation explosions, in the Eliza effect and in the ultimatum game. These explosions come from the psychological mechanism of activation spreading: active symbols spread activation to their neighboring nodes.

Think about a computer adding some number to "2". Now consider this fragment from Gödel, Escher, Bach (p. 677-678):

"a machine that can pass the Turing test may well add as slowly as you or I do, and for similar reasons. It will represent the number 2 not just by the two bits "10", but as a full-fledged concept the way we do, replete with associations such as its homonyms 'too' and 'to', to the words "couple" and "deuce", a host of mental images such as dots on dominos, the shape of the numeral '2', the notions of alternations, evenness, oddness, and on and on... with all this extra baggage to carry around, an intelligent program will become quite slothful in its adding."
I can still remember reading this, around 10 years ago, in a restaurant in SJC, before going to the space research institute for a PhD seminar or something. I remember so vividly thinking about how brilliant this idea was, and how far from current technology it was (or seemed to be, judging from the mainstream scientific journals; it would take me some months to find out that Doug's team was already working on it, and in fact had already developed Numbo, with some of this functionality). It may not seem a grand idea, but to me it felt like an enormously radical departure from what was going on in the mainstream.

So how should concepts spread activation? What should the classes look like? Here's a draft, which perhaps most FARG designs have used, in some form or other:

TNode = class
  activation: TActivation;
  associations: collection of TNodes;
  procedure spread_activation;
end;

Collection could be an array, a list, or anything else. This seems a natural way to go, with a node having a spread_activation method that goes through the associated nodes. But if the collection is hardcoded in the source of the node, the design is faulty: if you (or the system) add a new node to the conceptual network, you have to change the code of all the nodes that should spread activation to it. Why should those nodes care at all about a new node? If we're designing next-generation technology able to construct fluid concepts on the spot, it must have the flexibility to do so without any intervention. I'm about to suggest that the best design here, for many reasons, is to use the observer pattern.
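A minimal Python sketch of the contrast (the class and method names here are mine, for illustration): with runtime registration, growing the network is a subscription, and no existing node's source code changes.

```python
# Sketch: with associations held in a runtime collection, adding a node
# to the conceptual network is a subscription -- no node's code changes.

class Node:
    def __init__(self, name):
        self.name = name
        self.activation = 0.0
        self.associations = []       # filled at runtime, not hardcoded

    def add_association(self, other):
        self.associations.append(other)

    def spread_activation(self, amount):
        for neighbor in self.associations:
            neighbor.activation += amount

elephant = Node("elephant")
trunk = Node("trunk")
elephant.add_association(trunk)

# Later, the system itself grows the network on the spot:
ivory = Node("ivory")
elephant.add_association(ivory)      # no edits to Node's source

elephant.spread_activation(0.5)
```

The observer pattern pushes this one step further, by letting the notified party (rather than the spreading node) decide what to do with the activation.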

A discussion group for the evolving Fluid Concepts framework



Congratulations: you've successfully created your Google Group, Fluid Concepts Framework. Here are the essentials:

* Group name: Fluid Concepts Framework
* Group home page:
* Group email address

And here are links to a few more Google Group-related goodies:

* Change group settings:
* Invite more users:

If you have questions about this or any other group, please visit the Google Groups Help Center at

Enjoy your group and make us proud!

The Google Groups Team

Don't think of an elephant: Activating symbols

George Lakoff says "Don't think of an elephant". Can you? Of course not; not anymore. The moment we're primed by a word, there's no turning back. It activates the imagery of a concept in all its fullness. This automatic, involuntary activation is a basic psychological mechanism at the crux of human intuition. As it turns out, it's also the centerpiece of Hofstadter's architectures.

The Republicans, Lakoff argues, are masters at exploiting this phenomenon. As a FOX NEWS host will tell liberals: "How can you be against tax relief? I don't understand how any sane person in America can be against tax relief." What biases the debate here, of course, is the activation of the word relief--it stands in stark contrast to, for example, a "tax cut".

Yesterday Christian convinced me that we should start work on designing and programming the thing; "it will help us to think". So that's what this and the upcoming posts are all about.

So how should these activations be modeled? I'm following Harry Foundalis's award-winning thesis here (see, for instance, pp. 156-163). Harry suggests that a sigmoid function is the best option for the case. On the x axis, "current_state", we have a discrete number of steps. Concepts can, by propagating activation, increase the current_state of any node. Time will take care of decaying all this exciting activation. So here's Harry's proposal:

The advantage of using a sigmoid is that, at low levels of activation, the system should act "prudently", trying to maintain the concept node active; or at least not letting it decay too quickly. At the other end of the scale, when activation is high, it should remain high for some time: the system acts "conservatively", trying to find out what the current thought is all about. Why on earth is "elephant" active at this point?
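Here's a quick numerical illustration of that behavior, in Python (the particular logistic curve and its slope are my own choices; Harry's thesis has the exact function):

```python
import math

# Near both extremes a sigmoid is nearly flat, so a single decay step
# barely moves the level: low activations linger ("prudence"), high
# activations persist ("conservatism"). Change is fastest mid-range.

def level(state, max_steps=100, slope=10.0):
    # Illustrative logistic curve over the discrete current_state axis.
    return 1.0 / (1.0 + math.exp(-slope * (state / max_steps - 0.5)))

def one_step_drop(state):
    # How much the activation level falls when decay moves the
    # current_state one step down.
    return level(state) - level(state - 1)

# One decay step costs much less at the extremes than at the middle:
assert one_step_drop(95) < one_step_drop(50)
assert one_step_drop(10) < one_step_drop(50)
```

With a linear curve, by contrast, every decay step would cost exactly the same, and neither the "prudent" nor the "conservative" behavior would emerge.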

This is the start of my framework: A class to handle activations. For the mathematics involved, here's an online spreadsheet with the function and a graph. The class architecture, implemented in Delphi, follows:

TActivation = class
  number_discrete_steps: integer;
  current_state, level: real;
  last_update: TDateTime;
  std_wait_time: real;
  procedure Recompute_Level;

  constructor Create(steps: integer);
  procedure increase(steps: integer);
  function Get_Level: real;
  procedure decay;
end;

Most of the variables should be trivial. I'm using around 100 states (but the class is flexible). std_wait_time is multiplied by the actual activation level (obtained by calling Get_Level) before any decay, producing some interesting behavior: active concepts stay around for quite some time, disproportionately to their activation level.

Of course, this class follows the basic principle of being (i) closed for modification and (ii) open for extension. Closed for modification: unless a bug is found, no line of code should be changed here. But say, for instance, that someone wants to test another function (instead of the sigmoid): it's just a question of overriding Recompute_Level. To modify the decay behavior, just override decay. So nothing that's done, tested, and running should ever be changed. That simple.

I wonder whether Eric, Abhijit and Francisco are also using the sigmoid.

Tuesday, June 19, 2007

Goodbye world

A large pizza (avec the magical blue pill)? Check.
Industrial quantities of diet coke? Check.
All phones off the wires? Check.
Hôtel Costes on the Stereo? Check.
Enthusiasm of a foolish teenager? Check.
Books lying all around to help you develop version 1.0 of the slipnet? Check.
The whole night still ahead to finish the job? Check.

Goodbye world.

At least for now.

Tomorrow's gonna be beautiful.

Thursday, June 14, 2007

Design Patterns, refactoring, and a FARG class library

This post is the first of a series intended to develop basic technology for FARG architectures. It will be part of a PhD seminar held at FGV (more info below; sorry, it's in Portuguese). There are two objectives involved in this course: first, to create a running application that models human intuitive information processing during simple tasks. This model should directly contradict an important assumption of the current literature in Cognitive Science.

The second objective is to develop basic technology: a class library enabling different architectures to approach different problems. While we do have megalomaniac ambitions for future web applications, let's leave those aside for the time being and use the blog to brainstorm how FARG should be implemented, in an elegant, natural way, following the open-closed principle: a library is open for extension, and closed for modification.

For those involved in the course, the tag will be fluid concepts, so all posts can be obtained here.

And here's the outline of the course.

Computational cognitive modeling of human intuition

In 1998, Ron Suki King, a world checkers champion, played simultaneously against 385 opponents. A simple calculation shows that, if King spent on average 2 seconds per move, each of his opponents had, on average, 12 minutes and 30 seconds to respond. King beat all of his opponents. Even with this enormous time advantage, none of King's opponents managed to defeat King's immediate, intuitive responses. Human intuition is capable of incredibly precise judgments in mere fractions of a second.

In June 2001, President George W. Bush met Vladimir Putin. On that occasion, Bush declared that he had "looked the man in the eye... and was able to get a sense of his soul" [*]. Bush concluded, and publicly stated, that Putin was a "trustworthy man". Putin, however, was notorious as a former KGB spy, and had been rigorously trained, throughout his entire career, to lie and to hide his real intentions--ironically, from Americans in particular. Bush's judgment was erroneous, and the relationship between the two countries deteriorated sharply over the years. Human intuition is also capable of leading to erroneous judgments and decisions.

How do we form intuitive judgments? How do these judgments interact with human intuition, and affect our decisions, at the most diverse levels?

In this course we will explore these two questions through computational cognitive modeling. The objective of the course is to develop computational models, in JAVA (or, alternatively, in OOPascal), that explain intuitive biases and capabilities. The final objective is to generate scientific models publishable in international journals such as Management Science, Decision Sciences, or Judgment & Decision-Making. The key references for the course are:

[1] Kahneman, D (2003) Maps of bounded rationality. Nobel Lecture; published in American Economic Review, and in American Psychologist.

[2] Hofstadter, D (1995) Fluid Concepts and Creative Analogies, Basic Books, New York.

[3] Linhares, A., and Brum, P. (2007) Understanding our understanding of strategic scenarios. Accepted for publication, Cognitive Science.

[4] Selected articles from the journals Management Science, Judgement and Decision-Making, Decision Sciences, Decision Analysis, Cognitive Science, Cognition, and Cognitive Systems Research.

[5] Selected doctoral theses.

PREREQUISITE: The JAVA programming language: object orientation, refactoring, design patterns.

(*) President Bush "looked the man in the eye ... and was able to get a sense of his soul". (US NEWS & WORLD REPORT)

Wednesday, June 13, 2007

Jerry Fodor's strange logic

So here I am at EuroCogsci 2007, waiting to talk to Gerd Gigerenzer. Gigerenzer has just given this amazingly interesting talk about intuition, decision, and his work on fast and frugal heuristics. So there I am, in line, right after this lady, to ask Gigerenzer about his stance on Gary Klein's work. Quite surprisingly, out of nowhere jumps this rather strange, annoyingly enthusiastic stalker, loudly throwing criticisms at Gigerenzer's talk:

"You know that all of that is nonsense, right? What you've mentioned--that people don't actually use logic in decision-making. After all, logic is the norm to which decisions or choices should be compared."

At this point I start thinking; hey, fellow, there's a line in here... and so does the lady who was interrupted. Gigerenzer starts to mold a response, only to be interrupted again with something along the lines:

"but it is logic that drives it all; you can't have inconsistencies in the system; logic is the norm; if you drive on the left side of the road and I drive on the left..."

By then, the lady and I are really ready to throw this uninvited guest out of our little chat. She starts mentioning that 'sides of roads' have nothing to do with logic; "these are conventions", she insists. My strategy of driving him away by yawning doesn't work either, so I end up picking up the fight.

"Only in a metaphysical sense should logic be the norm. For something to have a truth value, even in these modern and quite bizarre forms of logic, such as the nonmonotonic ones or fuzzy logic--if you want to say that the water feels warm--any statement will only, or better, can only, have a truth value if there's a precise definition of all the concepts involved: water, feelings, warmth, is-ness, and so on."

His reply, of course, went on like this: "do you know how hard it is to define concepts? how to come up with precise definitions of concepts?", and on and on. Right on display, a few feet from us, was Jerry Fodor's book, Concepts: where cognitive science went wrong. At that moment, I was already convinced that this guy was Jerry Fodor himself; the lady was gone, mad as hell, and Gigerenzer, in a stroke of magic, had disappeared from the freak show. I had written about logic and its unspoken, untenable presupposition of the doctrine of metaphysical realism, so I was feeling really excited to finally show the truth to someone in this world. So after some more voice-raising and euphoric defense of radically incompatible positions, Fodor mentions "Gotta go now", and runs to some conference room. Talk about a strange and rather funny experience. Of course I think Fodor is incredibly wrong about most things cog-sci; it's also incomprehensible to me how he can have such influence. But one thing was great, in fact: the guy had no idea who I was, and didn't care at all; all he wanted was a good fight. Kudos to him for that.

After some 5 minutes I find Gigerenzer, and I finally have a chance to talk to him:

"Was that Jerry Fodor?"
"Yes", he says.
"What a bully!".
"Oh yes, he's a bully", Gigerenzer responds.

In a bizarre way, I got more respect for Fodor than I had from reading his books and all that foolishness. Despite all the recognition he's gotten, he's really in it for a good fight. He might as well be, I suppose. If you're going to be forever wrong in your scientific career, at least be wrong in a bizarrely entertaining way, I'd guess.