Saturday, November 29, 2008

I just accidentally a coca-cola bottle. Is this bad?

I just accidentally Pearl Harbor. Is this dangerous?

I just accidentally the entire Pacific Fleet. Is this dangerous?

I just accidentally International Business Machines. Is this dangerous?

I just accidentally a whole human race. Is it dangerous?

I just accidentally women's intelligence. Is that dangerous?

I just accidentally Microsoft. Is it bad?

I just accidentally the Constitution of The United States of America. Is it dangerous?

I just accidentally the Roman Empire. Is that dangerous?

I just accidentally the Universe. It is dangerous?

I just accidentally the United States Dollar. Is that bad?

I just accidentally consistency in Mathematics. Is it bad?

I just accidentally an atom. Is it dangerous?

I just accidentally the unipolar moment. Is it dangerous?

I just accidentally reason. Is it dangerous?

I just accidentally God. Is that bad?

I just accidentally my freedom. Is it dangerous?

The answer is yes. Yes, humor lies in analogy. As for the background, trust me, you don't really want to see it.

Sunday, October 26, 2008

Analogy at the core of the financial meltdown

People in my research group are tired of hearing that meaning is constructed either out of experience or out of analogies. Over and over they have heard the refrain: "consider DNA: DNA is like a zipper, like computer code, etc." (I think I can safely assume readers of this blog know the drill.) Right after the DNA bit, I ask: now, what is this thing called a collateralized debt obligation that is bringing about the whole financial meltdown? And we get astonished faces, as nobody has any good analogy (or anchors in semantic space, a rather technical name for it).

But now the fun has been spoiled. Check this out:

Crisis explainer: Uncorking CDOs from Marketplace on Vimeo.

Here are more: the credit crisis as Antarctic expedition, and untangling credit default swaps. These are well worth your time, unless you happen to be the George Soros among our readers.

Finally, here's a link Anna's just pointed out: The Metaphor Observatory.

Wednesday, October 1, 2008

Slashdot needs a Finance/Economics section

With all the turmoil going on in the world of finance and economics, with the sh*t about to hit the fan, Lehman Brothers gone, and worst-case scenarios rapidly unfolding, Slashdot should take a serious look at this. Slashdot needs a finance/economics section, for at least two reasons: (i) whatever happens in finance and economics will soon be reflected in the tech/science scene; (ii) there are loads of geeky finance people and quant economists there already.

I, for one, .....ah, you get it.

Monday, September 29, 2008

Have a nice day

Capyblanca is sorry to report that physicists failed to end the world as promised. Luckily, the economists went ahead anyway and brought us the apocalypse as promised.

Now that everything belongs to the state, all decisions are centralized, and we are all communists, fellow comrades, here's a third historical event, brought to you by those nasty capitalists: the first privately funded space launch.

Some of the most amazing 9 minutes of video ever! Except, of course, for those that include Scarlett Johansson.

I wish Carl Sagan were alive to see this. As one commenter said, despite it all, the future's looking better. Or at least it does from high up there.

Monday, September 8, 2008

The final post!

So this is it for Homo sapiens sapiens. Apocalypse duly scheduled for Wednesday. I hope the Large Hadron Collider is on Twitter, so we can follow the end of the world with the obligatory final "WTF?" tweet.

Hey, it was fun while it lasted, wasn't it?


"I'm going to heaven for the weather, and to hell for the company"

--Mark Twain

Friday, September 5, 2008

Legislating a change

"Introducing a technology is not a neutral act--it is profoundly revolutionary. If you present a new technology to the world you are effectively legislating a change in the way we all live. You are changing society, not some vague democratic process. The individuals who are driven to use that technology by the disparities of wealth and power it creates do not have a real choice in the matter."

Karl Schroeder

Sunday, August 24, 2008

Search intensity versus search diversity: a false tradeoff?

By A. Linhares & Horacio Hideki Yanasse

Abstract. An implicit tenet of modern search heuristics is that there is a mutually exclusive balance between two desirable goals: search diversity (or distribution), i.e., search through a maximum number of distinct areas, and search intensity, i.e., maximum search exploitation within each specific area. We claim that the hypothesis that these goals are mutually exclusive is false in parallel systems. We argue that it is possible to devise methods that exhibit high search intensity and high search diversity during the whole algorithmic execution. We consider how distance metrics, i.e., functions for measuring diversity (given by the minimum number of local search steps between two solutions), and coordination policies, i.e., mechanisms for directing and redirecting search processes based on the information acquired through the distance metrics, can be combined into a framework for the development of advanced collective search methods that exhibit both search intensity and search diversity simultaneously. The presented model also avoids the undesirable occurrence of a problem we refer to as the 'ergometric bike phenomenon'. Finally, this work is one of the very few analyses carried out at the level of meta-meta-heuristics, because all arguments are independent of the specific problem handled (such as scheduling, planning, etc.), of the specific solution methods (such as genetic algorithms, simulated annealing, tabu search, etc.), and of the specific neighborhood or genetic operators (2-opt, crossover, etc.).
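For concreteness, here is a minimal sketch of one such distance metric, under the assumption of permutation-encoded solutions and a swap neighborhood (the encoding and function name are my illustration, not anything from the paper): the minimum number of swaps separating two solutions is the classic Cayley distance, computable by cycle counting.

```python
def swap_distance(a, b):
    """Minimum number of element swaps turning permutation a into b
    (Cayley distance): n minus the number of cycles of b relative to a."""
    pos = {v: i for i, v in enumerate(a)}   # position of each element in a
    perm = [pos[v] for v in b]              # b expressed in a's coordinates
    seen = [False] * len(perm)
    cycles = 0
    for i in range(len(perm)):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return len(perm) - cycles
```

A coordination policy could then, for instance, redirect any search process whose distance to its nearest sibling falls below some threshold, keeping intensity high within each region while preserving diversity across regions.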

Accepted for publication in Applied Intelligence.

Tuesday, August 19, 2008

Watch your brain to feel less pain

There is a well-known but ultimately ungrounded myth that deeper probing and understanding of the human brain and behavior threatens our agency and freedom. Here I'll share a fascinating brief presentation that concludes otherwise: such understanding largely increases our elbow room.

Historically, psychotherapy slowly co-evolved with the behavioral and life sciences of its age, generally with a lag of up to two decades. The Gestalt psychotherapists were influenced by trends in Gestalt psychology. The biopsychiatry revolution was only possible due to enormous research in neurochemistry. The relatively recent Schema Therapy emerged from the advances of second-generation cognitive science. So, could the technological developments of the Decade of the Brain grant an analogous novel contribution to mental well-being? Neuroscientist Christopher deCharms shows that the answer is positive.

Functional magnetic resonance imaging is now sufficiently advanced to allow us to contemplate, in real time, the underlying neural correlates of our mental life. This level of organization of our behavior is no longer a black box that can only be interfered with through neurosurgery. Knowledge of the activation patterns of your own brain states can now be used to guide your next mental states.

Here's the research paper on deCharms's work with chronic pain patients using the fMRI technology of Omneuron (deCharms's company).

Friday, August 15, 2008

Some papers out there...

[Planning to keep this page updated]


Dynamic sets of potentially interchangeable connotations: A theory of mental objects

Alexandre Linhares

Abstract: Analogy-making is an ability with which we can abstract from surface similarities and perceive deep, meaningful similarities between different mental objects and situations. I propose that mental objects are dynamically changing sets of potentially interchangeable connotations. Unfortunately, most models of analogy seem devoid of both semantics and relevance extraction, postulating analogy as a one-to-one mapping without connotation transfer.

Accepted commentary, Behavioral and Brain Sciences


Search intensity versus search diversity: a false tradeoff?

Alexandre Linhares and Horacio Hideki Yanasse

Abstract. An implicit tenet of modern search heuristics is that there is a mutually exclusive balance between two desirable goals: search diversity (or distribution), i.e., search through a maximum number of distinct areas, and search intensity, i.e., maximum search exploitation within each specific area. We claim that the hypothesis that these goals are mutually exclusive is false. We argue that it is possible to devise methods that exhibit high search intensity and high search diversity during the whole algorithmic execution. We consider how distance metrics, i.e., functions for measuring diversity (given by the minimum number of local search steps between two solutions), and coordination policies, i.e., mechanisms for directing and redirecting search processes based on the information acquired through the distance metrics, can be combined into a framework for the development of advanced collective search methods that exhibit both search intensity and search diversity simultaneously. The presented model also avoids the undesirable occurrence of a problem we refer to as the 'ergometric bike phenomenon'. Finally, this work is one of the very few analyses carried out at the level of meta-meta-heuristics, because all arguments are independent of the specific problem handled (such as scheduling, planning, etc.), of the specific solution methods (such as genetic algorithms, simulated annealing, tabu search, etc.), and of the specific neighborhood or genetic operators (2-opt, crossover, etc.).

Accepted, Applied Intelligence


Decision-making and strategic thinking through analogies

Alexandre Linhares

Abstract. When faced with a complex scenario, how does understanding arise in one’s mind? How does one integrate disparate cues into a global, meaningful whole? Consider the chess game: how do humans avoid the combinatorial explosion? How are abstract ideas represented? The purpose of this paper is to propose a new computational model of human chess intuition and intelligence. We suggest that analogies and abstract roles are crucial to solving these landmark problems. We present a proof-of-concept model, in the form of a computational architecture, which may be able to account for many crucial aspects of human intuition, such as (i) concentration of attention to relevant aspects, (ii) how humans may avoid the combinatorial explosion, (iii) perception of similarity at a strategic level, and (iv) a state of meaningful anticipation over how a global scenario may evolve.

Under Review, Cognitive Systems Research


Questioning Chase and Simon’s (1973) “Perception in Chess”

Alexandre Linhares & Anna Freitas

Abstract. We believe chess is a game of abstractions: pressures; force; open files and ranks; time; tightness of defense; old strategies rapidly adapted to new situations. These ideas do not arise in current computational models, which apply brute force and rote memorization. In this paper we assess the computational models CHREST and CHUMP, and argue that chess chunks must contain semantic information. This argument leads to a rather bold claim, as we propose that key conclusions of Chase and Simon's (1973) influential study stemmed from a non sequitur.

Under Review


A note on the problem of inappropriate contextual ads

Alexandre Linhares, Paula Mussi França, & Christian Nunes Aranha

Abstract. A contemporary industry of growing significance is web advertising. Ads are inserted automatically in these systems: engines access the content of a search or of a webpage and attempt to find, using advanced economic and statistical models, a "contextual" insert of maximum expected utility. In this work we present the problem of inappropriate contextual ads. We distinguish between three types of undesirable contextual ads: (i) non-contextual ads; (ii) token-substitution ads; and (iii) inappropriate contextual ads. Inserts can be extremely inappropriate: in fact, shocking, outrageous, and disrespectful. We term such cases catastrophic contextual ads. Despite being relatively rare, these catastrophic inserts may occur in large absolute numbers. Drawing on recent studies from cognitive science, we identify a series of reasons for such phenomena. Finally, we propose some tentative solutions to the problem.

Under Review


Theory of constraints and the combinatorial complexity of the product mix decision

Alexandre Linhares

Abstract – The theory of constraints proposes that, when production is bounded by a single bottleneck, the best product mix heuristic is to select products based on their ratio of throughput per unit of constraint use. This is not true when production is limited to integer quantities of final products. We demonstrate four facts that go directly against current thought in the TOC literature. For example, there are cases in which the optimum product mix includes products with the lowest product margin and the lowest ratio of throughput per unit of constraint time, simultaneously violating the margin heuristic and the TOC-derived heuristic. Such failures are due to the NP-hardness of the product mix decision problem, also demonstrated here.
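To see the failure mode concretely, here is a toy two-product instance (all numbers invented for illustration, not taken from the paper) in which the TOC ratio heuristic is beaten by exhaustive search over integer quantities:

```python
from itertools import product as cartesian

# Hypothetical instance: (throughput per unit, bottleneck minutes per unit)
products = {"A": (13, 6), "B": (10, 5)}
capacity = 10  # bottleneck minutes available

def greedy_mix(products, capacity):
    """TOC heuristic: fill the bottleneck in decreasing order of
    throughput per minute of constraint use."""
    mix, left = {}, capacity
    for name, (tp, use) in sorted(products.items(),
                                  key=lambda kv: kv[1][0] / kv[1][1],
                                  reverse=True):
        qty = left // use          # integer quantities only
        mix[name] = qty
        left -= qty * use
    return mix

def throughput(products, mix):
    return sum(products[n][0] * q for n, q in mix.items())

def optimal_mix(products, capacity):
    """Brute-force search over integer quantities (fine for tiny instances)."""
    names = list(products)
    bounds = [range(capacity // products[n][1] + 1) for n in names]
    best, best_tp = None, -1
    for qtys in cartesian(*bounds):
        if sum(q * products[n][1] for n, q in zip(names, qtys)) <= capacity:
            tp = sum(q * products[n][0] for n, q in zip(names, qtys))
            if tp > best_tp:
                best, best_tp = dict(zip(names, qtys)), tp
    return best
```

Here the ratio heuristic picks product A (13/6 ≈ 2.17 throughput per minute), wastes 4 bottleneck minutes, and yields throughput 13; the integer optimum is two units of B, yielding 20.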

Under Review

Tuesday, July 29, 2008

Can RASCALS become truly evil?

Greetings! This is my introductory entry and it’s a great honor to be able to contribute to this weblog - many thanks to Alexandre and the other colleagues from our local FARG group.

I’m an undergraduate student in Philosophy with a bit of (rather unsuccessful, unfortunately) baggage in Computer Engineering deeply interested in cognitive science and how its empirical research intersects with traditional problems in the philosophy of mind, my main obsession being human consciousness. I intend to provide interesting commentary, spanning from fields such as neuroscience, AI and evolutionary psychology.

There has been some media coverage this year of an ongoing and novel AI research program, the RASCALS (an acronym for Rensselaer Advanced Synthetic Character Architecture for "Living" Systems) cognitive architecture. Based in the Department of Cognitive Science at Rensselaer Polytechnic Institute, RASCALS was remarkable in deploying, in the virtual environment of the famous massively multiplayer online game/social networking community Second Life, two functional avatars: Eddie, a 4-year-old boy, and Edd Hifend, a robot. Here's Eddie during a demo, facing a well-known experiment in developmental psychology:

RASCALS is logic-based AI with some unconventional twists. According to the researchers' Game On conference paper, the main ambition behind RASCALS is designing, at a relatively quick pace, autonomous agents that satisfy contemporary theories of personal identity, which is quite a hard task.

How does one design a synthetic person that doesn't merely perform evil acts but is genuinely evil? What does it take for an autonomous virtual agent to truly have a moral character or at least a toy model of it? Merely exhibiting convincing complex evil behavior, something that several videogame characters can already accomplish, is insufficient. Moral character demands advanced conceptualization skills, rich knowledge representation and a belief system besides behavioral dispositions. The main theoretical virtual persona mentioned in the article, referred to as E, is modeled after a fictional cinematic post-apocalyptic warlord drawn from prominent examples of antagonists in the entertainment industry (I suppose General Bethlehem from the motion picture The Postman is a good candidate). So, how to make E embody evilness? The strategy of the design team involves an adequate formal definition of evil, a way to deal with propositions such as the agent's goals, beliefs and desires in an extremely expressive fashion, a contextually appropriate knowledge base, sufficient fluency in a natural language and a believable presentation (the RPI team designed for another demo a sophisticated facial expression system for head avatars).

The RASCALS logical inference system is pluralistic, encompassing probabilistic inference, for a better grasp of human-like reasoning, besides standard forms of logical inference. Following a well-known (and virulently disputed) tradition in cognitive science and artificial intelligence, the architecture employs a language of thought: all cognitive phenomena are unified in a formal language, in this case a bundle of symbolic logics, with first-order logic sufficing for some processes while higher-level cognitive procedures use complementary logics such as epistemic and deontic logics. Communication is possible because the formal mentalese is converted, via a Natural Language Module, into plain English in a highly sophisticated process.

Here comes another distinctive feature of RASCALS: the epistemic robustness of its agents. Merely reaching, via logical analysis, the correct answers to a query posed in natural language is insufficient. For actual understanding (or quasi-understanding, to be charitable about the difficulties associated with intentional states), the agent should be able to justify those answers. The implication is that for every answer in natural language there is a corresponding justification in formal logic, based on the agent's knowledge base and its reasoning capabilities, which can also be translated into natural language.
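As a sketch of that idea only (this is emphatically not the RPI code; the predicates and rules below are invented), one can picture a toy propositional knowledge base where every derived answer carries its own justification trace:

```python
# Toy knowledge base: a sketch of "every answer carries a justification"
# via forward chaining with a recorded derivation. Invented predicates.
rules = [
    ({"has_goal(harm)", "believes(harm_is_good)"}, "evil"),
    ({"destroys(city)"}, "has_goal(harm)"),
]
facts = {"destroys(city)", "believes(harm_is_good)"}

def prove(query, facts, rules):
    """Forward-chain to fixpoint, recording how each fact was derived;
    return the justification for the query, or None if underivable."""
    derived = dict.fromkeys(facts, "given")
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and body <= derived.keys():
                derived[head] = f"from {sorted(body)}"
                changed = True
    return derived.get(query)
```

Nothing here approaches epistemic or deontic logic, of course; it only illustrates the shape of an answer-plus-justification pair.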

Next October, a RASCALS agent with a very large knowledge base, still in development, will run on Blue Gene and interact with humans. However unimpressive those results may turn out to be (although optimists abound), this cognitive architecture, alongside the new wave of the digital entertainment industry, is refreshing and is rekindling the interest and enthusiasm that once permeated AI research in its ambition to face and realistically model human-like behavior and cognition, in this case by functionalizing propositional attitudes.

Monday, July 28, 2008

America's long-term strategy over the US Dollar

Follow Zimbabwe, where 100 Billion dollars can get you three full eggs.

May god bless America.

Hat tip.

Wednesday, July 9, 2008

Capyblanca is now open source (under GPL)

In 1995, Douglas Hofstadter wrote: "A visit to our Computer Science Department by Dave Slate, one of the programmers of Chess 4.6, at that time one of the world's top chess programs, helped to confirm some of these fledgling intuitions of mine. In his colloquium, Slate described the strict full width, depth-first search strategy employed by his enormously successful program, but after doing so, confided that his true feelings were totally against this type of brute-force approach. His description of how his ideal chess program would work resonated with my feelings about how an ideal sequence-perception program should work. It involved lots of small but intense depth-first forays, but with a far greater flexibility than he knew how to implement. Each little episode would tend to be focused on some specific region of the board (although of course implications would flow all over the board), and lots of knowledge of specific local configurations would be brought to bear for those brief periods."

That was, of course, many years before I would meet Doug.

How do chess players make decisions? How do they avoid the combinatorial explosion? How do we go from rooks and knights to abstract thought? What is abstract thought like? These are some of the questions driving the Capyblanca project. The name, of course, is a blend of José Raúl Capablanca and Hofstadter's original Copycat project, implemented by Melanie Mitchell, which brought us so many ideas. Well, after almost five years, we have a proof of concept in the form of a running program, and we are GPL'ing the code, so interested readers might take it in new directions which we cannot foresee. Some instructions are in the paper, and feel free to contact me as you wish.

The manuscript is under review in a journal, and a copy of the working paper follows below. Interested readers might also want to take a look at some of our previous publications in AI and Cognitive Science:

(i) Linhares, A. & P. Brum (2007), "Understanding our understanding of strategic scenarios: what role do chunks play?", Cognitive Science, 31, pp. 989-1007.

(ii) Linhares, A. (2005), "An active symbols theory of chess intuition", Minds and machines, 15, pp. 131-181.

(iii) Linhares, A. (2000), "A glimpse at the metaphysics of Bongard Problems", Artificial Intelligence, Elsevier Science , 121 (1-2), pp. 251-270.

Any feedback will be highly appreciated!



Thursday, June 26, 2008

Dropbox is an AMAZING start-up!

Now, this is amazing!

Monday, June 23, 2008

I, for one, welcome our new übergeek overlords!

Slashdot, my favorite L337 geek hangout, is discussing an interview with DugHof. The discussion is actually pretty cool, the long mentions of "the singularity that is Kurzweil" notwithstanding.

Though Doug usually dismisses hacker culture, I don't, and I think we should really welcome our new slashdot overlords. Two basic reasons here, beyond the whole power-to-the-people cliché: first, some /. discussions are really worthwhile, and some participants bring very insightful analysis in their comments. Actually, a great way to learn about all things technical is, right after the obvious Wikipedia lookup, googling "whatever you're after, dude" and catching up with the discussions. And who knows? Maybe one day this blog will even be slashdotted. That would be nice for our pagerank and world domination plans, which brings me to the second reason.

Now the second reason is a serious one. As progress in FARG architectures evolves, we will need more and more lookups into the most cutting-edge stuff, such as GPGPU or reflection. A general FARG framework is essentially an operating system, from the inside and from the outside. From the inside it packs application and problem loaders, various types of memory management (external, working memory, semantic memory, episodic memory, etc.), task allocation and scheduling, and parallel multiprocessing. From the outside, it is also like an operating system, enabling new kinds of "FARG apps". This is, in fact, the coolest operating system to be working on, and I am astonished that companies like Microsoft or Sun or IBM plainly do not know what this is all about. We could make some serious long-term contributions to computer science; yet, sometimes, it feels that even with all the geekdom love that Doug eventually gets, the word in FCCA and later works is yet to be spread.

Or, to put it in /. terms, I feel that FARG == new PARC(). If you don't agree, then, seriously: you must be new here.

Monday, June 16, 2008

A car and a person?

Surprisingly, the blob on the right is identical to the one on the left after a 90-degree rotation.

In the absence of enough information about an object's identity, one searches for contextual evidence, forcing the categorization to fit the regularities of the world.

As we have seen before in this blog, the contextual cognitive module might be unique, acting in the same way across every human task. This is an image-processing example, but it could just as well be a natural-language-processing example.

Even when objects can be identified via intrinsic information, context can simplify object discrimination by cutting down on the number of object categories, scales, and positions that need to be considered.

Monday, May 5, 2008

Slaves wanted!

This is tomorrow's presentation at FGV. We're looking for ambitious undergrads who want to take a shot at making something meaningful. Hopefully, someone will be interested.


Wednesday, April 23, 2008

Brain reacts to fairness as it does to money and chocolate

In the study, subjects were asked whether they would accept or decline another person's offer to divide money in a particular way. If they declined, neither they nor the person making the offer would receive anything. Some of the offers were fair, such as receiving $5 out of $10 or $12, while others were unfair, such as receiving $5 out of $23.

Saturday, April 19, 2008

Help wanted: Open sourcing a slipnet viewer

(This page will be updated with further details as soon as possible)
Hello world!

After reading this fantastic book and playing with this, I think one good way to proceed is to open-source some parts of a FARG framework which are not its core, but are extremely useful and everyone could benefit from them.

I'm thinking first about a slipnet viewer: a Java class that receives a list of nodes and links and creates a nice view of the ongoing slipnet at any point in time. A node might consist of its activation level and a bitmap to display inside the node (sometimes we may want to display something other than a string), while a link might include just the nodes it connects, (perhaps) a direction, and a string label (to show distances, and for those with IS-A beliefs).

The class would get this information and create another bitmap, now with a beautiful view of the current slipnet: close nodes appear close to each other, distant nodes appear distant, and their activation levels are displayed. From my past life in combinatorial optimization, I have a hunch that this layout problem is NP-hard, so we may need to resort to some heuristic that works well enough.
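The eventual class should be in Java, but as a sketch of the kind of heuristic I mean, here is a toy force-directed (spring) layout in Python: linked nodes attract toward a preferred distance, all pairs repel. The function name and constants are made up, and a real viewer would want something far more robust.

```python
import math, random

def spring_layout(nodes, links, iters=200, k=1.0, step=0.05):
    """Toy force-directed layout: linked nodes spring toward distance k,
    all pairs repel. nodes: list of ids; links: (a, b) pairs.
    Returns a dict mapping node id -> (x, y)."""
    random.seed(42)  # deterministic starting positions
    pos = {n: (random.random(), random.random()) for n in nodes}
    linked = {frozenset(l) for l in links}
    for _ in range(iters):
        force = {n: [0.0, 0.0] for n in nodes}
        for a in nodes:
            for b in nodes:
                if a == b:
                    continue
                dx = pos[b][0] - pos[a][0]
                dy = pos[b][1] - pos[a][1]
                d = math.hypot(dx, dy) or 1e-9
                # spring toward distance k if linked, else weak repulsion
                f = (d - k) if frozenset((a, b)) in linked else -k * 0.1 / d**2
                force[a][0] += f * dx / d
                force[a][1] += f * dy / d
        for n in nodes:
            pos[n] = (pos[n][0] + step * force[n][0],
                      pos[n][1] + step * force[n][1])
    return pos
```

Feeding the resulting coordinates into the bitmap renderer would then be a separate, trivial step.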

It should be in Java, to run on everybody's machine, and also because everyone knows Java and could either call it from their own weirdo language or rewrite the code for their project.

In this initial stage, no windows or anything fancy: just get the data in and output a bitmap with the slipnet. But if our collaboration works, we could go bigger, spawning a window in a new thread and having a great display running in true parallel style. That would, I think, be a first step from which everyone would benefit.

This is small stuff, of course, but it's annoying to redo it every day in every single project. It takes time, and distracts from the core issues. Our productivity will rise. So, as Michael Roberts once said, instead of having "obsessive geniuses" working alone in the basement, we should finally stop doing the same things over and over again. We should finally start collaborating like a small research group.

Or like a start-up company.

HP Upline: a disappointing bet

Here's the email I've received from HP's Upline program.

On Sat, Apr 19, 2008 at 12:46 AM, HP Upline Paypal Notifications wrote:

Dear HP Upline Service subscriber,

On Thursday, April 17th, HP suspended operation of the HP Upline Service. We fully anticipate that suspension of the Upline Service will be temporary and short in duration, and will notify you when the Upline Service is operational again.

Please accept our sincere apology for this unanticipated interruption of your access to the Upline Service. We appreciate your patience as we launch this new service, and are working hard to minimize inconvenience caused by this service interruption.

If you are a resident of the United States, your subscription will remain in effect and you will be able to continue using the Upline Service for the duration of your subscription period once the Upline Service is operational again. Thank you for your patience, and we look forward to providing you with the HP Upline Service.

If you are not a resident of the United States, we regretfully must inform you that the initial launch of the HP Upline Service was intended for United States residents only. Unfortunately, our filtering tools did not adequately screen for subscribers residing outside of the United States. We thank you for your early adoption of the Upline Service, and look forward to being able to provide the HP Upline Service to you when we launch it in your country of residence. Since the HP Upline Service is presently offered for use within the United States only, we will be discontinuing your current subscription. After we notify you that the Upline Service is operational again, you will have a limited period of time to access and download files that you have uploaded onto the HP Upline Service servers. After that time period, you will no longer have access to your present HP Upline Service account. If you would like to be contacted by us when the HP Upline Service is made available in your country of residence, please send us an email at We apologize for any inconvenience.


The HP Upline Team

And here's my response:
FROM: Alex Linhares
Dear Hulu provincians,
you should bear in mind that the web is international. --Alex Linhares

Well, what else could I say?

Thursday, April 17, 2008

This world is changing, brother...

...and the speed of change is accelerating... and I would like to invite readers in Rio de Janeiro to our Pangea day Broadcast.

Wahhabism is slowly going down and out...

Technology which costs thousands and takes years to develop goes for 50 bucks and is developed in 5 months...

The gigantic exodus toward cities and mega-cities might actually be a good thing...

The Pentagon might learn something from failure...

And this might be just a temporary fad, or a huge turning point...

Thursday, April 10, 2008

Monetizing 2010's social networks

Mike Arrington is proposing a new, mostly phone-based social network. It's really a great read. The basic idea is that you could browse people on the go and find out who's around you in a restaurant or other places. You would broadcast your profile and receive broadcasts of other people's profiles, with privacy settings, of course. I wrote about this on Newsvine back in 2006.

There are two types of comments on TechCrunch: "absurd!" and "awesome!"

I would bet that there is a strong correlation to the commenter’s age.

The people who say "absurd!" fall into a few camps:
(i) "girls will never use it"
(ii) "only übergeeks will use it"
(iii) "are you a lunatic? What about government oversight?"
(iv) "there's no way to monetize it. Ads on a small phone screen?!?"
(v) "Don't you think we are getting geeker and geeker all the time? Don't you think we are looking more and more at screens all the time?"

These objections are wrong, and Michael is absolutely right. This is a game changer, and a huge billion-dollar thing. I wrote about this in 2006 (back when I thought Newsvine was going somewhere).

Here are my views on the objections people have placed there:

To those concerned with government oversight: that's a serious issue, but the way to handle it is to guarantee users that no government entity of any kind can sign up as a "business", and that no info will be sent to them unless users explicitly agree. Governments might make fake user profiles, but they won't be able to communicate with you. No email, phone, or other contact info should be broadcast (unless someone is really adventurous).

Why girls will use it: girls will broadcast only minimal info, a photo and a name, or even a nickname. But they will stalk the guys, looking at the photos, videos, and resumes, to find out who each one is. To the ones they like, they may reveal their full profiles. If I've learned anything at all about women, it is that knowing who a guy is matters more to them than his looks alone. Today, they look around and see only "random" guys, so appearance is the only factor once you're in a restaurant. Now, if you're broadcasting info that shows you have a future, girls take notice.

Why everyone will use it, not just übergeeks.

Ever heard that beautiful quote, “the future is unevenly distributed”?

These people are reasoning: this has no value for me, so I don’t want it.

But this is like email was in 1992. I had an account, but nobody else did, so its value was zero. Take away my email accounts now and I can't communicate. Early adopters will be geeks, as always, but soon the network effects will kick in and the value will increase rapidly (for everyone, including grandma).

So these guys who say they'll never use it are in for a kick when they're standing in a line for 20 minutes and some server announces: "your luggage has been found, Mr. Arrington". "How, if he wasn't even in this bloody lost-luggage complaint line we're standing in?" "Well, he was broadcasting his info to our system (which is on the network, and is the way to finance it)." So your phone picks up that you're in a luggage-complaint system, you click on it and fill out a form. The old fellows look like penguins on a line. And you get served first. Then they finally 'get it'. The thing has value.

Of course, we are getting geekier and geekier all the time. We will be looking at screens more and more. Yet life will be smoother.

The iPhone is currently the only phone with a user interface smooth enough for this, without having to press a precise sequence of 50 buttons to browse the people around you. But perhaps Android will be another, and perhaps the other phone companies can catch up in a few years.

About Facebook: it should obviously get in on this right now, or become MySpace and face the consequences. But Facebook should first change to include the many dimensions a person has.

* I have a PhD in computer science, and I’m working on computational cognitive models. I might want to meet people with similar pursuits; but that’s only part of someone’s dimensions;
* I am crazy about Hôtel Costes, or about In Search of Sunrise
* I am an associate member of the club of rome, and I would certainly like to meet fellow members if/when we are close by.
* I am a professor of management science; I might want to broadcast that info in some places, and not others.
* I am an entrepreneur, and I’d like to meet other similar creatures (or broadcast that info to stalking Venture Capitalists).

Facebook only offers one dimension, and that is a serious shortcoming, because you want to select the specific info that will be broadcast. You don’t want to broadcast your funny drunk party pics or serious business info everywhere (in a random bar here in Rio you could actually be kidnapped). But you might do it in a high-profile scenario.

Now, here’s how to monetize it: free for users; businesses pay a relatively small fee. This could be attractive to businesses, because they could attract people by broadcasting their existence, bookmarking people, and offering discounts and automatic self-serve.

Jeffreys has commented on how to monetize: “How do you monetize it? When you walk past a store with a sale you might be interested in, it tells you. Like Amazon’s recommendation engine. Most of the time you’ll ignore it, but it will alert you to something you want to buy often enough to pay for itself.”

Here’s some building on top of that: the device NEVER, NEVER, distracts you. It receives your info, and bookmarks you, but it never sends you an email or a call or anything spam-like. If you want to know why, read "On Intelligence".

Suppose you’re the owner of a restaurant. You pay to access the service as a business. Then some smoking-hot girls come in, and you bookmark them as interesting for your place. Later, on slow, empty days, you have all these cooks and waiters you’re paying for, but no customers: so you send out, to 40 chicks, a $50 offer, valid for 1 hour. If you have bookmarked hundreds of prospects, people will start appearing. And people in a restaurant attract more people.

But nobody should ever be bothered personally. Instead, they should have their own offers page: They go in that page and see offers for free drinks at place X, 80% discount (at an empty restaurant), 50% discount at an empty hotel, or 70% discount at an empty seat on a flight to New York. All of these offers have an expiration time. So you can take it, think about it for a while, or leave it.
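For what it's worth, the offers-page mechanics above fit in a few lines. Everything here (class names, the pull-only page) is my own toy illustration, not any existing system:

```python
# A toy sketch (all names hypothetical) of the "offers page": businesses
# post discounts with an expiration time; users only ever pull, never get pushed.
import time

class Offer:
    def __init__(self, business, description, expires_at):
        self.business = business
        self.description = description
        self.expires_at = expires_at  # absolute timestamp

    def is_live(self, now=None):
        return (now if now is not None else time.time()) < self.expires_at

class OffersPage:
    """Each user has one; nothing is ever emailed or pushed to them."""
    def __init__(self):
        self.offers = []

    def post(self, offer):
        self.offers.append(offer)

    def live_offers(self, now=None):
        # expired offers simply vanish from the page -- nobody is bothered
        return [o for o in self.offers if o.is_live(now)]

page = OffersPage()
page.post(Offer("Restaurant X", "$50 off, next hour only", expires_at=100.0))
page.post(Offer("Hotel Y", "50% off an empty room", expires_at=50.0))
print([o.business for o in page.live_offers(now=60.0)])  # ['Restaurant X']
```

The expiration time is what makes the whole thing polite: an ignored offer costs nobody anything, and the page is always current without a single notification being sent.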

WHY THIS IS ENORMOUSLY VALUABLE: because once a plane has taken off, each empty seat costs a load of money but brings in no revenue--basically, that's money down the drain. Businesses have excess capacity, and price flexibility helps in balancing that capacity with actual demand--an incredible economic incentive for businesses to pay something like $1000/year, or $1/customer bookmark/year, or maybe $1/100 offers, or more. So, in the long run, the winning network will be getting truckloads of money from thousands of businesses, small and large, without annoying anyone.

And businesses will be happy: The problem of managing capacity, utilization or operation levels, and demand, will be minimized for businesses that jump in. Life will be smoother for businesses (better yield management), and for people (great personalized offers, no intruding ads).

This is one of the most promising ideas right now. In the long run, there will be only one social network that people actually log in to, and it will be the one in which they can browse people. On the go.

By the way, there are many more ideas, and I would be considering your job offers now. See you soon, Zillionaires!

Saturday, April 5, 2008

What is a relation?

Each relation has some elements, and each element usually has a role within that relation. Moreover, there is a function which takes these elements and creates new elements. A relation finds items of certain kinds (their roles), and creates other items of certain kinds, possibly with a particular value.

For example, in NUMBO:
Multiplication: item1 (operand) item2 (operand) item3 (result)

SUCCESSOR: item1 (letter_value) item2 (letter_value) Alphabetic_Distance(item1, item2)=1 (number);

CHESS: Attack: item1(piece) item2(piece) move_distance(item1,item2)=1(number) (attacker in item1) (attacked in item2)
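To make the pattern concrete, here is a minimal sketch of a relation as roles plus a function; the names are mine, not from NUMBO or any FARG codebase:

```python
# A relation binds elements to roles and applies a function that may
# create (or test for) a new element -- per the definition above.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Relation:
    name: str
    roles: tuple                # the kinds of items the relation looks for
    apply: Callable[..., Any]   # builds the new item from the role fillers

# NUMBO-style multiplication: two operands yield a result.
multiplication = Relation(
    name="multiplication",
    roles=("operand", "operand"),
    apply=lambda a, b: a * b,
)

# Copycat-style successorship: two letters whose alphabetic distance is 1.
def alphabetic_distance(a: str, b: str) -> int:
    return ord(b) - ord(a)

successor = Relation(
    name="successor",
    roles=("letter_value", "letter_value"),
    apply=lambda a, b: alphabetic_distance(a, b) == 1,
)

print(multiplication.apply(6, 7))   # 42
print(successor.apply("a", "b"))    # True
print(successor.apply("b", "a"))    # False
```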

This is why I've found the quote from DeMorgan so sinister.

Wednesday, April 2, 2008

Decision, Intuition & Perception

So here's the presentation I gave at FGV today. Unfortunately, we didn't get it on tape, so no sound. I hope it may still be of some value for those interested.

Tuesday, April 1, 2008

That's some massively parallel temperature right there, Dude!

I've been thinking about massively parallel FARG, distributed temperature, and distributed coderacks:

Now, whenever a codelet is about to change something up, why add it to the global, central, unique, coderack? I don’t see a good reason here, besides the “that’s what we’ve always done” one. If a codelet is about to change some structures in STM, why not have (i) a list (or a set, or a collection, etc.) of structures under question & (ii) create a list-subordinated coderack on the fly? Instead of throwing codelets into a central repository, they go directly to the places in which they were deemed necessary. There are multiple repositories for codelets, multiple coderacks.

I argued that I liked the idea because (i) it enables parallelism of the true variety, (ii) it helps us to solve the stale codelets issue, and (iii) programming can (in principle) be done gradually, still in simulated parallel.

Now, I was wrong about temperature all along. Here's a new idea:

Imagine that each of the coderacks has the following behavior: Get a RANDOM codelet, then run it.

That's massively parallel temperature right there. Have a nice day. Thanks for stopping by.

Unconvinced? Think about this: some coderacks will start to become really small (as Abhijit pointed out in the comments previously), with one or two codelets, then being emptied and destroyed. That means that at that particular point (or thing) in STM, temperature is really low. However, other coderacks will be full of stuff waiting to run; which means that there, temperature is running high. Distributed temperature with high randomness in hot spots, low randomness in cool spots.
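If it helps, the whole mechanism fits in a toy sketch (mine, not Capyblanca code): one coderack per STM structure, one control policy--pick a random codelet--and the rack's queue length doubling as local temperature:

```python
# One coderack per STM structure; each rack just runs a random codelet.
# A rack with many waiting codelets behaves "hot"; a near-empty one "cold".
import random

class Coderack:
    def __init__(self, structure):
        self.structure = structure
        self.codelets = []

    def add(self, codelet):
        self.codelets.append(codelet)

    def step(self):
        """Pick a RANDOM codelet and run it; that's the whole control policy."""
        if not self.codelets:
            return  # rack is empty: this spot in STM is done (cold)
        codelet = self.codelets.pop(random.randrange(len(self.codelets)))
        codelet(self.structure)

    @property
    def local_temperature(self):
        # a crude proxy: the more codelets waiting, the hotter the spot
        return len(self.codelets)

rack = Coderack(structure={"name": "group abc"})
rack.add(lambda s: print("proposing bond in", s["name"]))
rack.add(lambda s: print("testing strength in", s["name"]))
print(rack.local_temperature)  # 2
rack.step()                    # runs one of the two, at random
print(rack.local_temperature)  # 1
```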

Maybe this has to be coupled with some information about concepts, but I'm not sure anymore. I think that it just might be one of those wonderful, marvelous, emergent effects we take so much pleasure in playing with.

Sunday, March 23, 2008

How to escape email tsunamis

Mike Arrington at TechCrunch is crying like a baby Scoble, as he faces upwards of 2400 unread emails.

How did we get here?  And how to get out of this mess?

There are two aspects here: human psychology and technology.  Email was designed with the wrong metaphor in mind:  email was designed as a way to send letters.  But the cost of sitting down and writing a letter and sending it through the post office was way higher than writing up an email and pressing send.  

The right metaphor for email is workflow.  And instead of one inbox, each of us should have something like 15 different inboxes, which should help, and show to others, our workflow (and how much we are behind).

How long does it take to read and handle 2400 emails? All eternity, of course. Looks like we've finally found a reason to be immortal, after all.  

But how long would it take to fill 15 spots for a job, given 2400 applications?  About three to five hours, most likely.  As soon as you take a look at the applications, psychologically, you know what you don't want, and that speeds up the process enormously.  You are in job-applicant-reviewing mode, and that focuses your attention and effort.  It is an entirely different thing from reading and replying to email.

Mike says there's a real opportunity for entrepreneurs out there; and here's my reply.  Here’s what you’re looking for: 90% of anyone’s inbox can be classified into 5, 10 or 20 different issues. For instance, someone might:

(i) want an interview with you
(ii) want to discuss a “serious” issue in a published post in TC
(iii) want you to know about their “hot” startup
(iv) want to invite you to speak/participate at a “key” event

…and on and on it goes. Your decisions come down to the evaluation of what “serious” really is, or how “hot” the startup is, etcetera.

REAL friends might be sending out the stupid YouTube links and photos and such, but most people's email falls into these categories--which the user can determine, and create forms for.

So, I go into gmail and type Mike’s address. Gmail puts me on hold: “gathering Mike's workflow requests for you”. Then a list of, say, 15 items like the above comes up. Then if I want to “invite to speak/participate in an event”, I fill out a form with the fields you have defined. If I still want to send an email, then I do it knowing that I’ll be breaking your workflow and you may never reply/read.

Whenever you have the time, you can review all such requests. And software could even rank the requests based on your own settings.  If a field in the form is an amount, for example, it is quite probably important.

This would improve workflow tremendously. Most of the time we would be in “review interview requests” mode, or “review employee travel requests” mode, or “review relevant hilarious stuff not on Digg” mode, or review-of-something-else mode.

Strict workflow categories, and user-designed forms, might even reduce spam, as spammers would have to target an individual's form, instead of the free-for-all that is email. 

Finally, users could also define post-mortem actions on forms.  For example, if one of your forms is "employee travel request", when you review those, that could even generate another form for your boss, or for the accountant. 
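A back-of-the-envelope sketch of the idea, with categories and field names invented purely for illustration:

```python
# Workflow inboxes: senders fill a typed form instead of free-form email,
# and the recipient reviews one category at a time.

# The recipient defines their categories and the form fields for each.
WORKFLOW_FORMS = {
    "interview request": ["outlet", "topic", "deadline"],
    "event invitation": ["event", "date", "travel_paid"],
    "startup pitch": ["company", "one_liner", "funding_stage"],
}

# One inbox per category, instead of one giant undifferentiated pile.
inboxes = {category: [] for category in WORKFLOW_FORMS}

def submit(category, **fields):
    """Reject anything that doesn't match the recipient's form."""
    expected = WORKFLOW_FORMS[category]
    if sorted(fields) != sorted(expected):
        raise ValueError(f"form for {category!r} needs fields {expected}")
    inboxes[category].append(fields)

submit("interview request", outlet="TC", topic="web2.0", deadline="Friday")
submit("startup pitch", company="Sprout", one_liner="Flash apps", funding_stage="seed")

# Review one mode at a time:
print(len(inboxes["interview request"]))  # 1
print(len(inboxes["startup pitch"]))      # 1
```

The spam-reduction point falls out of the same check: a message that doesn't match any form is rejected before it ever reaches a human.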

Please god Google, go build it; then make it a web standard. 

I really need it.

This is NOT alive. It is NOT an animal. But is it like your toaster?

Recently, in FARG's internal mailing lists, we have discussed hyperbole in cognitive science, and all the fantastic claims that numerous cognitive scientists make. Every would-be Dr. Frankenstein out there seems to claim to have grasped the fundamental theory of the mind, and next year we will finally have the glorious semantic web, we will be translating War and Peace into Hindi in 34 milliseconds, we will be having love and sex with robots, and, of course, we will be able to download our minds into a 16GB iPhone and finally achieve humanity's long-sought ideal of immortality.

Doug Hofstadter, of course, has long been dismissing these scenarios as nothing short of fantastic.

I think it’s safe to say that, in these sacred halls of CRCC, we are Monkeyboy-Darwinist-Gradualists who are really disgusted by “excluded middle” theories: either something understands language or it doesn’t. Either something has consciousness or it doesn’t. Either something is alive or it isn’t. Either something thinks or it doesn’t. Either something feels pain or it doesn’t.

I guess it’s safe to say that we believe in gradualism. The lack of gradualism, and the jump from interesting ideas to “next year this will become a human being”, goes deeply against my views. So my take on the whole issue of grand statements in cognitive science is that much more gradualism is needed. People seem to have enormously simplistic views of the human mind.

As gradualists, we do, however, believe in the longer-term possibility of the theories being developed and cognitive mechanisms being advanced and machines becoming more and more human-like.

In fact, Harry has even stopped (but note that “stopping” is temporary, and is different from “quitting” or “leaving”) his work on Bongard problems. Harry feels that our work will lead to dreadful military stuff. In fact, it is already happening, as he points out, and here is an eerie example. (Look at how this thing escapes a near-certain fall on the ice.)

This “baby” is called the BigDog, and, yes, it is funded by DARPA. So there we have it, Harry: already happening. The military will get their toys, with or without us.

And this is gradualism at its best. Remember: this thing is not an animal. It is not alive.

But is it just as mechanical as a toaster?

Friday, March 21, 2008

Three types of connotations

I believe, and this is a central aspect of development in the Human Intuition Project Framework, that there are three types of connotations: properties, relations, and chunks.

A property is anything that has a value. It could be a numerical value, a name, or anything else.

A chunk is a mental object, holding stuff together. Any mental object is a chunk.

Finally, a relation maps from a set of (properties, chunks, and relations) to create new properties, chunks, or relations. It is very much like the notion of a relation in mathematics. And this quote from Augustus DeMorgan, mixing psychology and mathematics, is just eerie to my ears:

"When two objects, qualities, classes, or attributes, viewed together by the mind, are seen under some connexion, that connexion is called a relation."
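A minimal sketch of the three kinds, with types and names reflecting my own reading of the above rather than any actual Human Intuition Project code:

```python
# The three connotation kinds: properties, chunks, relations.
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class Property:
    """Anything that has a value: a number, a name, anything else."""
    name: str
    value: Any

@dataclass
class Chunk:
    """A mental object: just stuff held together."""
    parts: List[Any] = field(default_factory=list)

@dataclass
class Relation:
    """Maps a set of connotations to new connotations, as in mathematics."""
    name: str
    build: Callable[..., Any]

# DeMorgan's "connexion viewed by the mind", in miniature:
shape = Property("shape", "round")
wheel = Chunk(parts=[shape])
part_of = Relation("part_of", build=lambda part, whole: part in whole.parts)
print(part_of.build(shape, wheel))  # True
```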

Thursday, March 20, 2008

Ohhh I'll be sooo popular during the Apocalypse!

The Exorcist Economist is running a story (now on the cover) about the financial meltdown and the Fed's rate cut. Dramatic times. I've placed the following comment, and, if anything, I will be really popular as the apocalypse unfolds and we start to eat rats. Here's my top-rated comment, followed by some favorite ones:

linhares wrote: March 18, 2008 22:38

Ok. I am a little on the slow side. So let me get this straight.

The US is a country that lives on borrowing.

The dollar is falling like a skydiver.

Commodity prices are soaring, and lower US demand won't change much of that.

By cutting the rates, correct me if I'm wrong, those trillions of dollars held by the Chinese, Indians, Arabs, Brazilians, and so on, will lose value even faster.

So, if these countries ever decide to protect their (hard-earned) cash, they should switch. Perhaps to the new alternative in town, the Euro.

And if they switch, which they should rationally do, the dollar ceases to be the world standard, inflation in america skyrockets overnight, and the value of goods inside the usa becomes a huge unknown.

But of course I'm wrong. The best way to treat a (debt) alcoholic is to give it an ample supply of liquor, for sure.

Recommended (57)

Now take a look at this:

Great Cthulhu wrote: March 19, 2008 17:14

Personally, I am doing everything I can to rack up over $1 billion in personal debt, knowing full well that the US government will bail me out, as I'll be someone "too big to let fail" at that point. The problem is in getting enough credit cards to max out. You'd think with all the junk mail those credit card companies send out, I'd have over $1 billion in my back pocket by now, but I don't. With a credit limit of even $1 million per card, I'd need a thousand of the things to hit my target debt. Most only start with $25,000-$100,000, depending on what fake information I've used to get free subscriptions to magazines that target corporate executives, and that means I'll need about 10,000-40,000 credit cards for my project.

I guess I should just face it. I'm too poor to matter to the Fed. Oh well... a dollar collapse will at least make illegal immigration a moot issue, leave the US unable to pay for its wars overseas, and will give me the opportunity to discover a new career catering to the wants and needs of foreign tourists here in the states... perhaps I could supplement my income as a taxi driver at nights and earn some precious Euros, Pounds, Canadian Dollars, and Pesos in my tips... that would be something!

Recommended (23)

Or my personal favorite:

cognate wrote: March 18, 2008 22:00

Ahhhh, the wonders of the welfare-warfare state.

Better brush up on your potato planting, chicken feeding, and goat milking skills - just like in Doctor Zhivago.

Recommended (11)

Humorous remarks aside, this is of sobering consequence. The real risk is that of a change of historical proportions.

The USA has benefited for over a century now, as the dollar became the world standard, the international safe haven against bad times. But there is an immense, unsustainable, amount of dollars stashed in the Bank of China, or in the Brazilian Central Bank, or with the Arabs.

If these folks decide that they want to protect their reserves, they will switch. And if there is such a switch, it will quickly turn into a massive free-for-all international panic against the dollar. God knows what might happen afterwards.

And what's most eerie about the whole thing is the following set of facts:
  1. I've yet to see Hillary talk about the weak dollar as America's largest problem
  2. I've yet to see McCain talk about the weak dollar as America's largest problem
  3. I've yet to see Obama talk about the weak dollar as America's largest problem
The dollar's skydiving adventures, and the myopia with which one of America's greatest assets is being handled, give me an awful feeling of a dramatic change without parallel or precedent; something that could make 1929 look like a walk in the park.

(For what it's worth, I'm stocking up on Euros... and I'm leaving Citibank.)

Maybe we should even start praying... please god... just prove this scenario wrong.

Thursday, March 13, 2008

The Economist's look at Jeff Hawkins

The Economist is finally mentioning Jeff Hawkins's work, in its current Technology Quarterly.

Mr Hawkins's fascination with the brain began right after he graduated from Cornell University in 1979. While working at various technology firms, including Grid Computing, the maker of the first real laptop computer, he became interested in the use of pattern recognition to enable computers to recognise speech and text. In 1986 he enrolled at the University of California, Berkeley, in order to pursue his interest in machine intelligence. But when he submitted his thesis proposal, he was told that there were no labs on the campus doing the kind of work he wanted to do. Mr Hawkins ended up going back to Grid, where he developed the software for the GridPad, the first computer with a pen-based interface, which was launched in 1989.

Unfortunately, the piece is focused much more on the man than on Numenta's work.

And I, of course, couldn't resist commenting:
Hawkins is certainly right in his "grand vision", but he is also certain to stumble into 3 serious problems that will take decades to solve.

First, he believes "pattern-recognition is a many-to-one mapping problem". That is simply wrong, as I have pointed out in the journal "Artificial Intelligence", ages ago. If he is a rapid learner, he will backtrack from that mistake soon. Otherwise he may spend ages on this classic error.

Secondly, his HTM model is currently using a statistical model with numerous design decisions. That by itself would not be problematic if not for the fact that ALL nodes (and here we are talking about gigantic amounts of those) would be following precisely the same statistical rule. The problem with that approach is that the slightest, imperceptible error in a parameter setting or a design decision will propagate rapidly, and amplify into utter gibberish.

Finally, it is virtually impossible with current technology to "debug" NUMENTA's approach. We are talking about gigantic matrices filled with all kinds of numbers in each spot... how does one understand what the system is doing by looking at, at most, a few thousand cells at a time?

I have given PhD courses concerning "cognitive technology", and I do believe that a new information-processing revolution is going to hatch perhaps in a decade. However, we are dealing with much harder territory here than creating successful silicon valley startups. The tiniest error propagates throughout the network, and is rapidly amplified. It is impossible to debug with current technology. And some of his philosophical perspectives are simply plain wrong.

While I do think Hawkins will push many advances, including by firing up youngsters and hackers leaving web2.0, there are others who are building on a much more promising base (google, for instance, Harry Foundalis).

The Holy Bibruq Hath Spoken!

Directly from the pages of the "prophet".

Tuesday, March 11, 2008

Massively parallel codelets?

Some of the things I've been thinking about concern this question: how to make FARG massively parallel? I've written about parallel temperature, and here I'd like to ask readers to consider parallel coderacks.

Like temperature, the coderack is another global, central structure. While it only models what would happen in a massively parallel mind, it does keep us from a more natural, truly parallel, model. Though I'm not coding this right now, I think my sketched solution might even help with the stale-codelet problem Abhijit mentioned:

We need the ability to remove stale codelets. When a Codelet is added to the Coderack, it may refer to some structure in the workspace. While the codelet is awaiting its turn to run, this workspace structure may be destroyed. At the very least, we need code to recognize stale codelets to prevent them from running.
Consider that most codelets fit into one of three kinds: (i) they can propose something to be created/destroyed, (ii) they can evaluate the quality of such change, and (iii) they can actually carry it out.

Now, whenever a codelet is about to change something up, why add it to the global, central, unique, coderack? I don't see a good reason here, besides the "that's what we've always done" one. If a codelet is about to change some structures in STM, why not have (i) a list (or a set, or a collection, etc.) of structures under question & (ii) create a list-subordinated coderack on the fly? Instead of throwing codelets into a central repository, they go directly to the places in which they were deemed necessary in the first place.

Why do I like this idea? First, because it enables parallelism of the true variety. Each of these STM-structure-lists-bound coderacks can be running in their own thread. Moreover, it helps us to solve the stale codelets issue, by simply destroying the coderack when something needed inside the lists is gone. If a structure is destroyed, and a codelet was waiting to work on it, the codelet--in fact all the coderacks associated with the structure--can go.
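A toy sketch of that fix (mine; class names hypothetical): bind each coderack to the structure it serves, and staleness disappears by construction:

```python
# Codelets live in a coderack bound to the workspace structure they refer
# to, so destroying the structure discards every waiting codelet with it --
# no staleness checks needed anywhere.
class Structure:
    def __init__(self, name):
        self.name = name
        self.coderack = []   # codelets waiting to work on *this* structure

class Workspace:
    def __init__(self):
        self.structures = {}

    def add(self, name):
        self.structures[name] = Structure(name)

    def post_codelet(self, name, codelet):
        # codelets go straight to the place where they were deemed necessary
        self.structures[name].coderack.append(codelet)

    def destroy(self, name):
        # the structure's coderack dies with it: stale codelets never run
        del self.structures[name]

ws = Workspace()
ws.add("bond a-b")
ws.post_codelet("bond a-b", lambda s: print("evaluating", s.name))
ws.destroy("bond a-b")
print(len(ws.structures))  # 0 -- the waiting codelet is gone too
```

Each surviving structure's coderack could run in its own thread, which is where the true parallelism comes from.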

(I don't know when I'll be able to try this idea out, but hopefully soon.)

Does that make any sense?

Tuesday, March 4, 2008

Cheering over Harvard Girl!

We here at Capyblanca are cheering over our own Harvard girl; who would have imagined?

More seriously, we are celebrating the thesis defense of Mrs Anne Jardim, on the ultimatum bargaining game. Anne is an economist, and she spent the last months completing her research at Harvard Law School. We would never miss the chance to poke some fun at her and celebrate her achievements. Here's a peek at the thesis's conclusion.

Most of economic theory and the literature on decision-making rests upon the assumptions of rationality and maximization of utility. In this thesis, we have provided a review of the modern research literature concerning the ultimatum bargaining problem.

The ultimatum bargaining problem arises in asymmetric situations in which a known amount will be split between two actors--one of which is a proposer for the split, while the other, the responder, accepts or rejects the offer. While the proposer is in a better strategic situation, the responder has the power to block the deal, to the detriment of both proposer and responder. This is not only a recurring problem in applied game theory and economics, but also a theoretically interesting one.

It is recurring because it models a large class of ultimatum situations. It arises in domains as diverse as biology, human relationships, economic behavior between firms, and international relations. When a male marks its territory, that is a kind of ultimatum; it is up to other males to accept it or reject it by fighting. When companies fight publicly, they usually send ultimatum offers through the press: "Unless Apple is willing to alter pricing behavior, NBC will stay out of iTunes". In fact, in many kinds of conflicting-interest scenarios, ultimatums are an important part of the bargaining process. The particular model studied here represents an important set of these situations, and is of great importance in the real world.

Moreover, it is also theoretically interesting, because humans do not respond as economic theory would predict. Quite the contrary: human behavior is enormously far from the expected rational behavior.

This fact has triggered an enormous amount of scientific interest in this game. Many different types of studies are being conducted now. In the table below we present a taxonomy/classification of such studies. This table characterizes our critical review of the literature.

There is not yet a consensus on why people deviate from the expected Nash equilibrium, but these deviations from rationality are informative about human cognition. Current economic theory is based on the normative model of decision-making: decision-making is treated as maximization of utility. However, if that cannot be expected to hold even in very simple scenarios, such as the one studied here, new mathematical models may eventually replace the standard "rational actor" model.

These new models should be as general and applicable as the standard rational actor. But they should also be psychologically plausible. As we have seen, progress in understanding ultimatum bargaining is steady. In the coming decade, as new data and new models are discussed, a consensus may form. As we have seen, ongoing research on ultimatum bargaining, ultimately, may turn out to bring sweeping changes into the nature of economic theory.

Monday, March 3, 2008

Will psychology beat the traditional math methods?

The Netflix challenge will pay $1 million to anyone who improves Netflix's customer suggestion system by 10%, i.e., achieves an error score of less than 0.8563. The best entry so far, at 0.8675, comes from When Gravity and Dinosaurs Unite, and the scores keep creeping lower and lower.
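For readers unfamiliar with the scoring: the challenge is judged by RMSE on predicted star ratings. A quick sketch, using only the figures quoted above:

```python
# RMSE, the Netflix Prize's yardstick, and the gap implied by the
# post's numbers (0.8675 achieved, below 0.8563 to win).
import math

def rmse(predicted, actual):
    """Root-mean-squared error over paired predictions and true ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# toy predictions on a 1-to-5-star scale
print(round(rmse([3.5, 4.0, 2.0], [4, 4, 3]), 4))  # 0.6455

best, target = 0.8675, 0.8563
print(f"{(best - target) / best:.2%} still to shave off")  # 1.29%
```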

When no one expected it, Just a guy in a garage appeared as an outside contender. He is a psychologist who says he has out-of-the-box strategies, while the others suffer from a kind of "collective unconscious". His name is Gavin Potter: a 48-year-old Englishman, a retired management consultant with an undergraduate degree in psychology and a master's in operations research.

We are cheering for you, Gavin Potter.

Monday, February 25, 2008

A burst of FARG activity

Last week saw an immense burst of FARG activity. A new blog has been set up, as have other initiatives. It seems that Michael Roberts is now officially developing a Framework (and applying it to Copycat). More as the story unfolds.

Thursday, February 21, 2008

Raging against the machine

He can bring people together, but can he make history?

Wednesday, February 20, 2008

Top-down AND bottom-up thinkodynamics

If something, anything, is "thinkable", then it is bottom-up; it can be seen or felt and change mental context (e.g., alter contents on the slipnet).

And anything thinkable, however abstract, can also be imagined--thus it also exerts top-down pressure.

A small thought for man, a big leap for FARG computational modeling.

Monday, February 18, 2008

Essence and Accident: Convergence to Michael Roberts' ideas

Take a look at a dolphin and a shark and think about convergent evolution.

I've of course read, a bunch of times, Michael Roberts's ideas on a FARG core engine, encapsulating the essential apart from the accidental in a domain.

But after some email exchanges, I'm stunned to see that many of the ideas we're proposing on this website had also been in his vision. Which brings up the question:

Is it convergence? Are we both right to pose (i) domain-free codelets, (ii) distributed temperature, (iii) slipnet nodes with structure? Are we converging to the same ideas because these are, in a sense, the right ideas?

Or have crimes been committed? Have I simply stolen his blueprints and am now, years later, claiming that I've stumbled into them, and just feel like they're mine because time has passed and when I went back to the drawing board all I could see was what was already in my mind?

Under the advice of my prestigious law firm of Cravath, Swaine & Moore LLP, I plead not guilty.

First, I couldn't understand the details of Michael's ideas back then. I had to study a lot of design patterns along the way in order to see a new kind of flexibility in software, and the full meaning of encapsulation and polymorphism. Years later, having developed Capyblanca to a certain extent, I can appreciate the difficulties inherent in separating essence from accident in FARG. With this baggage I've stumbled upon many similar ideas, like distributed temperature. He argues for distributed temperature, but he doesn't explicitly say why. And many of his ideas and mine are still a little bit different. (I've yet to convince him of the connotation thing, if he doesn't grasp its reasons quite immediately.)

I seriously think this has been the product of convergent evolution. Which makes me optimistic.

We're on the right track.

Sunday, February 17, 2008

A new commentary piece in Behavioral and Brain Sciences

Here's a new commentary on a target article, to appear in Behavioral and Brain Sciences. The full piece is available through email.

Dynamic sets of potentially interchangeable connotations: A theory of mental objects

Alex Linhares

Abstract: Analogy-making is an ability with which we can abstract from surface similarities and perceive deep, meaningful similarities between different mental objects and situations. I propose that mental objects are dynamically changing sets of potentially interchangeable connotations. Unfortunately, most models of analogy seem devoid of both semantics and relevance-extraction, postulating analogy as a one-to-one mapping devoid of connotation transfer.

Tuesday, February 12, 2008

What if codelets are only bossing around?

What if codelets are only bossing and bitching around?

I mean, from what I get from all the code I've seen in so many projects, codelets actually do things. They work directly on representations. Sometimes they scout STM for something to be created; sometimes they build some stuff out there; sometimes they trigger other codelets, etc.

But what if they only sent signals? What if they only were bossing around?

This is something that the Starcat team has started, but it can be done in a deeper way. The advantage would be simple: you could encapsulate all codelets, for any domain existing in the universe, and cleanly separate them from the accidental features of your problem.
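Here's a rough sketch (my own toy, not Starcat's actual design) of codelets that only emit signals, with the domain-specific work pushed entirely into handlers:

```python
# Codelets never touch representations; they just send signals. A
# domain-specific executive interprets the signals, so the codelets
# themselves stay domain-free.
class SignalBus:
    def __init__(self):
        self.handlers = {}

    def on(self, signal, handler):
        self.handlers[signal] = handler

    def send(self, signal, **payload):
        # the only thing a codelet is allowed to do: boss around
        return self.handlers[signal](**payload)

def propose_codelet(bus):
    """A domain-free codelet: it knows nothing about letters or numbers."""
    return bus.send("propose", kind="bond")

bus = SignalBus()
log = []
# the accidental, domain-specific part lives entirely in the handler:
bus.on("propose", lambda kind: log.append(f"letter-string domain builds a {kind}"))
propose_codelet(bus)
print(log)  # ['letter-string domain builds a bond']
```

Swap the handler and the very same codelet works in NUMBO, chess, or anything else: the accident has moved out of the codelet and into the wiring.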

But here comes Christian's Law, once again: "Language compiles everything".

Back to the compiler.


Monday, February 4, 2008

The internet's backbone

Despite all the talk about Apple and Google, Adobe is the coolest company on earth. With Flash and, soon, AIR, they are the backbone of the net. If you want to be stunned and see what they are up to, take a look at these two product lines:

Video. They are moving from low-quality (read: YouTube) videos to high-quality, high-def. In Flash! That is an amazing feat.

Check it out for yourself. Double-click for fullscreen video.

(Hat tip to an amazing flash video blog).

This is going to have an impact on TV broadcasting in the coming years.

If that is not enough, check out Sprout, a new web2.0 company; also using flash, and soon, AIR. It's enormously powerful stuff, and enormously easy to use.

You really have to hand it to those guys. This is beautiful work. They are really changing the curve of the curve.

Sunday, February 3, 2008

Copycat's codelets, refactored (i)

From Analogy-Making as Perception (By Melanie Mitchell), MIT Press.

Here is Melanie's classification of codelets:

Description codelets:

  1. Bottom-up description scout
  2. Top-down description scout
  3. Description strength tester
  4. Description builder

Bond codelets:

  1. Bottom-up bond scout
  2. Top-down bond scout (deals with categories, such as successor)
  3. Top-down bond scout (deals with direction of strings)
  4. Bond strength tester
  5. Bond builder

Group codelets:

  1. Top-down group scout (for categories)
  2. Top-down group scout (for direction)
  3. Group-string scout
  4. Group strength tester
  5. Group builder

Correspondence codelets:

  1. Bottom-up correspondence scout
  2. Important-object correspondence scout
  3. Correspondence strength tester
  4. Correspondence builder

Rule codelets:

  1. Rule scout
  2. Rule strength tester
  3. Rule builder
  4. Rule translator

Finally:

  1. Replacement finder
  2. Breaker
24 codelets in all. Interesting. Not so many.

I wonder how we could use polymorphism to reduce duplicate code and enable the separation of essence from accident.

For example, take the codelet groups above. Each group has a number of different codelets. But look again, and a clear pattern emerges: each group has scouts, testers, and builders. (Also, scouts are either bottom-up or top-down.) I can't convince myself that there isn't a better way to design this than by programming each one individually.

We have to separate the FARG essence from Copycat's letter-string accidents. And, on the way down that road, we will stumble upon general codelets: the holy grail. Codelets general enough to work on any domain; codelets that follow the open/closed principle, open for extension and closed for modification. If we can finish this design this first semester, that will be a powerful moment.
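One hedged way to exploit the scout/tester/builder pattern (my own sketch, not Copycat's actual code): make the three roles generic classes, and push everything domain-specific into a proposal function and a structure subclass. The class names below are hypothetical.

```python
class Structure:
    """Base for domain structures: bonds, groups, correspondences, rules..."""
    kind = "structure"

    def strength(self):
        raise NotImplementedError


class Bond(Structure):
    """A letter-string bond; the only domain-specific class here."""
    kind = "bond"

    def __init__(self, strength_value):
        self._s = strength_value

    def strength(self):
        return self._s


class Scout:
    """Generic scout: proposes a candidate structure of its kind."""

    def __init__(self, propose):
        self.propose = propose       # domain-supplied proposal function

    def run(self, workspace):
        return self.propose(workspace)


class StrengthTester:
    """Generic tester: keeps a structure only if it is strong enough."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def run(self, structure):
        return structure if structure.strength() >= self.threshold else None


class Builder:
    """Generic builder: installs an accepted structure in the workspace."""

    def run(self, workspace, structure):
        workspace.setdefault(structure.kind + "s", []).append(structure)


# Only the lambda knows about letter strings:
scout = Scout(lambda ws: Bond(0.8))
tester = StrengthTester()
builder = Builder()

ws = {}
candidate = scout.run(ws)
accepted = tester.run(candidate)
if accepted:
    builder.run(ws, accepted)
print(len(ws["bonds"]))  # 1
```

With this shape, the 24 codelets collapse into three generic roles plus a handful of small domain-specific proposal functions: closed for modification, open for extension.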

Thursday, January 31, 2008

For Portuguese speakers

This blog is the funniest thing ever. If you go there, leave her a comment.

Don't know about you damn Darwinists...

I don't know about you monkeyboy Darwinists, but I'm sure as hell submitting papers to The Journal of Creation!

Misclassifications of Adenine and Guanine: Serious fraud of scientific evidence?

Dr. A. Linhares
Reverend, Universal Life Church

Abstract. In this article we present conclusive data showing that DNA base pairings of nucleotides--especially some subtle effects involving pyrimidines--explain both (i) why the overall length of a DNA double helix determines the strength of the association between the two strands of DNA and (ii) how Eve, encouraged by a snake, led Adam to eat of The Forbidden Fruit. Moreover, by showing that Adenine and Guanine have been intentionally misnamed with an explicit agenda against a clearer comprehension of the events surrounding His death and resurrection (a mischievous fraud for which its proponents shall repent), we demonstrate that such entities should have been characterized with the letters E and V. With our more modern terminology, we have been able to uncover 100 billion instances of the naturally occurring "EVE" sequence in the genome of the species in which no one is free from sin. Needless to say, it is no coincidence that 100 billion is exactly the estimated number of galaxies in the observable universe, and even if supermassive black holes are found at the centers of galaxies--a speculative, yet potentially possible finding--that would not in any statistically significant sense bear any effect on the data concerning the fact that He called Abraham and his progeny to be the means for saving all of humanity, or related phenomena.

So long, you fools! --Alex

Wednesday, January 30, 2008

Open markets are oh-so-beautiful!

That's the reason communism failed; nothing beats open markets and their incentive systems. Full article from Slate.

Semantic web dreams and a strategy for Yahoo!

From a cognitive science perspective, the semantic web is still years and years away--at least a full decade. What I mean is the set of complex mechanisms involved in creating meaning, not the usual ridiculous hyperbole out there.

Consider, for example, the fact that when the TAM flight crashed in São Paulo last July, the news pages were full of contextual ads urging readers to "Fly TAM".

Or take a look at these contextual ads (hat tip to Digg):

The best ideas on this issue are Bob French's--though he doesn't address 'contextual ads' in particular, but rather the whole problem of meaning extraction from text databases, which semantic web engineers keep stumbling over. This paper is one of the funniest, and most intelligent, things I've ever read.

The "long tail" will always be algorithmic. The "fat head" will always be mainstream. The "middle ground" will be social. This naturally suggest a strategy for Yahoo! (which TechCrunch says is failing--and it just might be).

Yahoo! isn't mainstream media, nor algorithmic (like Google). From this point of view, I think what they should do becomes clear: They should strive to dominate the middle space.

Yahoo! should go further and acquire Digg. It should subordinate all of its strategy to having all content, including ads, brought up by social voting. If an ad is buried, let it go, just like every other piece of content. In the short run, most likely, only ads from Apple or Ron Paul will appear; in the long run, only good, socially targeted content should rise.

Meanwhile, algorithmic contextual ads will keep suggesting to stone people to death, to find debt, and to burn babies.