Sunday, March 23, 2008

How to escape email tsunamis

Mike Arrington at TechCrunch is crying like a baby Scoble, as he faces upwards of 2400 unread emails.

How did we get here?  And how do we get out of this mess?

There are two aspects here: human psychology and technology.  Email was designed with the wrong metaphor in mind: it was designed as a way to send letters.  But the cost of sitting down, writing a letter, and sending it through the post office was way higher than the cost of typing up an email and pressing send.

The right metaphor for email is workflow.  And instead of one inbox, each of us should have something like 15 different inboxes, which would help our workflow, show it to others, and make clear how far behind we are.

How long does it take to read and handle 2400 emails? All eternity, of course. Looks like we've finally found a reason to be immortal, after all.  

But how long would it take to fill 15 spots for a job, given 2400 applications?  About three to five hours, most likely.  As soon as you take a look at the applications, psychologically, you know what you don't want, and that speeds up the process enormously.  You are in job-applicant-reviewing mode, and that focuses your attention and effort.  It is an entirely different thing from reading and replying to email.

Mike says there's a real opportunity for entrepreneurs out there; and here's my reply.  Here’s what you’re looking for: 90% of anyone’s inbox can be classified into 5, 10 or 20 different issues. For instance, someone might:

(i) want an interview with you
(ii) want to discuss a “serious” issue in a published post in TC
(iii) want you to know about their “hot” startup
(iv) want to invite you to speak/participate at a “key” event

…and on and on it goes. Your decisions come down to the evaluation of what “serious” really is, or how “hot” the startup is, etcetera.

REAL friends might be sending out the stupid YouTube links and photos and such, but most of anyone's mail falls into categories like these, which the user can define and create forms for.

So, I go into Gmail and type Mike’s address. Gmail puts me on hold: “gathering Mike's workflow requests for you”. Then a list of, say, 15 items like the above comes up. If I want to “invite to speak/participate in an event”, I fill out a form with the fields you have defined. If I still want to send a plain email, then I do it knowing that I’ll be breaking your workflow and you may never read it or reply.

Whenever you have the time, you can review all such requests. And software could even rank the requests based on your own settings.  If a form has an “amount” field, for instance, the request is quite probably important.
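To make that ranking step concrete, here is a minimal sketch in Python. All of the names (RequestForm, ReviewerSettings, the field weights) are hypothetical; this is not any real Gmail feature or API, just one way the "rank requests by your own settings" idea could look.

```python
# Hypothetical sketch: the inbox owner defines request forms, and incoming
# requests are ranked by the owner's own settings at review time.
from dataclasses import dataclass, field

@dataclass
class RequestForm:
    category: str          # e.g. "invite to speak/participate at an event"
    fields: dict           # filled in by the sender

@dataclass
class ReviewerSettings:
    weights: dict = field(default_factory=dict)   # per-field importance

    def score(self, request: RequestForm) -> float:
        total = 0.0
        for name, value in request.fields.items():
            w = self.weights.get(name, 0.0)
            # numeric fields (like an "amount") contribute their magnitude;
            # everything else just counts as present
            total += w * (value if isinstance(value, (int, float)) else 1.0)
        return total

settings = ReviewerSettings(weights={"amount": 1.0, "days_until_event": -0.1})
requests = [
    RequestForm("speaking invitation", {"amount": 5000, "days_until_event": 60}),
    RequestForm("hot startup pitch", {"amount": 0, "days_until_event": 2}),
]
# The review queue, highest-ranked request first:
for r in sorted(requests, key=settings.score, reverse=True):
    print(r.category, settings.score(r))
```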

This would improve workflow tremendously. Most of the time we would be in “review of interview requests” mode, or “review of employee travel requests” mode, or “review of relevant hilarious stuff not yet on Digg” mode, or review-of-something-else mode.

Strict workflow categories, and user-designed forms, might even reduce spam, as spammers would have to target each individual's forms instead of the free-for-all that is email.

Finally, users could also define post-review actions on forms.  For example, if one of your forms is "employee travel request", reviewing one of those could even generate another form for your boss, or for the accountant, as in the sketch below.
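Again purely as a hypothetical sketch, reusing the RequestForm type from the earlier snippet: reviewing one form could emit follow-up forms routed to other people.

```python
# Hypothetical sketch: reviewing a form can generate follow-up forms
# routed to other people (the approval policy here is arbitrary).
def review_travel_request(request: RequestForm) -> list:
    approved = request.fields.get("amount", 0) <= 2000
    if not approved:
        return []
    return [
        RequestForm("travel approval notice", {"route_to": "boss", **request.fields}),
        RequestForm("reimbursement", {"route_to": "accountant", **request.fields}),
    ]
```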

Please god Google, go build it; then make it a web standard. 

I really need it.

This is NOT alive. It is NOT an animal. But is it like your toaster?

Recently, on FARG's internal mailing lists, we have discussed hyperbole in cognitive science and all the fantastic claims that numerous cognitive scientists make. Every would-be Dr. Frankenstein out there seems to claim to have grasped the fundamental theory of the mind: next year we will finally have the glorious semantic web, we will be translating War and Peace into Hindi in 34 milliseconds, we will be having love and sex with robots, and, of course, we will be able to download our minds into a 16GB iPhone and finally achieve humanity’s long-sought ideal of immortality.

Doug Hofstadter, of course, has long been dismissing these scenarios as nothing short of fantastic.

I think it’s safe to say that, in these sacred halls of CRCC, we are Monkeyboy-Darwinist-Gradualists who are really disgusted by “excluded middle” theories: Either something understands language or it doesn’t. Either something has consciousness or it doesn’t. Either something is alive or it isn’t. Either something thinks or it doesn’t. Either something feels pain or it doesn’t.

I guess it’s safe to say that we believe in gradualism. The lack of gradualism, and the jump from interesting ideas to “next year this will become a human being”, goes deeply against my views. So my take on the whole issue of grand statements in cognitive science is that much more gradualism is needed. People seem to have enormously simplistic views of the human mind.

As gradualists, we do, however, believe in the longer-term possibility of the theories being developed and cognitive mechanisms being advanced and machines becoming more and more human-like.

In fact, Harry has even stopped (but note that “stopping” is temporary, and is different from “quitting”, or “leaving”) his work on Bongard problems. Harry feels that our work will lead to dreadful military stuff. In fact, it is already happening, as he points out, and here is an eerie example. (Look at how this thing recovers from a near-certain fall on the ice.)


This “baby” is called the BigDog, and, yes, it is funded by DARPA. So there we have it, Harry: already happening. The military will get their toys, with or without us.

And this is gradualism at its best. Remember: this thing is not an animal. It is not alive.

But is it just as mechanical as a toaster?

Friday, March 21, 2008

Three types of connotations

I believe, and this is a central aspect of development in the Human Intuition Project Framework, that there are three types of connotations: properties, relations, and chunks.

A property is anything that has a value. It could be a numerical value, a name, or anything else.

A chunk is a mental object, holding stuff together. Any mental object is a chunk.

Finally, a relation maps from a set of (properties, chunks, and relations) to create new properties, chunks, or relations. It is very much like the notion of a relation in mathematics. And this quote from Augustus De Morgan, mixing psychology and mathematics, is just eerie to my ears:

"When two objects, qualities, classes, or attributes, viewed together by the mind, are seen under some connexion, that connexion is called a relation."

Thursday, March 20, 2008

Ohhh I'll be sooo popular during the Apocalypse!

The Exorcist Economist is running a story (now on the cover) about the financial meltdown and the Fed's rate cut. Dramatic times. I've posted the following comment, and, if anything, I will be really popular as the apocalypse unfolds and we start eating rats. Here's my top-rated comment, followed by some favorites:

linhares wrote:March 18, 2008 22:38

Ok. I am a little on the slow side. So let me get this straight.

The US is a country that lives on borrowing.

The dollar is falling like a skydiver.

Commodity prices are soaring, and lower US demand won't change much of that.

By cutting the rates, correct me if I'm wrong, those trillions of dollars held by the Chinese, Indians, Arabs, Brazilians, and so on, will lose value even faster.

So, if these countries ever decide to protect their (hard-earned) cash, they should switch. Perhaps to the new alternative in town, the Euro.

And if they switch, which they should rationally do, the dollar ceases to be the world standard, inflation in america skyrockets overnight, and the value of goods inside the usa becomes a huge unknown.

But of course I'm wrong. The best way to treat a (debt) alcoholic is to give it an ample supply of liquor, for sure.

Recommended (57)
Now take a look at this:
Great Cthulhu wrote:March 19, 2008 17:14

Personally, I am doing everything I can to rack up over $1 billion in personal debt, knowing full well that the US government will bail me out, as I'll be someone "too big to let fail" at that point. The problem is in getting enough credit cards to max out. You'd think with all the junk mail those credit card companies send out, I'd have over $1 billion in my back pocket by now, but I don't. With a credit limit of even $1 million per card, I'd need a thousand of the things to hit my target debt. Most only start with $25,000-$100,000, depending on what fake information I've used to get free subscriptions to magazines that target corporate executives, and that means I'll need about 10,000-40,000 credit cards for my project.

I guess I should just face it. I'm too poor to matter to the Fed. Oh well... a dollar collapse will at least make illegal immigration a moot issue, leave the US unable to pay for its wars overseas, and will give me the opportunity to discover a new career catering to the wants and needs of foreign tourists here in the states... perhaps I could supplement my income as a taxi driver at nights and earn some precious Euros, Pounds, Canadian Dollars, and Pesos in my tips... that would be something!

Recommended (23)

Or my personal favorite:
cognate wrote:March 18, 2008 22:00

Ahhhh, the wonders of the welfare-warfare state.

Better brush up on your potato planting, chicken feeding, and goat milking skills - just like in Doctor Zhivago.

Recommended (11)
========================
Humorous remarks aside, this is of sobering consequence. The real risk is that of a change of historical proportions.

The USA has benefited for over a century now, as the dollar became the world standard, the international safe haven against bad times. But there is an immense, unsustainable amount of dollars stashed in the Bank of China, in the Brazilian Central Bank, and with the Arabs.

If these folks decide that they want to protect their reserves, they will switch. And if there is such a switch, it will quickly turn into a massive free-for-all international panic against the dollar. God knows what might happen afterwards.

And what's most eerie about the whole thing is the following set of facts:
  1. I've yet to see Hillary talk about the weak dollar as America's largest problem
  2. I've yet to see McCain talk about the weak dollar as America's largest problem
  3. I've yet to see Obama talk about the weak dollar as America's largest problem
The dollar's skydiving adventures, and the myopia with which one of America's greatest assets is being handled, give me an awful feeling of a dramatic change without parallel or precedent; something that could make 1929 look like a walk in the park.

(For what it's worth, I'm stocking up on Euros... and I'm leaving Citibank.)

Maybe we should even start praying... please god... just prove this scenario wrong.

Thursday, March 13, 2008

The Economist's look at Jeff Hawkins

The Economist is finally mentioning Jeff Hawkins's work, in its current Technology Quarterly.

Mr Hawkins's fascination with the brain began right after he graduated from Cornell University in 1979. While working at various technology firms, including Grid Computing, the maker of the first real laptop computer, he became interested in the use of pattern recognition to enable computers to recognise speech and text. In 1986 he enrolled at the University of California, Berkeley, in order to pursue his interest in machine intelligence. But when he submitted his thesis proposal, he was told that there were no labs on the campus doing the kind of work he wanted to do. Mr Hawkins ended up going back to Grid, where he developed the software for the GridPad, the first computer with a pen-based interface, which was launched in 1989.

Unfortunately, the piece focuses much more on the man than on Numenta's work.

And I, of course, couldn't resist commenting:
Hawkins is certainly right in his "grand vision", but he is also certain to stumble into three serious problems that will take decades to solve.

First, he believes "pattern-recognition is a many-to-one mapping problem". That is simply wrong, as I pointed out in the journal Artificial Intelligence ages ago. If he is a rapid learner, he will backtrack from that mistake soon. Otherwise he may spend ages on this classic error.

Secondly, his HTM model currently uses a statistical model with numerous design decisions. That by itself would not be problematic, if not for the fact that ALL nodes (and here we are talking about gigantic numbers of them) follow precisely the same statistical rule. The problem with that approach is that the slightest, imperceptible error in a parameter setting or a design decision will propagate rapidly and amplify into utter gibberish.
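A toy illustration of that point (this is not Numenta's HTM algorithm, just the general arithmetic): when every level of a deep hierarchy applies the same slightly mis-calibrated rule, the error compounds.

```python
# Toy illustration: the same slightly wrong rule, applied at every level,
# turns an imperceptible parameter error into a large output error.
def propagate(signal: float, levels: int, gain: float) -> float:
    for _ in range(levels):
        signal *= gain        # every node follows precisely the same rule
    return signal

print(propagate(1.0, levels=100, gain=1.00))   # 1.0   : perfectly calibrated
print(propagate(1.0, levels=100, gain=1.01))   # ~2.70 : a 1% error, amplified
```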

Finally, it is virtually impossible with current technology to "debug" Numenta's approach. We are talking about gigantic matrices filled with all kinds of numbers in each spot... how does one understand what the system is doing by looking at, at most, a few thousand cells at a time?

I have given PhD courses on "cognitive technology", and I do believe that a new information-processing revolution will hatch, perhaps within a decade. However, we are dealing with much harder territory here than creating successful Silicon Valley startups. The tiniest error propagates throughout the network and is rapidly amplified. It is impossible to debug with current technology. And some of his philosophical perspectives are simply plain wrong.

While I do think Hawkins will push many advances, not least by firing up youngsters and hackers leaving web 2.0, there are others who are building on a much more promising base (google, for instance, Harry Foundalis).

The Holy Bibruq Hath Spoken!

Directly from the pages of the "prophet".

Tuesday, March 11, 2008

Massively parallel codelets?

Some of the things I've been thinking about concern this question: how do we make FARG massively parallel? I've written about parallel temperature, and here I'd like to ask readers to consider parallel coderacks.

Like temperature, the coderack is another global, central structure. While it only models what would happen in a massively parallel mind, it does keep us from a more natural, truly parallel model. Though I'm not coding this right now, I think my sketched solution might even help with the stale codelet problem Abhijit mentioned:

We need the ability to remove stale codelets. When a Codelet is added to the Coderack, it may refer to some structure in the workspace. While the codelet is awaiting its turn to run, this workspace structure may be destroyed. At the very least, we need code to recognize stale codelets to prevent them from running.
Consider that most codelets fit into one of three kinds: (i) they can propose that something be created or destroyed, (ii) they can evaluate the quality of such a change, and (iii) they can actually carry it out.

Now, whenever a codelet is about to change something, why add it to the global, central, unique coderack? I don't see a good reason here, besides "that's what we've always done". If a codelet is about to change some structures in STM, why not (i) keep a list (or a set, or a collection, etc.) of the structures in question and (ii) create a list-subordinated coderack on the fly? Instead of throwing codelets into a central repository, they go directly to the places where they were deemed necessary in the first place.

Why do I like this idea? First, because it enables parallelism of the true variety: each of these structure-bound coderacks can run in its own thread. Moreover, it helps us solve the stale codelet issue, by simply destroying the coderack when something needed inside its list is gone. If a structure is destroyed and a codelet was waiting to work on it, the codelet--in fact every coderack associated with the structure--can go.
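Here is a minimal sketch of what I mean, in Python. This is not the actual FARG/Capyblanca code, and the names are mine; it just shows codelets being posted to a coderack owned by the structure they refer to, rather than to one global rack, so that destroying the structure discards its pending codelets.

```python
# Sketch: per-structure coderacks instead of one global coderack.
import random
import threading

class Coderack:
    """A rack of (urgency, codelet) pairs bound to a single structure."""
    def __init__(self):
        self._codelets = []
        self._lock = threading.Lock()

    def post(self, urgency, codelet):
        with self._lock:
            self._codelets.append((urgency, codelet))

    def run_one(self):
        """Run one codelet, chosen with probability proportional to urgency."""
        with self._lock:
            if not self._codelets:
                return False
            weights = [u for u, _ in self._codelets]
            i = random.choices(range(len(self._codelets)), weights=weights)[0]
            _, codelet = self._codelets.pop(i)
        codelet()
        return True

class Structure:
    """A workspace (STM) structure that owns its own, subordinated coderack."""
    def __init__(self, name):
        self.name = name
        self.rack = Coderack()

    def destroy(self):
        # the rack, and every codelet still waiting in it, goes with the structure
        self.rack = None

# Example: a proposer codelet is posted to the structure it refers to,
# not to a single global coderack.
group = Structure("sameness-group")
group.rack.post(urgency=10, codelet=lambda: print("evaluating", group.name))

# Each structure's rack could be driven by its own thread:
worker = threading.Thread(target=group.rack.run_one)
worker.start()
worker.join()
```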

(I don't know when I'll be able to try this idea out, but hopefully soon.)

Does that make any sense?

Tuesday, March 4, 2008

Cheering over Harvard Girl!

We here at Capyblanca are cheering for our very own Harvard girl; who would have imagined?

More seriously, we are celebrating the thesis defense of Mrs Anne Jardim, on the ultimatum bargaining game. Anne is an economist, and she spent the last few months completing her research at Harvard Law School. We would never miss the chance to poke some fun at her and celebrate her achievements. Here's a peek at the thesis's conclusion.

==
Most of economic theory and the literature on decision-making rests upon the assumptions of rationality and maximization of utility. In this thesis, we have provided a review of the modern research literature concerning the ultimatum bargaining problem.

The ultimatum bargaining problem arises in asymmetric situations in which a known amount will be split between two actors--one of whom proposes the split, while the other, the responder, accepts or rejects the offer. While the proposer is in a better strategic situation, the responder has the power to block the deal, to the detriment of both proposer and responder. This is not only a recurring problem in applied game theory and economics, but also a theoretically interesting one.



It is recurring because it models a large class of ultimatum situations, which arise in domains as diverse as biology, human relationships, economic behavior between firms, and international relations. When a male marks its territory, that is a kind of ultimatum; it is up to other males to accept it or reject it by fighting. When companies fight publicly, they usually send ultimatum offers through the press: "Unless Apple is willing to alter pricing behavior, NBC will stay out of iTunes". In fact, in many kinds of conflicting-interest scenarios, ultimatums are an important part of the bargaining process. The particular model studied here represents an important set of these situations, and is of great importance in the real world.

Moreover, it is also theoretically interesting, because humans do not respond as economic theory would predict. Quite the contrary: human behavior is enormously far from the expected rational behavior.



This fact has triggered an enormous amount of scientific interest in this game, and many different types of studies are now being conducted. In the table below we present a taxonomy/classification of such studies; this table characterizes our critical review of the literature.

There is not yet a consensus on why people deviate from the expected Nash equilibrium, but these deviations from rationality are informative about human cognition. Current economic theory is based on the normative model of decision-making: decision-making is treated as maximization of utility. However, if that cannot be expected to hold even in very simple scenarios, such as the one studied here, new mathematical models may eventually replace the standard "rational actor" model.

These new models should be as general and applicable as the standard rational actor model, but they should also be psychologically plausible. As we have seen, progress in understanding ultimatum bargaining is steady. In the coming decade, as new data and new models are discussed, a consensus may form; research on ultimatum bargaining may ultimately bring sweeping changes to the nature of economic theory.
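For readers who have never seen the game, here is a minimal sketch (my own toy illustration, not part of Anne's thesis) of why it is theoretically interesting: the "rational", subgame-perfect prediction is that the responder accepts any positive offer, so the proposer offers the minimum; real responders typically reject offers they consider unfair, leaving both sides with nothing.

```python
# Toy sketch of one round of the ultimatum bargaining game.
def ultimatum(pie: int, offer: int, min_acceptable: int):
    """Return (proposer_payoff, responder_payoff)."""
    if offer >= min_acceptable:
        return pie - offer, offer    # responder accepts the proposed split
    return 0, 0                      # rejection: both sides get nothing

print(ultimatum(pie=100, offer=1,  min_acceptable=1))    # (99, 1): the game-theoretic prediction
print(ultimatum(pie=100, offer=1,  min_acceptable=30))   # (0, 0): a typical human rejection
print(ultimatum(pie=100, offer=40, min_acceptable=30))   # (60, 40): a typically accepted offer
```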

Monday, March 3, 2008

Will psychology beat the traditional math methods?





The Netflix challenge will pay $1 million to anyone who improves Netflix's movie-recommendation system by 10%, i.e., achieves an error score below 0.8563. The best result so far is 0.8675, from the team When Gravity and Dinosaurs Unite, and the gains are getting smaller and smaller.


When no one expected it, "Just a guy in a garage" appeared as an outside contender. He is a psychologist who says he has out-of-the-box strategies, and that the others are suffering from a kind of "collective unconscious". His name is Gavin Potter: a 48-year-old Englishman, a retired management consultant with an undergraduate degree in psychology and a master's in operations research.


We are cheering for you, Gavin Potter.