It's January 2008. We wish you all a great new year, and first and foremost we wish ourselves a great new year, because we have big plans. One goal is to improve productivity in developing FARG models by a large factor, perhaps tenfold. If Capyblanca took 5 years to develop, and it still has its rough edges, a major new rewrite based on the framework codebase should take 5x12 months div 10, which equals 6 months, which seems about right.
Note that there is only one problem here.
Like honest men, calm women, and fire-breathing dragons, that codebase does not exist.
But we have a vision for it, and the vision is beautiful. It's as elegant as Giselle on Armani. I think some words of explanation are needed here: Why is it beautiful? And why is that important?
It's closed for modification and open for extension. Which means two things. First, basic FARG functionality will be provided, for free, with no need to deal with its internals (slipnets, codelets, temperature, coderack, maybe even hedonic feedback regulation will make it to v. 1.0). Second, our evolving system is based on the notion of connotations, and the idea of connotation explosion is explored in full. All you should need to do to create a new FARG system is develop the right connotations. For Copycat, those might be "Letter B", "First", "three", "Sameness group", "Successorship group", "opposite", and so on. Write some code for these, and the system should run, beautifully.
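To make the idea concrete, here is a minimal sketch of what "open for extension, closed for modification" could look like in practice. Every name here (`Connotation`, `spread`, the Copycat subclasses) is hypothetical, not the framework's actual API: the point is only that the framework owns the machinery, and a new domain is written purely as connotation definitions.

```python
class Connotation:
    """Hypothetical framework-provided base class.

    The framework would own activation dynamics, slipnet links,
    codelet posting, and so on; domain authors never touch these.
    """

    def __init__(self, name: str):
        self.name = name
        self.activation = 0.0

    def spread(self, amount: float) -> None:
        # Stand-in for the framework's real activation-spreading
        # internals (decay, linked-node propagation, etc.).
        self.activation = min(1.0, self.activation + amount)


# Extending to the Copycat domain means only defining connotations:
class LetterB(Connotation):
    def __init__(self):
        super().__init__("Letter B")


class SuccessorshipGroup(Connotation):
    def __init__(self):
        super().__init__("Successorship group")


if __name__ == "__main__":
    b = LetterB()
    b.spread(0.3)
    print(b.name, b.activation)  # Letter B 0.3
```

The design choice this illustrates is the classic open/closed principle: the base class is never edited, only subclassed, so the framework's internals stay stable while the set of domains grows.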
If that ever happens, and the codebase is robust (i.e., it accepts new domains without having to rewrite its internals), then that codebase should be getting closer and closer to the truth. That being something the likes of figure 49 of Harry's thesis--note, as an aside, that in the paper I've linked here I think that Harry underestimates Jeff Hawkins.
But it is beautiful, also, because it employs recursion, symmetry, and polymorphism in new, very new, ways. I think even people interested in programming languages might be interested in seeing what we are coming up with; it has some repercussions in that arena. But that's only for when we have a good, solid working model. So our first task for this year is to delve into hedonic feedback and autoregulatory learning (i.e., autoprogramming), and a connotation transfer-based form of programming. After we have some reports on these, we should be able to enjoy large gains in productivity.
And as any economist will tell you, a productivity gain is one of the greatest things you can ever achieve in your endeavors. Large productivity gains change everything. There were cars before Henry Ford, and there was such a thing as a "world-wide web" before Netscape. But those brought productivity gains, changing "the curve of the curve". This is what FARG needs. So here's some reasoning for optimism, in spite of the ugly economic downturns ahead:
- FARG changes everything; with it we have a scientific model, and an agenda, for understanding what understanding itself is all about.
- BUT FARG is hard to implement. Very hard to build. Nasty problems abound.
- A large productivity gain in FARG development could bring massive change on this second point; and
- If that gain comes through a solid codebase, the tough constraints imposed on the codebase to reflect human cognition should, in the long run, provide a better perspective towards a full theory of human high-level cognition.