Tuesday, July 29, 2008

Can RASCALS become truly evil?

Greetings! This is my introductory entry and it’s a great honor to be able to contribute to this weblog - many thanks to Alexandre and the other colleagues from our local FARG group.

I’m an undergraduate student in Philosophy with a bit of (rather unsuccessful, unfortunately) background in Computer Engineering, deeply interested in cognitive science and in how its empirical research intersects with traditional problems in the philosophy of mind, my main obsession being human consciousness. I intend to provide interesting commentary spanning fields such as neuroscience, AI, and evolutionary psychology.

This year there has been some media coverage of a novel, ongoing AI research program: the RASCALS (acronym for Rensselaer Advanced Synthetic Character Architecture for “Living” Systems) cognitive architecture. Based in the Department of Cognitive Science at the Rensselaer Polytechnic Institute, RASCALS made its mark by deploying two functional avatars in the virtual environment of Second Life, the famous massively multiplayer online game and social networking community: Eddie, a 4-year-old boy, and Edd Hifend, a robot. Here's Eddie during a demo, facing a well-known experiment in developmental psychology:



RASCALS is logic-based AI with some unconventional twists. According to the researchers' Game On conference paper, the main ambition behind RASCALS is to design, at a relatively quick pace, autonomous agents that satisfy contemporary theories of personal identity, which is quite a hard task.

How does one design a synthetic person that doesn't merely perform evil acts but is genuinely evil? What does it take for an autonomous virtual agent to truly have a moral character, or at least a toy model of one? Merely exhibiting convincing, complex evil behavior, something several videogame characters can already accomplish, is insufficient. Moral character demands advanced conceptualization skills, rich knowledge representation, and a belief system, besides behavioral dispositions. The main theoretical virtual persona mentioned in the article, referred to as E, is modeled after a fictional cinematic post-apocalyptic warlord drawn from prominent antagonists in the entertainment industry (I suppose General Bethlehem from the motion picture The Postman is a good candidate). So how does one make E embody evil? The design team's strategy involves an adequate formal definition of evil; a way to deal with propositions such as the agent's goals, beliefs, and desires in an extremely expressive fashion; a contextually appropriate knowledge base; sufficient fluency in a natural language; and a believable presentation (for another demo, the RPI team designed a sophisticated facial expression system for head avatars).
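To give a concrete flavor of what formalized propositional attitudes plus a definition of evil might look like, here is a toy Python sketch. The predicates and the criterion for "evil" below are my own illustrative stand-ins, not the RPI team's actual formalization:

```python
from dataclasses import dataclass, field

# Toy BDI-style propositional attitudes. All predicate names here are
# hypothetical; this is NOT the actual RASCALS representation.

@dataclass
class Agent:
    name: str
    beliefs: set = field(default_factory=set)
    desires: set = field(default_factory=set)
    goals: set = field(default_factory=set)

def is_evil(agent: Agent) -> bool:
    """Toy criterion: the agent desires a harm, believes that harm to
    be wrong, and adopts it as a goal anyway."""
    for desire in agent.desires:
        if (desire[0] == "harm"
                and ("wrong", desire) in agent.beliefs
                and desire in agent.goals):
            return True
    return False

e = Agent("E")
e.desires.add(("harm", "the villagers"))
e.beliefs.add(("wrong", ("harm", "the villagers")))
e.goals.add(("harm", "the villagers"))
print(is_evil(e))  # True: E satisfies the toy definition
```

The point is simply that once goals, beliefs, and desires are explicit symbolic structures rather than scripted behaviors, "being evil" can become a checkable property of the agent's mental state instead of a label on its animations.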

The RASCALS logical inference system is pluralistic, encompassing probabilistic inference, for a better grasp of human-like reasoning, alongside standard forms of logical inference. Following a well-known (and virulently disputed) tradition in cognitive science and artificial intelligence, the architecture employs a language of thought: all cognitive phenomena are unified in a formal language, in this case a bundle of symbolic logics, with first-order logic sufficing for some processes while higher-level cognitive procedures use complementary logics such as epistemic and deontic logic. Communication is possible because this formal mentalese is converted into plain English by a Natural Language Module, a highly sophisticated process.
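To illustrate the general idea of layering logics, and only the general idea, since this sketch assumes nothing about the actual RASCALS implementation, here is a minimal forward-chaining engine in Python where epistemic ("believes") and deontic ("obligatory") operators sit on top of plain first-order-style facts:

```python
# A uniform set of symbolic facts: ordinary predicates plus epistemic
# ("believes") and deontic ("obligatory") operators encoded as tagged
# tuples, so one engine can chain across all three layers.
facts = {
    ("warlord", "E"),
    ("believes", "E", ("weak", "village")),
    ("obligatory", ("protect", "village")),
}

def rule_raid(facts):
    """FOL-style rule: a warlord intends to raid whatever it believes
    is weak."""
    new = set()
    for f in facts:
        if f[0] == "believes" and f[2][0] == "weak":
            agent, target = f[1], f[2][1]
            if ("warlord", agent) in facts:
                new.add(("intends", agent, ("raid", target)))
    return new

def rule_violation(facts):
    """Deontic rule: intending to raid a target one is obligated to
    protect counts as a norm violation."""
    new = set()
    for f in facts:
        if f[0] == "intends" and f[2][0] == "raid":
            if ("obligatory", ("protect", f[2][1])) in facts:
                new.add(("violates_norm", f[1]))
    return new

# Naive forward chaining to a fixed point.
changed = True
while changed:
    derived = rule_raid(facts) | rule_violation(facts)
    changed = not derived <= facts
    facts |= derived

print(("violates_norm", "E") in facts)  # True
```

The toy's moral: one symbolic substrate can chain across "believes" and "obligatory" just as it does across ordinary predicates, which is what makes a unified language of thought attractive for this kind of architecture.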

This brings us to another distinctive feature of RASCALS: the epistemic robustness of its agents. Merely arriving, via logical analysis, at the correct answer to a query posed in natural language is insufficient. For actual understanding (or quasi-understanding, to be charitable about the difficulties associated with intentional states), the agent should be able to justify those answers. The implication is that for every answer in natural language there is a corresponding justification in formal logic, grounded in the agent's knowledge base and reasoning capabilities, which can also be translated into natural language.
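A minimal sketch of this idea, with hypothetical names throughout: each derived fact records the rule and premises that produced it, so a justification can be read back out in English on demand. This illustrates the general technique of proof recording, not the RASCALS Natural Language Module itself:

```python
# Every derived fact carries its derivation, which can be rendered
# back into readable text, a crude stand-in for "epistemic robustness".

class Fact:
    def __init__(self, prop, rule=None, premises=()):
        self.prop, self.rule, self.premises = prop, rule, premises

    def justify(self, depth=0):
        """Render this fact's derivation tree as indented English."""
        pad = "  " * depth
        if self.rule is None:
            return f"{pad}{self.prop} (given)"
        lines = [f"{pad}{self.prop} (by {self.rule})"]
        lines += [p.justify(depth + 1) for p in self.premises]
        return "\n".join(lines)

socrates_is_a_man = Fact("Socrates is a man")
men_are_mortal = Fact("all men are mortal")
conclusion = Fact("Socrates is mortal", rule="universal instantiation",
                  premises=(socrates_is_a_man, men_are_mortal))

print(conclusion.justify())
# Socrates is mortal (by universal instantiation)
#   Socrates is a man (given)
#   all men are mortal (given)
```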

Next October, a RASCALS agent with a very large, still-in-development knowledge base will run on Blue Gene and interact with humans. However unimpressive those results may turn out to be (although optimists abound), this cognitive architecture, alongside the new wave of the digital entertainment industry, is rekindling the interest and enthusiasm that once permeated AI research in its ambition to confront and realistically model human-like behavior and cognition, in this case by functionalizing propositional attitudes.

Monday, July 28, 2008

America's long-term strategy over the US Dollar



Follow Zimbabwe, where 100 billion dollars can get you three full eggs.

May god bless America.

Hat tip.

Wednesday, July 9, 2008

Capyblanca is now open source (under GPL)

In 1995, Douglas Hofstadter wrote: "A visit to our Computer Science Department by Dave Slate, one of the programmers of Chess 4.6, at that time one of the world's top chess programs, helped to confirm some of these fledgling intuitions of mine. In his colloquium, Slate described the strict full width, depth-first search strategy employed by his enormously successful program, but after doing so, confided that his true feelings were totally against this type of brute-force approach. His description of how his ideal chess program would work resonated with my feelings about how an ideal sequence-perception program should work. It involved lots of small but intense depth-first forays, but with a far greater flexibility than he knew how to implement. Each little episode would tend to be focused on some specific region of the board (although of course implications would flow all over the board), and lots of knowledge of specific local configurations would be brought to bear for those brief periods."

That was, of course, many years before I would meet Doug.

How do chess players make decisions? How do they avoid the combinatorial explosion? How do we go from rooks and knights to abstract thought? What is abstract thought like? These are some of the questions driving the Capyblanca project. The name, of course, is a blend of José Raúl Capablanca and Hofstadter's original Copycat project, implemented by Melanie Mitchell, which brought us so many ideas. Well, after almost 5 years, we have a proof-of-concept in the form of a running program, and we are GPL'ing the code so that interested readers might take it in new directions we cannot foresee. Some instructions are in the paper, and feel free to contact me as you wish.
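For readers curious what Slate's "small but intense depth-first forays" might look like in code, here is a deliberately toy Python sketch of stochastic, salience-driven local probing. Every name and number in it is an illustrative placeholder; it conveys the flavor of the idea, not Capyblanca's actual mechanism:

```python
import random

random.seed(42)

# Each region of the board carries a hidden "tactical value"; a probe
# is a brief, noisy, local look at it. Regions whose probes look
# interesting get revisited more often, instead of one full-width
# search over the whole board.
regions = {"kingside": 0.8, "center": 0.1, "queenside": -0.6}

def probe(region):
    """One small, intense foray confined to a single region."""
    return regions[region] + random.gauss(0, 0.3)

salience = {r: 1.0 for r in regions}
estimates = {r: 0.0 for r in regions}

for _ in range(300):
    # Attention is allocated stochastically, biased toward hot regions.
    r = random.choices(list(regions), weights=list(salience.values()))[0]
    estimates[r] = 0.9 * estimates[r] + 0.1 * probe(r)
    salience[r] = 1.0 + abs(estimates[r])  # interesting regions stay hot

# The most salient region emerges from many cheap local episodes.
print(max(estimates, key=lambda r: abs(estimates[r])))
```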

The manuscript is under review at a journal, and a copy of the working paper follows below. Interested readers might also want to take a look at some of our previous publications in AI and Cognitive Science:

(i) Linhares, A. & P. Brum (2007), "Understanding our understanding of strategic scenarios: what role do chunks play?", Cognitive Science, 31, pp. 989-1007.

(ii) Linhares, A. (2005), "An active symbols theory of chess intuition", Minds and Machines, 15, pp. 131-181.

(iii) Linhares, A. (2000), "A glimpse at the metaphysics of Bongard Problems", Artificial Intelligence, 121(1-2), pp. 251-270.

Any feedback will be highly appreciated!

--Alex

Read this document on Scribd: Capyblanca paper under review