Tuesday, July 29, 2008

Can RASCALS become truly evil?

Greetings! This is my introductory entry and it’s a great honor to be able to contribute to this weblog - many thanks to Alexandre and the other colleagues from our local FARG group.

I’m an undergraduate student in Philosophy, with a bit of (rather unsuccessful, unfortunately) baggage in Computer Engineering, deeply interested in cognitive science and in how its empirical research intersects with traditional problems in the philosophy of mind; my main obsession is human consciousness. I intend to provide interesting commentary spanning fields such as neuroscience, AI and evolutionary psychology.

This year there has been some media coverage of a novel, ongoing AI research program: the RASCALS (acronym for Rensselaer Advanced Synthetic Character Architecture for “Living” Systems) cognitive architecture. Based in the Department of Cognitive Science at Rensselaer Polytechnic Institute, RASCALS was remarkable for deploying, in the virtual environment of the famous massively multiplayer online game/social networking community Second Life, two functional avatars: Eddie, a 4-year-old boy, and Edd Hifend, a robot. Here's Eddie during a demo, facing a well-known experiment in developmental psychology:



RASCALS is logic-based AI with some unconventional twists. According to the researchers' Game On conference paper, the main ambition behind RASCALS is to design, at a relatively quick pace, autonomous agents that satisfy contemporary theories of personal identity, which is quite a hard task.

How does one design a synthetic person that doesn't merely perform evil acts but is genuinely evil? What does it take for an autonomous virtual agent to truly have a moral character, or at least a toy model of one? Merely exhibiting convincing, complex evil behavior, something several videogame characters can already accomplish, is insufficient. Moral character demands advanced conceptualization skills, rich knowledge representation and a belief system, not just behavioral dispositions. The main theoretical virtual persona mentioned in the article, referred to as E, is modeled after a fictional cinematic post-apocalyptic warlord drawn from prominent examples of antagonists in the entertainment industry (I suppose General Bethlehem from the motion picture The Postman is a good candidate). So, how to make E embody evil? The design team's strategy involves an adequate formal definition of evil; a way to handle propositions such as the agent's goals, beliefs and desires in an extremely expressive fashion; a contextually appropriate knowledge base; sufficient fluency in a natural language; and a believable presentation (for another demo, the RPI team designed a sophisticated facial expression system for head avatars).
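To make the idea of a "formal definition of evil" over propositional attitudes concrete, here is a toy sketch of my own, not the RPI team's actual formalization: an agent counts as evil if it desires a harmful act, believes that act is wrong, and intends to carry it out anyway. All names and the definition itself are illustrative assumptions.

```python
# Toy sketch (NOT the RPI team's definition): "evil" as a predicate over
# an agent's propositional attitudes (beliefs, desires, intentions).
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    beliefs: set = field(default_factory=set)     # e.g. ("wrong", act)
    desires: set = field(default_factory=set)     # acts the agent wants
    intentions: set = field(default_factory=set)  # acts the agent plans

def is_evil(agent: Agent, act: str) -> bool:
    """Hypothetical test: desires the act, believes it wrong, intends it anyway."""
    return (
        act in agent.desires
        and ("wrong", act) in agent.beliefs
        and act in agent.intentions
    )

# A warlord like E, under this toy definition:
e = Agent("E")
e.desires.add("destroy_settlement")
e.beliefs.add(("wrong", "destroy_settlement"))
e.intentions.add("destroy_settlement")

print(is_evil(e, "destroy_settlement"))  # True
```

The point of even a toy predicate like this is that evil becomes a property of the agent's internal states, not of its outward behavior alone.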

The RASCALS logical inference system is pluralistic, encompassing probabilistic inference, for a better grasp of human-like reasoning, alongside standard forms of logical inference. Following a well-known (and virulently disputed) tradition in cognitive science and artificial intelligence, the architecture employs a language of thought: all cognitive phenomena are unified in a formal language, in this case a bundle of symbolic logics, with first-order logic sufficing for some processes while higher-level cognitive procedures use complementary logics such as epistemic and deontic logic. Communication is possible because formal mentalese is converted into plain English by a Natural Language Module, in a highly sophisticated process.
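A minimal sketch of how such a bundle of logics might share one symbolic substrate, purely illustrative and nothing like the actual RASCALS engine: first-order atoms are tuples, epistemic (`Believes`) and deontic (`Ought`) operators wrap them, and a few hand-written rules are forward-chained to a fixed point. All rule names and facts here are assumptions of mine.

```python
# Illustrative forward-chaining over mixed first-order / modal facts
# (not the RASCALS inference engine).

def forward_chain(facts, rules):
    """Apply every rule repeatedly until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new in rule(facts):
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

# Epistemic rule: agents come to believe what they perceive.
def perception_to_belief(facts):
    return {("Believes", f[1], f[2]) for f in facts if f[0] == "Perceives"}

# Deontic rule: if an act is wrong, one ought not to perform it.
def wrong_to_ought(facts):
    return {("Ought", "not", f[1]) for f in facts if f[0] == "Wrong"}

facts = {
    ("Perceives", "eddie", ("red", "ball")),
    ("Wrong", ("harm", "eddie")),
}
closure = forward_chain(facts, [perception_to_belief, wrong_to_ought])
print(("Believes", "eddie", ("red", "ball")) in closure)  # True
print(("Ought", "not", ("harm", "eddie")) in closure)     # True
```

The design point is that epistemic and deontic conclusions live in the same fact store as ordinary first-order atoms, so rules from different logics can feed each other.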

Here comes another distinctive feature of RASCALS: the epistemic robustness of its agents. Merely arriving, via logical analysis, at the correct answers to a query posed in natural language is insufficient. For actual understanding (or quasi-understanding, to be charitable about the difficulties associated with intentional states), the agent must be able to justify those answers. The implication is that for every answer in natural language there is a corresponding justification in formal logic, grounded in the agent's knowledge base and reasoning capabilities, which can also be translated into natural language.
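The answer-with-justification idea can be sketched as inference steps that record their premises, so any conclusion can be unwound into a human-readable proof chain. This is my own minimal illustration, with a hypothetical `KB` class, not the RPI implementation.

```python
# Sketch of "epistemically robust" answers (illustrative only): each derived
# fact stores the rule and premises that produced it, so the agent can
# replay a justification in rough natural language.

class KB:
    def __init__(self):
        self.facts = {}  # fact -> (rule_name, premises)

    def tell(self, fact):
        """Assert a given fact with no premises."""
        self.facts[fact] = ("given", ())

    def derive(self, fact, rule_name, premises):
        """Record a derived fact only if all its premises are known."""
        if all(p in self.facts for p in premises):
            self.facts[fact] = (rule_name, premises)

    def justify(self, fact, depth=0):
        """Unwind the proof tree into indented English-ish lines."""
        rule, premises = self.facts[fact]
        lines = ["  " * depth + f"{fact} (by {rule})"]
        for p in premises:
            lines.extend(self.justify(p, depth + 1))
        return lines

kb = KB()
kb.tell("All men are mortal")
kb.tell("Socrates is a man")
kb.derive("Socrates is mortal", "universal instantiation",
          ("All men are mortal", "Socrates is a man"))
print("\n".join(kb.justify("Socrates is mortal")))
```

Asked *why* Socrates is mortal, an agent over this store can emit the whole chain back to its given premises rather than a bare answer.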

Next October, a RASCALS agent with a very large knowledge base, still in development, will run on Blue Gene and interact with humans. However unimpressive those results may turn out to be (although optimists abound), this cognitive architecture, alongside the new wave of the digital entertainment industry, is rekindling the interest and enthusiasm that once permeated AI research in its ambition to face and realistically model human-like behavior and cognition, in this case by functionalizing propositional attitudes.

1 comment:

Alexandre Linhares said...

One thing that strikes me here is the word "evil". Overcoming Bias has a discussion about it, concerning the quote:

"The simple fact is that non-violent means do not work against Evil. Gandhi's non-violent resistance against the British occupiers had some effect because Britain was wrong, but not Evil. The same is true of the success of non-violent civil rights resistance against de jure racism. Most people, including those in power, knew that what was being done was wrong. But Evil is an entirely different beast. Gandhi would have gone to the ovens had he attempted non-violent resistance against the Nazis. When one encounters Evil, the only solution is violence, actual or threatened. That's all Evil understands."
-- Robert Bruce Thompson

I love Peter Turney's framing on comment #1.