Thursday, September 6, 2007

Drained concepts: today's presentation

The first thing I learned today was the amazing tool Linhares uses to post our classes. I think it's called SlideShare; I'd never seen it before. With this technology I was able to review last week's class, two concepts of which were referenced again today. I'm calling them "drained concepts".

The first of them is the famous "Neural Network" architecture. It is so widespread that even those who don't know it actually know it. In such cases, we never get free of it. Looking at Bongard problems, we wondered: "Could a NNet solve this problem? And this one?" Well, the fact is that NNets, up to now, aren't concerned with intuition, abstraction, or even cognition. The NNet was inspired by neurons, but it takes more of a mathematical/statistical approach: its network isn't a semantic network. In spite of that, we do have a well-known network of linked neurons. So one could say we are looking at "Neural Network 2.0", an extended NNet with CodeNeurons and a ConceptNetwork. Should it maybe be called a "Coconet"?
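The contrast drawn above can be sketched in a few lines of toy Python (my own illustration, not anything from the class): an artificial neuron is just a numeric function, while a semantic network stores labeled, meaningful relations. The weights, the AND example, and the bird facts are all hypothetical.

```python
def neuron(inputs, weights, bias):
    # A single artificial neuron: weighted sum passed through a step
    # function. Purely numeric -- no meaning attached to the wires.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Hand-picked weights that make this neuron compute logical AND;
# in practice the weights would be learned statistically from data.
AND_WEIGHTS, AND_BIAS = [1.0, 1.0], -1.5

# A semantic network, by contrast, is a graph of labeled edges whose
# nodes and links *are* the meaning.
semantic_net = {
    ("bird", "is-a"): "animal",
    ("bird", "can"): "fly",
}

print(neuron([1, 1], AND_WEIGHTS, AND_BIAS))  # AND(1, 1) -> 1
print(semantic_net[("bird", "can")])          # -> fly
```

The point of the toy: nothing in `neuron` knows what its inputs mean, whereas every entry of `semantic_net` is itself a piece of meaning. Gluing the two together is roughly what "Neural Network 2.0" would have to do.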

The second concept is that of "inconsistency", or whatever you might want to call it: paraconsistency, conflicting arguments, illogic, or semantic paradoxes. I think we can't in fact have inconsistency, because a real inconsistency has no solution. If we can implement it on a computer, we have to follow some logical reasoning. The word "inconsistency" is used only to carry (or to drain) concepts, maximizing information efficiency, the same way we are motivated to use the word "NNet".