Eric Nichols says:
One of the things I've been thinking about is finding some way to tell whether everything will decay to 0, whether some loop of activation can form, or whether things can explode and get super-activated and excited, like in a runaway nuclear reaction. I think you can get any of these behaviors, depending on the configuration of nodes and the parameters of your spreading activation.
I think the people in neural net research must have some sort of theory that describes these conditions for us. I wonder if there is a way to set reasonable parameter values (or bounds on parameters) to, for example, prevent any runaway reaction from ever occurring? Alex -- how did you set parameters for your activation decay model?
Tricky, tricky issue. I haven't been able to find any theory or solid model from neural network people. Melanie Mitchell, back in '93, was writing about careful parameter tuning in Copycat to prevent this:
The current version of the program always uses the intrinsic link length rather than the shrunk link length for this [spreading activation] calculation, even when the label node for this link is active. [...] When shrunk link lengths were used for spreading activation, the network tended to become too active. It is possible that a different mechanism (e.g., some kind of inhibition technique) should be used to control activation in the network. This is a topic for future work on Copycat. (p. 254 of "Analogy-making as perception")

In my current implementation of NUMBO, I'm using two inhibition techniques, so that what I have is, essentially, a closed system. The first "closed-system" technique is that a node does not spread its current level of activation, but only the increase it received in the last round. So if A sends x activation to B, B receives x times a drag factor (inversely proportional to distance), and may well propagate it back to A. But instead of B propagating its whole activation, only the increase travels onward. Think of a stone falling into a calm pool: the waves propagate outward at high intensity, reach the border, then re-propagate at smaller intensity, and so on.
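A minimal sketch of that first technique, assuming a simple synchronous update. The names (`Node`, `spread_round`, `drag`) and the exact drag formula `1/distance` are illustrative, not the actual NUMBO code:

```python
# Sketch: each node propagates only the *increase* it received last
# round, not its full activation level, damped by link distance.
# All identifiers here are hypothetical, not from NUMBO itself.

class Node:
    def __init__(self, name):
        self.name = name
        self.activation = 0.0
        self.delta = 0.0       # increase received in the previous round
        self.incoming = 0.0    # accumulator for the current round
        self.neighbors = []    # list of (node, distance) pairs


def spread_round(nodes, min_delta=1e-6):
    """One synchronous round: send last round's increase, damped by distance."""
    for node in nodes:
        if node.delta < min_delta:
            continue
        for neighbor, distance in node.neighbors:
            drag = 1.0 / distance  # inversely proportional to distance
            neighbor.incoming += node.delta * drag
    # Commit: the amount received now becomes next round's delta.
    for node in nodes:
        node.activation += node.incoming
        node.delta = node.incoming
        node.incoming = 0.0
```

With any distance greater than 1 the ripples shrink geometrically each round, so the total activation in the system converges instead of running away, exactly like the stone-in-the-pool picture.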
The second closed-system idea is that a node spreads its activation to its neighbors, but divided by the number of neighbors it is spreading to. So each neighbor really receives a very small amount of activation, unless it's coming from a boring node with few neighbors. I do, however, have a counterpart function: an amplification function. If a node is receiving activation from more than one node, then this amplification function multiplies that activation: the received total becomes sum(activation_received) * 1.1^n, where n is the number of sending nodes. The rationale behind this? We can call it the nonlinearity of curiosity.
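The fan-out division and the amplification function can be sketched in a few lines. The function names `send_share` and `receive` are made up for illustration; only the formulas come from the description above:

```python
# Sketch of the second technique and its counterpart.
# Outgoing: activation is split evenly among neighbors.
# Incoming: convergent activation is amplified by 1.1^(number of senders).
# Names are hypothetical, not the actual NUMBO identifiers.

def send_share(activation, num_neighbors):
    """Each neighbor's share of one node's outgoing spread."""
    return activation / num_neighbors


def receive(contributions):
    """Total activation received, amplified by the number of senders.

    contributions: one activation amount per sending node.
    """
    n = len(contributions)
    return sum(contributions) * (1.1 ** n)
```

So a node with ten neighbors sending 1.0 delivers only 0.1 to each, but a node that gets 0.1 from three different senders receives 0.3 * 1.1^3, a bit more than the plain sum: many weak converging signals count for more than one strong one.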
Yet, despite all these inhibition mechanisms, the slipnet is still prone to becoming wildly overexcited. Reminds me of Jenny Curran in her Berkeley days, and of course of those runaway nuclear reactions.