|COGENT Version 2.2 Help|
Interactive activation network boxes contain nodes with associated activation values. Nodes within an interactive activation network may be created or deleted. Existing nodes may be excited or inhibited. Their activation values may also be queried by conditions within processes, and a special purpose dynamically updated viewer allows activations to be monitored during model execution.
The initial contents of an interactive activation network are the set of (names of) nodes initially contained in that network. Initial activation values are set with reference to the box's properties.
Nodes may be added to an interactive activation network at any stage in processing through the use of add actions (e.g., within data sources or rules contained in a process):
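A hedged sketch of such a rule follows. The box names (Input, Word Net), the rule text, and the term word(Word) are illustrative assumptions, not fixed parts of COGENT:

    Rule 1 (refracted): Create a node for each word read from Input
    IF:   word(Word) is in Input
    THEN: add Word to Word Net

On each cycle on which the condition matches, the add action creates a node in Word Net named by the binding of Word.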
Nodes may be deleted from an interactive activation network at any stage in processing through the use of clear actions, delete actions, and delete all actions:
The first of these removes all nodes from a network. The second deletes one node (whose name unifies with NodeName). The third deletes all nodes whose names unify with NodeName.
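The three deletion actions might appear in rules along the following lines. The exact action syntax shown here, and the box name Word Net, are assumptions for illustration:

    THEN: clear Word Net                   (removes every node)
    THEN: delete Word from Word Net        (removes one node unifying with Word)
    THEN: delete all Word from Word Net    (removes all nodes unifying with Word)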
Interactive activation networks will normally co-exist with a process which sends appropriate excite messages to control the node activations:
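One plausible form of such a rule is sketched below; it assumes the network accepts messages of the form excite(Node, Excitation), and the rule text and box names are illustrative:

    Rule 2 (unrefracted): Excite the node for the current stimulus
    IF:   stimulus(Word) is in Input
    THEN: send excite(Word, 0.10) to Word Net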
In such a message, Excitation should be a numeric quantity which specifies the level of excitation to apply to the node. If the quantity is negative, it will be interpreted as inhibition.
On each processing cycle, the total excitation to each node within the network is summed. This net excitation is then used, along with the current activation and the network's activation properties, to determine the new activation of each node.
In addition to external input, node activations may be subject to internal competition, using Lateral Inhibition and Self Activation. Each of these can be configured by additional properties, allowing them to be enabled separately, scaled appropriately, and to use alternative baselines. Lateral Inhibition may be configured to use any of several different functions, and it may also be calculated over the whole network or within sub-networks which partition the set of nodes in the box into several effectively separate networks.
If Subnet competition is selected within the network's properties, NodeName must be specified as a binary structure using the slash operator ("/"), in which the first argument is the name of the node and the second is the name of the sub-network, e.g., mynode/mysubnet. This node will then only compete with others belonging to the mysubnet sub-network.
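For example, a rule action that excites a node within a sub-network might take the following form (the message form and box name are illustrative assumptions):

    THEN: send excite(mynode/mysubnet, 0.10) to Word Net

Only nodes whose second argument is mysubnet would then compete with this node.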
The last operation available on networks is matching. The activation value of a node may be queried by matching against the node's name:
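A hedged sketch of such a condition is given below; the exact form of the match term (here node(Name, Activation)) is an assumption, while Name and Activation are ordinary variables:

    IF:   node(Name, Activation) is in MyNet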
The above code segment may occur in the conditions of any rule or condition definition that can read MyNet. Normally Activation will be an uninstantiated variable, and execution of the condition will bind that variable to the activation of the node whose name is bound to Name. As with buffers, the match operation will attempt to re-satisfy if any part of the node definition is uninstantiated.
Interactive activation networks are highly configurable. A total of sixteen properties govern their behaviour:
Max Act (possible values: any real number; default: 1.00)
This parameter specifies the maximum activation that any node may obtain.
Min Act (possible values: any real number; default: -1.00)
This parameter specifies the minimum activation that any node may obtain.
Rest Act (possible values: any real number; default: 0.00)
This parameter specifies the rest activation to which nodes revert in the absence of excitation.
Persistence (possible values: any real number; default: 0.90)
This parameter specifies the degree to which activation values persist in the absence of excitation. A persistence of 1.00 will lead to nodes maintaining their current activation in the absence of further excitation. A persistence of 0.00 will lead to nodes reverting immediately to rest activation in the absence of excitation.
Update Function (possible values: MR/GH/CS; default: MR)
The basic principle of network operation is that the activation of each node is updated on each cycle. The activation of each node is affected by its net input, its activation prior to that input, the persistence parameter, and the update function. (Initial activations, prior to any input, are determined by the parameters specified above.) Several update functions are used in the literature. The following functions are available in COGENT:
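As an illustration, an MR-style (McClelland and Rumelhart) update is commonly written as follows. Treat this as a sketch of the general scheme rather than COGENT's exact implementation, since the precise formula, and the role of Persistence within it, are assumptions here:

    if Net > 0:  Delta = Net * (Max Act - Act)
    else:        Delta = Net * (Act - Min Act)
    NewAct = Act + Delta - (1 - Persistence) * (Act - Rest Act)

Under this sketch, with no input (Net = 0), Persistence = 0.90 and Rest Act = 0.00, a node at activation 0.50 would decay to 0.45 on the next cycle, consistent with the description of Persistence above.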
Initialise (possible values: Each Trial/Each Block/Each Subject/Each Experiment/Each Session; default: Each Trial)
This property determines the timing of network initialisation. When the value is Each Trial, the network will automatically initialise itself at the beginning of each trial. When the value is Each Block, the network will initialise itself at the beginning of each block of trials (i.e., activations will be preserved across trials within a block). Analogously, when the value is Each Subject, Each Experiment and Each Session, activations will be preserved across simulated blocks, subjects and experiments respectively.
Initial Acts (possible values: uniform/normal; default: uniform)
This parameter governs the shape of the initial activation distribution. On initialisation, activations are randomly selected from either a uniform distribution (if the value is uniform) or a normal distribution (if the value is normal). The parameters of the distribution are given by the two following properties, Act Parameter A and Act Parameter B: the minimum and maximum possible activations in the uniform case, or the mean and standard deviation in the normal case.
Act Parameter A (possible values: any real number; default: -1.00)
If Initial Acts is set to uniform, then this specifies the lower limit of the activation distribution. If Initial Acts is set to normal, then this specifies the mean of the activation distribution.
Act Parameter B (possible values: any real number; default: 1.00)
If Initial Acts is set to uniform, then this specifies the upper limit of the activation distribution. If Initial Acts is set to normal, then this specifies the standard deviation of the activation distribution.
Self Influence (possible values: true or false; default: false)
If Self Influence is enabled, self activation will be calculated using the values of the Self Parameter and Self Baseline properties.
Self Parameter (possible values: any real number; default: 0.00)
The Self Parameter is a scaling factor which is multiplied with the self input (see Self Baseline) to give the raw self activation.
Self Baseline (possible values: Min act, Rest act; default: Rest act)
The setting of the Self Baseline governs how self activation is calculated. If Rest act is selected, the value of Rest Act is subtracted from the node's current activation before multiplying by Self Parameter to give the raw self activation, so self activation can be positive or negative, depending whether the current activation is greater or less than rest. If Min act is selected, the value of Min Act is subtracted before multiplication by Self Parameter, so self activation always has the same sign as Self Parameter (typically positive).
Lateral Influence (possible values: None, Whole net, Subnet; default: None)
Lateral Influence can be enabled in two modes, or disabled. If Whole net is selected, all nodes in the box contribute to each node's inhibition. If Subnet is selected, each node is inhibited only by other members of its sub-network.
Lateral Parameter (possible values: any real number; default: 0.00)
The Lateral Parameter is a scaling factor which is multiplied with the output of the Lateral Function to give the raw Lateral Inhibition.
Lateral Function (possible values: Sum, Mean, Max; default: Sum)
Lateral Function determines how inputs from competitor nodes are combined. If Sum is selected, inputs are summed; if Max is selected, the maximum individual influence is used; if Mean is selected, inputs are averaged.
Lateral Baseline (possible values: Min act, Rest act; default: Rest act)
The setting of Lateral Baseline governs how the individual lateral influences from competitor nodes are calculated before they are combined by the Lateral Function. As with Self Baseline, it selects the baseline value used: each competitor's current activation is subtracted from the baseline (either Rest Act or Min Act), and this choice determines whether lateral influences can be both positive and negative (Rest act), or always of the opposite sign to Lateral Parameter, i.e., normally negative (Min act).
Windows corresponding to interactive activation networks include an Activation Graph page in the notebook. This page is a dynamically updated graphical display of node activations. The view shows the activations of nodes as coloured bars. If Subnet competition is selected, different colours are used for different sub-networks.