|COGENT Version 2.2 Help|
Associative networks are boxes consisting of a set of nodes joined by weighted connections. Nodes may be fully interconnected, as in interactive activation networks (i.e., activation is not restricted to feeding forward), and weights between nodes may be learnt in response to training signals, as in feed-forward networks. Associative networks are a generalised form of Hopfield network, and may be configured by setting the network's size, learning rule, learning rate, activation function, and so on.
Associative networks may be sent two types of messages: learn(Vector) and excite(Vector).
In both cases, the argument should be a list of numbers representing activation values of nodes. (If the network has 7 nodes, the list should contain exactly 7 elements.) The function of these messages is as follows: a learn(Vector) message causes the network's weights to be adjusted under the selected learning rule, with Vector acting as the training signal; an excite(Vector) message applies Vector as input to the network, with each node's activation updated according to

ai = f(Σj wji aj)

where f is the activation function, and wji is the weight from node j to node i.
In both of the above cases, multiple messages of the same type can be processed on a single cycle. In the case of multiple learn/1 messages, a single weight update will be applied representing the matrix sum of the weight updates from each individual message. In the case of multiple excite/1 messages, each input will be applied to the network in parallel (i.e., the vectors will be summed and then a single input applied to the network).
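As an illustrative sketch only (hypothetical helper names, not COGENT's implementation), the combining of multiple excite/1 messages on a single cycle might be expressed as follows, assuming a linear (identity) activation function for the demonstration:

```python
def excite(weights, inputs, f):
    """Apply all excite/1 inputs received on one cycle.

    weights[j][i] is the weight from node j to node i;
    inputs is a list of input vectors (one per excite/1 message);
    f is the activation function applied to each node's net input.
    """
    n = len(weights)
    # Multiple excite/1 messages are summed into a single input vector.
    summed = [sum(vec[i] for vec in inputs) for i in range(n)]
    # Each node's activation is f of its net input: a_i = f(sum_j w_ji * a_j).
    return [f(sum(weights[j][i] * summed[j] for j in range(n)))
            for i in range(n)]

# Identity weight matrix and identity activation, for demonstration only.
w = [[1.0 if i == j else 0.0 for i in range(3)] for j in range(3)]
acts = excite(w, [[0.2, 0.0, 0.1], [0.3, 0.5, 0.0]], lambda x: x)
```

With identity weights the resulting activations are simply the element-wise sums of the two input vectors, illustrating that the vectors are summed before a single input is applied.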
Associative networks may be matched by, for example, the conditions of rules and elements within user-defined conditions. If Vector is a variable, such a condition will succeed and bind Vector to a vector representing the state of the matched network (MyNet, say). This vector will be a list of numeric values, with elements in the list corresponding to individual nodes as in learn/1 and excite/1 messages.
Associative networks are highly configurable. A total of 14 properties govern their behaviour. Many of these parameters are similar in name and function to those of other network boxes. In detail, the properties are:
Initialise (possible values: Each Trial/Each Block/Each Subject/Each Experiment/Each Session;
default: Each Trial)
The timing of network initialisation is determined by this property. When the value is Each Trial, the network will automatically initialise itself at the beginning of each trial. When the value is Each Block, the network will initialise itself at the beginning of each block of trials (i.e., weights will be preserved across trials within a block). Similarly, when the value is Each Subject, weights will be preserved across simulated blocks. When the value is Each Experiment, weights will be preserved across subjects, and when the value is Each Session, weights will be preserved across experiments.
Min Act (possible values: any real number; default: -1.00)
This parameter specifies the minimum activation that an output node may achieve.
Max Act (possible values: any real number; default: 1.00)
This parameter specifies the maximum activation that an output node may achieve.
Size (possible values: any positive integer; default: 10)
This parameter specifies the number of nodes in the associative network.
Symmetric (possible values: yes/no; default: no)
This parameter specifies whether the network's matrix is symmetric (i.e., whether the weight of a connection from A to B is the same as the weight from B to A). Important classes of associative network (e.g., Hopfield networks) are symmetric.
Connectivity (possible values: any real number; default: 1.00)
This determines the proportion of the possible connections between nodes which are actually present in the network. A connectivity of 0.50 means that, on average, 1 in every 2 possible connections is present. Although it is possible in principle to set this parameter to any real number, in practice only values in the range 0 to 1 are sensible.
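The connectivity proportion could be realised by randomly masking connections; the following is a minimal sketch with a hypothetical function name, not COGENT's own code:

```python
import random

def connection_mask(size, connectivity, seed=0):
    """Return a size x size boolean mask in which each possible
    connection is present with probability `connectivity`."""
    rng = random.Random(seed)
    return [[rng.random() < connectivity for _ in range(size)]
            for _ in range(size)]

mask = connection_mask(10, 0.5, seed=42)
# With connectivity 0.50, roughly half of the 100 possible
# connections are present on average.
present = sum(sum(row) for row in mask)
```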
Act Function (possible values: linear/sigmoidal; default: linear)
This defines the activation function used to calculate node activations given their net input. As with feed-forward networks, two functions are possible: a linear function and a sigmoid function.
The precise shape of the function is determined by the values of the Act Slope and Act Midpoint properties.
Act Midpoint (possible values: any real number; default: 0.00)
This parameter partially determines the activation function by specifying its midpoint. If the net input to a node is equal to Act Midpoint, then the output of that node will be the average of Min Act and Max Act, whichever activation function is selected.
Act Slope (possible values: any real number; default: 1.00)
This parameter specifies the gradient of the activation function when the input to the function is that specified by Act Midpoint.
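The two activation functions can be sketched under one plausible reading of the descriptions above (output equal to the average of Min Act and Max Act at the midpoint, gradient equal to Act Slope there); COGENT's exact formulae may differ:

```python
import math

def linear_act(net, slope=1.0, midpoint=0.0, min_act=-1.0, max_act=1.0):
    """Linear activation: gradient `slope` around Act Midpoint,
    clipped to the [Min Act, Max Act] range."""
    mid_out = (min_act + max_act) / 2.0
    act = mid_out + slope * (net - midpoint)
    return max(min_act, min(max_act, act))

def sigmoid_act(net, slope=1.0, midpoint=0.0, min_act=-1.0, max_act=1.0):
    """Sigmoidal activation scaled to [Min Act, Max Act]; the logistic
    gain k is chosen so the gradient at the midpoint equals `slope`."""
    span = max_act - min_act
    k = 4.0 * slope / span
    return min_act + span / (1.0 + math.exp(-k * (net - midpoint)))
```

With the default property values, both functions return 0.00 (the average of -1.00 and 1.00) when the net input equals the midpoint of 0.00.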
Initial Weights (possible values: uniform/normal; default: uniform)
This parameter governs the shape of the initial weight distribution. If it is set to uniform, then on initialisation weights will be randomly selected from a uniform distribution whose lower and upper limits are given by the two following properties, Weight Parameter A and Weight Parameter B. If it is set to normal, then on initialisation weights will be randomly selected from a normal distribution whose mean and standard deviation are given by those same two properties.
Weight Parameter A (possible values: any real number; default: -1.00)
If Initial Weights is set to uniform, then this specifies the lower limit of the weight distribution. If Initial Weights is set to normal, then this specifies the mean of the weight distribution.
Weight Parameter B (possible values: any real number; default: 1.00)
If Initial Weights is set to uniform, then this specifies the upper limit of the weight distribution. If Initial Weights is set to normal, then this specifies the standard deviation of the weight distribution.
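How the distribution shape and the two weight parameters interact might be sketched as follows (hypothetical helper; the default arguments mirror the property defaults above):

```python
import random

def initial_weights(size, shape="uniform", param_a=-1.0, param_b=1.0, seed=0):
    """Sketch of initial weight generation.

    uniform: param_a / param_b are the lower / upper limits;
    normal:  param_a / param_b are the mean / standard deviation.
    """
    rng = random.Random(seed)
    if shape == "uniform":
        draw = lambda: rng.uniform(param_a, param_b)
    else:
        draw = lambda: rng.gauss(param_a, param_b)
    return [[draw() for _ in range(size)] for _ in range(size)]

# Defaults: 10 nodes, uniform weights in [-1.00, 1.00].
w = initial_weights(10)
```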
Learning Rule (possible values: delta/Hebbian; default: delta)
Associative networks are capable of either delta-rule learning or Hebbian learning. The value of this parameter controls the learning algorithm employed by any specific network. In Hebbian learning, weights between units are strengthened when those units have similar activation values (see, e.g., Chapter 3 of Hertz, Krogh & Palmer, 1991). In delta-rule learning, the weight from node A to node B is adjusted by an amount proportional to the difference between the target value of B and its actual value given its excitation from other nodes in the network (see, e.g., Chapter 4 of McLeod, Plunkett & Rolls, 1998).
Learning Rate (possible values: any real number greater than 0; default: 0.10)
This parameter is used in the calculation of weight changes. In general, a high learning rate means that the weight matrix responds more quickly to input-output training pairs, but may result in the network being insufficiently sensitive to its past training history.
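The two learning rules might be sketched as follows (hypothetical helpers; a linear identity activation is assumed for the delta rule, so a node's actual value equals its net input, and COGENT's actual update equations may differ):

```python
def hebbian_update(weights, acts, rate=0.1):
    """Hebbian sketch: strengthen w_ji in proportion to the product
    of the two nodes' activations (co-active nodes -> stronger link)."""
    n = len(weights)
    return [[weights[j][i] + rate * acts[j] * acts[i]
             for i in range(n)] for j in range(n)]

def delta_update(weights, acts, targets, rate=0.1):
    """Delta-rule sketch: adjust w_ji in proportion to node j's
    activation times the error at node i (target minus the node's
    value given its excitation; identity activation assumed)."""
    n = len(weights)
    net = [sum(weights[j][i] * acts[j] for j in range(n)) for i in range(n)]
    return [[weights[j][i] + rate * acts[j] * (targets[i] - net[i])
             for i in range(n)] for j in range(n)]

w0 = [[0.0, 0.0], [0.0, 0.0]]
w_hebb = hebbian_update(w0, [1.0, 1.0])
w_delta = delta_update(w0, [1.0, 0.0], [0.0, 1.0])
```

Note that the Hebbian update is symmetric when applied to a zero matrix, which fits naturally with the Symmetric property described above; the delta-rule update is not symmetric in general.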
Windows corresponding to associative networks include a Current State page in their notebook. This page is a dynamically updated graphical display of the state of the network, including activations of individual nodes and the weight matrix. The activation portion of the viewer resembles the activation view of interactive activation networks. The weight matrix portion of the viewer resembles the weight matrix view of feed-forward networks.