Decay, Time and Rehearsal

The previous sections have produced a model which shows simple Primacy and Recency effects in a serial position curve. However, there is one major feature of the theory which we still haven't dealt with, namely Decay. The Long-Term Store is supposed to be of unlimited capacity but less-than-perfect reliability, whereas in the present model, if something gets into LTS it stays there until the end of the trial.

COGENT supports decay of items held in buffers by means of buffer properties. Open the LTS box and switch to its "Properties" view. There are two properties of interest here, the "Decay" property, which has three possible values -- None, Probabilistic and Fixed -- and the "Decay Constant", which is a number. The Decay Constant should be interpreted as a number of cycles. (The cycle is COGENT's basic unit of time.) Probabilistic and Fixed decay both use the value of the Decay Constant -- call it D -- but in different ways. In Fixed decay, items are deleted as soon as they have been in the buffer for D cycles, whereas in Probabilistic decay D specifies a half-life (as in radioactive decay), so that items may decay at any time, but the probability of their having decayed by the time they are D cycles old is 0.50.
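To make the distinction concrete, here is a minimal Python sketch of the two regimes. It only illustrates the arithmetic implied by the description above (COGENT's internal implementation may differ): under Probabilistic decay the probability that an item survives a given number of cycles is 0.5 raised to the power age/D.

    import random

    def survives_fixed(age, D):
        # Fixed decay: an item is deleted as soon as it is D cycles old.
        return age < D

    def survives_probabilistic(age, D):
        # Probabilistic decay: D is a half-life, so the probability that an
        # item is still present after `age` cycles is 0.5 ** (age / D).
        return random.random() < 0.5 ** (age / D)

    # With D = 20, about half of a large set of items survive 20 cycles.
    D = 20
    alive = sum(survives_probabilistic(20, D) for _ in range(10000))
    print(alive / 10000)   # roughly 0.5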

Set the Decay property of LTS to Probabilistic, and the Decay Constant to 20, then return to the graph view and run a block of 20 trials. You will (probably) see that the Primacy Effect has been greatly reduced, or even abolished completely, whereas the Recency Effect is still strong. Now if you increase the LTS Decay Constant, making items less likely to decay in the course of a trial, the size of the Primacy Effect should increase too.
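The reason the Primacy Effect suffers is that the words presented earliest spend the longest time in LTS before recall begins, so they are the most likely to have decayed. A rough back-of-the-envelope calculation makes the point, assuming one word per presentation cycle and a purely illustrative list length of 16 (your model's list length may differ):

    # Probability that a word transferred to LTS on presentation cycle t is
    # still in LTS when recall begins. The list length here is hypothetical.
    def p_survive_to_recall(t, n_words, D):
        age_at_recall = n_words - t        # cycles spent in LTS before recall
        return 0.5 ** (age_at_recall / D)

    n_words = 16                           # illustrative only
    for D in (20, 100):
        first = p_survive_to_recall(1, n_words, D)
        last = p_survive_to_recall(n_words, n_words, D)
        print(f"D={D}: first word {first:.2f}, last word {last:.2f}")

With D = 20 the first word presented has only about a 59% chance of still being in LTS when recall begins, whereas with D = 100 that chance rises to about 90%, which is why a larger Decay Constant brings the Primacy Effect back.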

Another way you can affect the performance of the model is to change the rehearsal rate. Remember that the experimenter sends one word per cycle during the memorisation phase, and that rehearsal transfers one item per cycle to LTS. If you copy the rehearsal rule, so that there are two identical copies, you double the rehearsal rate. Try it -- you should see another increase in the size of the Primacy Effect, as well as a rise in the level of the central portion of the curve. On a related note, you can also change the "Duplicates" property of LTS to the value "Yes". This allows multiple copies of the same word to be held in LTS, and should improve recall from LTS.
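In outline, the effect of these two changes can be sketched as follows. This is a hypothetical Python analogue of the rehearsal step, assuming rehearsal picks an arbitrary word from STS on each cycle; the real behaviour depends on how your rehearsal rule is written.

    import random

    def rehearse(sts, lts, rule_copies=1, duplicates=False):
        # Each copy of the rehearsal rule transfers one STS word to LTS per
        # cycle, so two identical copies double the transfer rate.
        for _ in range(rule_copies):
            if not sts:
                return
            word = random.choice(sts)
            if duplicates or word not in lts:
                lts.append(word)    # Duplicates = Yes keeps every copy

Faster transfer means more words reach LTS before the presentation phase ends; allowing duplicates presumably helps because a word with several copies in LTS needs only one of them to survive decay.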

These considerations about time suggest another way in which the model can be improved. Although the experimenter sends words to the subject serially, the subject currently does not recall serially. We would like the subject to recall words one at a time, on separate cycles, rather than in a parallel burst on a single cycle. Open the Input/Output box and edit the recall rule again. Check the "Rule fires once per cycle" and "Rule is Refracted" boxes. The first ensures that only one word will be recalled per cycle, and the second ensures that each word will be recalled only once. Then add a "Send" action, to send the trigger "recall" to the Input/Output box itself. This means that each time the rule fires, it sends a message to itself to try to fire again on the next cycle. (If you can't get a rule in a Process box to send a message to its own box, make sure that the Process's "Recurrent" property is set to "Yes".) The completed rule should then recall one word per cycle until every word in the stores has been produced.
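The three ingredients work together roughly like this. The sketch below is a Python analogue of the mechanism, not COGENT's rule notation, and the buffer contents are placeholders:

    def recall_phase(sts, lts):
        # Analogue of the modified recall rule: it fires at most once per
        # cycle, never recalls the same word twice (refraction), and keeps
        # sending itself the "recall" trigger until nothing is left.
        recalled = []
        trigger = True                          # the initial recall trigger
        while trigger:                          # one iteration = one cycle
            trigger = False
            remaining = [w for w in sts + lts if w not in recalled]
            if remaining:
                recalled.append(remaining[0])   # one word per cycle
                trigger = True                  # Send: recall -> Input/Output
        return recalled

    print(recall_phase(["kite", "lamp"], ["apple", "kite"]))
    # ['kite', 'lamp', 'apple'] -- "kite" is recalled only once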

This is the end of the tutorial. For more information, see the COGENT help system and web site. Some suggestions for further experimentation with the model are given below, if you want to take things further.

  1. You have already been introduced to a range of parameters that can affect the behaviour of the model, including capacity limitations, behaviour when that capacity is exceeded, decay type and rate, and rehearsal rate. Another property of interest is the Buffer Access property, which offers the options "FIFO" (First-In/First-Out), "LIFO" (Last-In/First-Out) and "Random", and controls the order in which items are read from buffers (a small sketch of the three access orders is given after this list). Feel free to play with these (and other) parameters to see how they affect the model's behaviour.
  2. Have a look around the Experimenter system. You'll see it's written using the same language as the Subject, as is the Graph Package. Try to discover how it works.
  3. If you feel confident, go on to develop the model into something substantial.
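As promised in point 1, here is a minimal sketch of what the three Buffer Access settings do to the order in which items are read from a buffer. It is only an illustration of what the option names suggest; check the COGENT documentation for the precise semantics.

    import random

    def read_order(buffer, access="FIFO"):
        # Order in which buffer items would be considered under each setting.
        if access == "FIFO":
            return list(buffer)             # oldest item first
        if access == "LIFO":
            return list(reversed(buffer))   # newest item first
        if access == "Random":
            items = list(buffer)
            random.shuffle(items)
            return items
        raise ValueError(access)

    print(read_order(["cat", "dog", "fish"], "LIFO"))   # ['fish', 'dog', 'cat']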