
Chapter 1 Lecture Programme & Introduction

Lecture Programme

Reading

The main textbook is J.M. Pearce (1997), Animal Learning and Cognition, Psychology Press; however, you may also find the first edition helpful. A newer book I would also recommend is C. Wynne (2001), Animal Cognition, Palgrave Macmillan, Basingstoke.

Other useful books are the Annual Review of Psychology, and the Psychology of Learning & Motivation. Both have scholarly chapters written by current researchers and over the last 10 years have included some valuable reading.

The most useful journals which have comparative sections are the Journal of Experimental Psychology and the Quarterly Journal of Experimental Psychology.

Further reading will be suggested throughout the course.

Course Structure

The outline of the course is described below.

Lecture  Content
1        Introduction
         Instrumental Conditioning - Associative Conditioning
2        Comparative Theories of Learning
3        Comparative Theories of Learning - Imitation
4        Complex Stimuli & Concepts
         Rescorla & Wagner's theory (no longer taught - on web)
5        Concepts part II
6        Learned Helplessness
7        Optimism
8        Biological Constraints & Preparedness
9        Phobias
10       Revision
12       Recapitulation

The key skills which will be practised include: 1.1, 1.2, 1.3, 2.2, 2.3

The first section of the course provides the background source material on approaches to animal learning. The textbook will be the main reading material, although the lectures will provide additional topics.

The latter part of the course examines more applied features of learning, in particular how the perspective deriving from animal learning can provide an understanding of human disorder (depression and phobia).

The second lecture slot will be used to show videos; check the noticeboard for more information.

There is a three week practical session where students will undertake research in groups.

The practical programme begins on Thursday 21st January.

Week  Content
2     Selecting a topic; determining the practical programme
3     Running the practical
4     Analysing the data; oral presentation

The additional key skills (beyond those above) which will be practised include:

1.5, 1.7, 2.1, 3.2, 4.1, 4.3, 5.1, 5.3, 6.2, 6.3

Examinations

Practical 25% learning (25% memory)

Groups of students to produce a set of web page resources.

Examination 50%

2 hours; answer one question on learning and one on memory.

Introduction

This course on learning has a few hurdles that many students find hard to negotiate. The first is that much of the research, especially in the first half, is based on animals, and most people surely do psychology because they are interested in people. The second is that the research can be viewed either as sophisticated or, alternatively, as confusing and difficult. A related problem is that some of the research is mathematical or computational; although this appeals to some students, most find it unappealing. Finally, students sometimes find the material too abstract and cannot see its relevance to their own lives.

Whether these criticisms are valid can only be judged by the end of the course, but the material is important. Learning has historically played a crucial part in the development of psychology, and most of the early research was indeed on animals. If this were the only rationale it would be a rather weak argument: the psychology we investigate cannot be justified purely in historical terms, but must inform us now and help us understand behaviour. However, research is still being carried out after all these years, and many new insights and developments are still emerging.

The sophistication of the research is partly necessitated by the fact that most of it is carried out on animals, where only simple responses are possible; if difficult ideas are to be investigated, more interesting and imaginative designs are required to answer the questions. One benefit is that students can see how original psychologists have to be in their research. Because the subjects are non-human, we are also encouraged to view the behaviour as simple and to avoid over-elaborate explanations. The issue of whether we should invest animals with human attributes (anthropomorphism) will be highlighted, but perhaps more important is the extent to which we are anthropocentric (we can only view animals as if they were partly human).

The handbook also describes some formal models which students may find difficult, but this in itself should not deter us from investigating them. The mathematics is no harder than that covered in the statistics course, and so should be within students' comprehension; moreover, the approach shows how some psychologists have, perhaps unsuccessfully, proposed solutions that can be tested by modelling as well as by experimentation. This is a more recent development in psychology. Many of the models can be described by connectionist architectures, and the course can be considered an introduction to some of these ideas.

The latter part of the course looks at applications of the approach and tries to contrast the learning of animals (and humans) with the kind of learning emphasised in the memory course. The learning that occurs in animals may differ from learning derived from language, but it may be the more important kind: it has survival value, it may take place outside consciousness, and it may be influential in abnormal psychology.

Chapter 2
Instrumental Conditioning


Objectives.

After you have read the material you should be able to:


If one considers learning across all species, are there any similarities? Surprisingly, there are, since most species (at least those that can move) face a common problem: determining cause and effect. Clearly it would be advantageous if animals could learn about simple cause and effect, and the two most important theories in learning were developed to understand how this may come about. The first tries to explain how animals might relate actions to their success or failure. Early writers often ascribed human thought processes and emotions to animals, but the beginning of a more scientific approach led Thorndike to suggest a much simpler account, in which learning depended on only a few simple processes. Skinner championed this approach for many years, and the insights are still relevant in therapeutic settings such as token economies, and in applied areas such as training animals for films.

This chapter presents a few examples of the processes involved, then looks at the challenge posed by biological significance, and finally at the conflict between alternative explanations of the process.

Definitions

It might be as well to begin by defining a few terms.

Reinforcer will be used frequently, and there are operational definitions of the term.

A positive reinforcer is an event which increases the probability of the response when the event follows a response (e.g. sweet after good act) whereas a negative reinforcer is an event which increases the probability of the response when the event is removed. (e.g. moving away when an alarm sounds)[1]

Table 1.1 describes reinforcers and punishment.

                         Reinforcement                   Punishment
                         (response rate increases)       (response rate decreases)
Positive (event added    receives pellet for             receives shock for
to environment)          pressing lever                  pressing lever
Negative (event          pressing lever stops shock      pressing lever gives time out
removed)

Table 1.1

The Law of Effect

Thorndike observed how animals learn to escape from a cage and argued that, far from the animals having any cognitions, a simple process of trial and error, together with a mechanism for strengthening associations, could account for learning. It is possible to try to explain the learning by Pavlovian conditioning (see Pearce, ch. 4), but generally the responses produced during instrumental learning are not like appetitive responses.

What is the mechanism in this form of conditioning? Is it simply contiguity (events occurring together in time)? Skinner's superstition experiment seems to support this idea: what was learned depended only on what the animal happened to be doing at the time. However, later research showed that the behaviours were largely pecking at the wall near the food, suggesting the animals were anticipating the food. (Figure 2.1)

The figure shows the behaviour change over the timespan of a schedule

Figure 2.1

Thorndike suggested several laws of learning.

Law of Effect (Thorndike)

Any response in a given situation which results in a satisfying state of affairs becomes associated with that situation and is thus likely to be repeated when the animal is next in that situation[2]


Law of Exercise

There is a strengthening of connections or associations with practice.


Look again at Table 1.1. Notice that under reinforcement there is an increase in behaviour after a satisfying state of affairs, as the Law of Effect states. Punishment occurs when a less than satisfying state of affairs follows a response. Time out, by the way, is a period during which no reinforcement can be gained whatever the response. If we follow Thorndike's logic, there is no anticipation of the future: the Law of Effect states that behaviour is changed by reinforcements that occur after the action, with no suggestion of any higher-level cognitions. It may seem reasonable to suppose this S-R learning in simple organisms like amoebae, but perhaps higher organisms, mammals say, have more sophisticated learning. A problem for the Law of Effect is that it implies there should be no anticipation of the goal; the learning mechanism is passive, since all the animal does is make a response in certain situations. Tinklepaugh (1928), however, showed that substituting a lettuce for an expected banana produced disappointment, which appears to show that the animals anticipate something, providing evidence of cognitions.

Tolman argued against Thorndike, claiming that what is learned is an association between the response and the next reinforcer, an R-US connection. His theory is more cognitive, since he argued that an expectancy is important. Pearce describes some research which explores this. The suggestion is that devaluing a reinforcer will only have an effect if the animal anticipates that reinforcer; if there is no anticipation, the manipulation will have no effect.

Devaluation Experiments

Colwill & Rescorla (1985) provide evidence for this with their devaluation experiment. Rats learned to press a lever (R1) for one reinforcer, food (US1), and to make a different response, pulling a chain (R2), for a second reinforcer, sucrose (US2). The rats were then made averse to the food (US1) by pairing it with an emetic. The rats were then allowed to make either response, with no reinforcement given.

Training                          Devaluation   Test
R1 -> US1 (press lever - food)    US1 + LiCl    R2 > R1
R2 -> US2 (pull chain - drink)

The chain-pulling response was more vigorous, as would be predicted with an R-US connection (Pearce, p. 85). It seems devaluation leads to a change in expectations.

Chaining, S -> R-US, has also been demonstrated.

Figure 2.2

Pearce (p. 87) describes another experiment by Colwill & Rescorla (1990).

Here rats received discrimination training: a light or a tone signalled different response-reinforcer relations. The design is best understood from the table below.

Cue    Discrimination Training    Devaluation    Test
light  press lever - food         drink + LiCl   light: lever > chain
       pull chain - drink
tone   press lever - drink        drink + LiCl   tone: chain > lever
       pull chain - food

What this experiment shows is that with the light cue (S1), devaluation led to a reduction in chain pulling, the response associated with the devalued reinforcer, drink. But with the tone cue (S2), where lever pressing had been paired with the devalued reinforcer, the more vigorous response was now chain pulling. The animal cannot have been guided simply by an R-US association, otherwise this differential effect could not have occurred. There must also be an association to the cue: an S-(R-US) connection.

These two experiments show how psychologists have continued to attempt to identify the processes underlying a type of learning that is visible in many animals, from the simplest to the most complex.

Another problem with Thorndike's Law of Effect is that it appeals to a "satisfying state of affairs", and this is tautological.


i.e. When is something reinforcing? When it leads to a satisfying state of affairs. How do we know it is satisfying? Because it is reinforcing.


Premack developed the Premack principle, which suggests that reinforcement is relative: a more probable activity can reinforce a less probable one, i.e. a response must be followed by a preferred response for learning to occur. If chocolate is preferred to rice pudding, then chocolate can act as a reinforcer for eating rice pudding, but not the other way around. However, Timberlake and Allison (1974) argued that each activity has its own equilibrium or "bliss point", so under some circumstances even the less preferred activity (eating rice pudding) can act as a reinforcer.
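Timberlake and Allison's point can be sketched as a simple response-deprivation check (a hedged illustration, not their exact formulation; all numbers are invented): an activity acts as a reinforcer when the schedule would restrict it below its free-baseline level.

```python
def acts_as_reinforcer(required, earned, base_required, base_earned):
    """Response-deprivation sketch: the contingent activity reinforces the
    instrumental one when the schedule's required/earned ratio exceeds the
    free-baseline ratio, i.e. the schedule deprives the animal of the
    contingent activity relative to its 'bliss point'."""
    return (required / earned) > (base_required / base_earned)

# Free baselines (invented): 10 min of chocolate, 2 min of rice pudding.
# A schedule demanding 10 min of chocolate-eating per 1 min of rice pudding
# deprives the animal of rice pudding, so even the less preferred activity
# can reinforce the preferred one.
print(acts_as_reinforcer(10, 1, 10, 2))   # True
print(acts_as_reinforcer(10, 2, 10, 2))   # False: schedule matches baseline
```

The design choice here is that reinforcement is defined by the schedule relative to baseline, not by any fixed property of the activity, which is exactly the point against the simple Premack ordering.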

Motivation

The vigour of responding partly depends on deprivation. Hull developed grandiose theories to try to explain behaviour on this principle. One of his tenets was that an intervening variable, drive, determined the rate of learning. He further argued that drive was not linked to the specific deprivation: if an animal is learning to press a lever for food, then making it thirstier should enhance the force of the response. Hull's theory was found wanting in some respects, especially in research involving animals directly stimulating areas of the brain. More modern theories assume two motivational systems: one energising, and one that minimises aversive stimulation.

References

Pearce, J.M. (1997). Animal Learning and Cognition (ch. 4). Hove: Psychology Press.

Wynne, C. (2001). Animal Cognition (chs. 1, 3). Basingstoke: Palgrave Macmillan.

URLs:

http://academic.brooklyn.cuny.edu/psych/delam/53.1/ basics of conditioning

http://www.nerdbook.com/sophia/chickens.html chicken training

http://www.nerdbook.com/sophia/Movies/movie2.html?mov=37hi.mov&num=37
instrumental conditioning

Questions to answer

Supporting evidence:

Challenging evidence:

Chapter 3
Associative Conditioning.


Objectives.

After you have read the material you should be able to:


Pavlov first described the conditioning of a dog's salivation to a tone. This type of conditioning has several names, but nowadays it is often called associative conditioning. Associative conditioning involves the pairing of a previously neutral stimulus, the CS (say, a tone), with an unconditioned stimulus, the US (food or shock). Although at first sight this may appear a rather irrelevant aspect of learning as far as humans are concerned, we will see that it is crucial in one explanation of the genesis and maintenance of neurosis. Classical conditioning can be observed in many animals, from very simple species like snails to complex animals like humans. (Wynne, 2001, p. 37)

Acquisition and Extinction

Figure 3.1

The above figures show the usual acquisition and extinction curves obtained with excitatory conditioning (here, rabbits conditioned with shock).

Acquisition occurs when the tone is repeatedly paired with the UCS (shock). Pavlov realised that this seemingly simple phenomenon could explain much animal behaviour. Rather than the animals having cognitions they learn by a fairly simple mechanism dependent on the pairing in time of the CS and UCS (referred to as temporal contiguity). Extinction occurs when the CS is presented repeatedly without the UCS, i.e. a tone is presented many times without the reinforcer of food.

The question that has exercised psychologists is whether, during extinction, material is forgotten, or whether a new response is produced that is antagonistic to the learned response. An analogy is a balance scale (Figure 3.2): if a weight is put on one side, equilibrium can be restored either by removing the weight (forgetting) or by adding a counterweight.

Figure 3.2

A generalisation gradient is also shown in Figure 3.1. This is obtained by training on one tone and then examining how large the response is to higher or lower tones that were never used in training. The figure shows that responding is maximal to the tone most similar to the training stimulus. Interesting deviations from this curve occur when both positive and negative training are used. For instance, a tone of 1000 Hz could be reinforced and a tone of 500 Hz not reinforced. Now the maximum response is not at 1000 Hz but at, say, 1020 Hz. This phenomenon is called peak shift and will be examined later.
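Peak shift can be illustrated with the classic gradient-interaction idea: subtract an inhibitory generalisation gradient centred on the non-reinforced tone from an excitatory gradient centred on the reinforced tone. The gradient shapes, widths and weights below are invented purely for illustration:

```python
import math

def gradient(freq, centre, width=300.0):
    """A smooth generalisation gradient around a training frequency."""
    return math.exp(-((freq - centre) ** 2) / (2 * width ** 2))

def net_response(freq):
    # Excitatory gradient around S+ (1000 Hz, reinforced) minus a weaker
    # inhibitory gradient around S- (500 Hz, never reinforced).
    return gradient(freq, 1000) - 0.5 * gradient(freq, 500)

# The strongest response is no longer at the trained 1000 Hz: it is
# pushed away from the S-, i.e. to a somewhat higher frequency.
peak = max(range(400, 1601, 10), key=net_response)
print(peak)  # a value above 1000
```

Running this, the maximum of the net gradient lies above 1000 Hz, shifted away from the non-reinforced 500 Hz tone, which is the qualitative pattern the text describes.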

The need for such a simple learning mechanism is apparent. It allows the organism to learn about significant events.

After all these years of research one might have thought that we would know everything we need to know about such a simple phenomenon, but no: in the 1960s there was a re-evaluation, driven partly by new thinking and the development of new techniques like conditioned suppression, partly by developments in our understanding of the physiology of the brain, and partly by the use of new computational tools.

A classic experiment on conditioned suppression is described, which hopefully you will be able to follow.

Conditioned suppression is a more complicated technique which is often used because it is a very sensitive tool. The method uses both associative and instrumental (Skinnerian) conditioning. The animals are first taught to press a lever for food (instrumental conditioning), and with intermittent reinforcement a steady rate of lever pressing is achieved. In a separate situation, using associative conditioning, a CS (say, a tone) is paired with a UCS (a shock). The CS is then introduced during the instrumental activity and its effect on the rate of lever pressing is observed. Suppression is measured by the ratio a/(a+b), where a is the rate of pressing during the CS and b the rate just prior to the CS. If there is no suppression then a = b and the ratio will be close to 0.50; if pressing is suppressed then a < b and the value gets smaller; if there is enhancement, it gets bigger.


suppression 0.0 <--- 0.5 --->1.0 enhancement
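The suppression ratio is easy to compute directly; a minimal sketch (the response rates below are invented for illustration):

```python
def suppression_ratio(rate_during_cs, rate_before_cs):
    """Suppression ratio a / (a + b): 0.5 means no effect, values
    below 0.5 mean suppression, values above 0.5 mean enhancement."""
    a, b = rate_during_cs, rate_before_cs
    return a / (a + b)

# Illustrative lever-press rates (presses per minute; invented values)
print(suppression_ratio(20, 20))  # equal rates -> 0.5, no suppression
print(suppression_ratio(5, 20))   # pressing slows during CS -> 0.2
print(suppression_ratio(30, 20))  # pressing speeds up -> 0.6, enhancement
```

Note the bounded 0-1 scale is what makes the measure convenient: complete suppression gives 0 regardless of the animal's baseline rate.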


You may wonder why there should be an effect on the lever pressing rate at all. It is argued that what has been conditioned is a Conditioned Emotional Response (CER): fear, if the reinforcer is negative (like shock), and hope, if it is positive (like food). The importance of this type of learning is that it is related to feelings and emotions, and not simply to cognitions.

An example of the role of contingency:

From British Journal of Psychology, August 1998 v89 n3 p453(10)

Can animals detect when their owners are returning home? An experimental test of the 'psychic pet' phenomenon. Richard Wiseman; Matthew Smith; Julie Milton.

Can dogs psychically detect when their owners are returning home? An Austrian television company suggested they could. Jaytee appeared remarkably successful: he seemed to sense when his owner started her journey home, and would go and sit in the porch until her return. The company ran a little experiment, with one camera crew following the owner as she walked around her local town centre while a second crew remained in her parents' house and continuously filmed Jaytee. After a few hours the crew accompanying the owner decided to return home, and it seemed as if Jaytee's behaviour was contingent on the owner's return.

However, a more carefully designed procedure was then carried out to eliminate possible confounds. For example, the dog's behaviour could simply be routine, the dog could have picked up cues from the owner or from other people in the house, or the people in the house might selectively remember what the dog had done.

In the first experiment Jaytee made 13 trips to the porch during the experimental session. The owner left the remote location at 21.00 and so, to be successful, Jaytee had to respond between 21.00 and 21.09. In fact, the first occasion on which Jaytee inexplicably visited the porch occurred at 19.57. As a result, the experiment was considered unsuccessful.
In the second experiment Jaytee made 12 trips to the porch during the experimental session. The owner left the remote location at 14.18 and so to be successful, Jaytee needed to respond between 14.18 and 14.27. In fact, the first occasion on which Jaytee inexplicably visited the porch for more than 2 minutes occurred at 13.59. As a result, Expt 2 was also considered unsuccessful

In fact over four experiments the analysis of the data did not support the hypothesis that Jaytee could psychically detect when his owner was returning home.

It seems that it is very easy for people to form the view that there are contingencies, however, what they tend to ignore are the number of times an event occurs (Jaytee going to the porch) which has nothing to do with the owner coming home. In fact Jaytee was as likely to go to the porch whatever the owner was doing, so, going to the porch was not a predictor, except in the minds of those staying at home.

Rescorla's (1968) experiment demonstrating conditioned suppression.[3]

What Rescorla aimed to do was to test whether learning depended on temporal contiguity, as Pavlov had suggested. He felt that the mere coincidence in time of the two factors, CS and UCS, was not sufficient for learning; what mattered was whether the CS predicted the UCS. This is termed a contingent relationship: one event depends on, or predicts, the other. He devised an experiment in which he kept the contiguity constant across conditions but varied the contingency. If contiguity were what mattered there would be no difference between the conditions, but if contingency were important then the amount of learning would vary.

There are several parts to the experiment.

  • Phase 1. Learn to press for food. - (instrumental/operant conditioning)
  • Phase 2. Classical conditioning CS, light and UCS, shock
    He used four conditions.
    However, the number of times CS paired with US was kept constant. i.e. the contiguity was the same in all groups. The likelihood of a US appearing in the time slot was 0.4
    The conditioning situation (pairing CS and US) was interspersed with occasions where the US was presented alone without the CS. This procedure could be used to modify the contingency by adopting different frequencies of US occurrence on its own in the four groups, 0.4, 0.2, 0.1, 0.0
  • Phase 3. Test to see extent of suppression
    They presented the light during pressing a lever for food, and measured the effect on rate of pressing lever. Measured the suppression ratio.

This experiment showed that temporal contiguity alone does not determine learning, since there were an equal number of light/shock pairings in all conditions and yet the degree of learning differed by condition.

Figure 3.3

However, the contingency had a strong effect: although there is no contingent relationship between the light and the shock in condition 0.4, there is in condition 0.0, and the amount of learning directly matches this relationship.

This experiment also highlights the problems that different control conditions might have (see Pearce, p. 31). Is the best control condition one where the US never appears without the CS (0.0), or would the random condition (0.4) be better?

To summarise, Pavlov had proposed that classical conditioning came about through STIMULUS SUBSTITUTION due to the close TEMPORAL CONTIGUITY of the UCS and CS. But as we have seen, Rescorla showed that the result was due to CONTINGENCY rather than contiguity: he independently manipulated contiguity and contingency in his experiments, and found that it was contingency that led to learning.
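Rescorla's manipulation can be summarised with the simple contingency measure delta-P = P(US|CS) - P(US|no CS); a minimal sketch using the probabilities from the four groups:

```python
def delta_p(p_us_given_cs, p_us_given_no_cs):
    """Contingency: how much more likely the US is when the CS is present.
    Zero means the CS predicts nothing; positive values mean the CS
    predicts the US."""
    return p_us_given_cs - p_us_given_no_cs

# P(US|CS) is fixed at 0.4 in every group (contiguity held constant);
# P(US alone) varies across the four groups.
for p_no_cs in (0.4, 0.2, 0.1, 0.0):
    print(f"P(US|no CS)={p_no_cs}: contingency={delta_p(0.4, p_no_cs):.1f}")
```

The 0.4 group has zero contingency (a truly random relation) while the 0.0 group has the strongest, matching the ordering of conditioned suppression Rescorla found.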

Is the rat calculating probabilities?? This is unlikely.

Then how can we explain?

An important factor is the salience of stimuli!

Mackintosh's (1973) overshadowing experiment examined the role of stimulus salience.

As in the previous experiment, there were several phases to the experiment, first training on instrumental conditioning, then a classical conditioning phase and finally a test phase.

The experimental manipulation was the presence of a noise, which could be loud or soft.

GROUP 1 Light + shock L

GROUP 2 Light + loud Noise + shock LN

GROUP 3 Light + quiet noise + shock Ln

The amount of suppression of lever pressing when a light is presented is shown in figure 3.4

Figure 3.4

For group 1 (L) the light successfully suppresses pressing, as it does in group 3 (Ln), where the noise is quiet; however, in group 2 (LN) the loud noise has led to poor learning about the light. This experiment shows how overshadowing can occur.

The number of light-shock pairings is the same in each group; it is the context, the noise, that influenced learning.

One stimulus can capture all the learning: it overshadows the other, so the animal learns about just one thing.

How is this relevant to Rescorla's experiment? There was only one stimulus, but that is from the experimenter's point of view; perhaps something in the background was more salient, so that in some conditions the animal learns to associate the stimulus with shock, while in others it associates the background.

The light can overshadow background in the contingent situation, so there is learning to the light.

However background is the reliable factor without contingency so maybe there is learning to this!

Saavedra examined this situation by replacing the background with an explicit signal, a tone, to determine how learning is affected, and included an overshadowing control condition.

Here the more learning is represented by higher bars (in contrast with the graphs on suppression ratios)

Signal A, the light, is clearly affected: more responses are associated with it in the correlated condition, and conversely fewer are associated with signal B, the tone.

The number of light pairings is the same, but the context can change how much is learnt.

The tone acts like the background: in the uncorrelated condition it is more salient and can overshadow the light; in the correlated condition it becomes less salient.

Figure 3.5

Dweck & Wagner also looked at whether animals can learn a context-shock association.

Animals first had to learn to lick a spout for sucrose, and then received 2-minute tone presentations that were either correlated or uncorrelated with shock. Subsequent presentations of the tone during licking suppressed licking in the correlated group.

In addition, it took longer for the uncorrelated group to start licking again, demonstrating that learning had occurred to the context, and that this affects behaviour.

To quote Dickinson, "it appears that in the correlated condition the animals primarily learn about the relationship between the contextual cues and shock, whereas exactly the opposite is true of animals exposed to the uncorrelated schedule." (p. 36)

What sort of theory could explain this result?

Three contemporary theories which relate to this situation have been developed, Rescorla & Wagner, Mackintosh, and Pearce & Hall.

Two contrasting views have developed.

  1. Learning depends on the extent to which an event is surprising.
  2. Learning depends on the extent to which the event is a predictor.
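The first view, learning driven by surprise, is the core of the Rescorla-Wagner model mentioned above. A minimal sketch (the salience and learning-rate values are invented for illustration) shows how prediction-error learning also produces overshadowing:

```python
def rescorla_wagner(trials, saliences, lam=1.0, beta=0.5):
    """Condition a compound of CSs together: on each trial every CS's
    associative strength V moves by alpha * beta * (lam - total prediction),
    so learning stops once the US is fully predicted (no longer surprising)."""
    V = {cs: 0.0 for cs in saliences}
    for _ in range(trials):
        error = lam - sum(V.values())      # prediction error = surprise
        for cs, alpha in saliences.items():
            V[cs] += alpha * beta * error  # salient cues gain more per trial
    return V

# Overshadowing: a loud noise (high salience) and a dim light (low salience)
# conditioned in compound; the noise captures most of the strength.
V = rescorla_wagner(trials=50, saliences={"light": 0.1, "noise": 0.5})
print(round(V["light"], 2), round(V["noise"], 2))  # roughly 0.17 and 0.83
```

Because both cues share a single prediction error, their strengths compete for the fixed total lam, so the more salient cue ends up with the larger share, exactly the overshadowing result described earlier.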

Inhibitory Conditioning

Inhibitory conditioning is also now recognised as important. In this situation the animal learns that a significant event will not follow a particular cue, e.g. that no painful stimulus occurs in some situation, or that a previous reward will not appear. Pearce (p. 32) describes Hearst & Franklin's experiment in which pigeons learnt that food would not be delivered when a light was illuminated. The extent to which the pigeons moved away from the light was linked to the frequency of food delivery: the more frequently the food (US) had been presented, the greater the avoidance behaviour.

Figure 3.6

Conditioned inhibition is difficult to measure, and sophisticated techniques using retardation or summation methods are often used. (Pearce, p. 33)

Two simple models of learning.

Two simple models have been proposed to explain learning during associative conditioning. The first, which Pavlov suggested, sometimes called S-R learning, proposes that through the temporal contiguity of the CS and US the CS comes to substitute for the US.

The second model, which is sometimes called S-S learning, suggests that learning takes place when the CS acts as a cue for the US; the contingency between CS and US becomes represented and this memory allows prediction of events.

Pearce offers several pieces of evidence bearing on both models. However, it may not be fruitful to see the models as opposed to one another; it may be that circumstances determine which process is involved. Rescorla's experiment described earlier was one test supporting the idea that animals make a prediction, and Holland & Straub (1979; Pearce, p. 39) made an additional test. They reasoned that if anticipation were correct then, once an association had been created, any change in the value of the US would have a strong effect, whereas no effect would occur under substitution. They used flavoured food with a noise as the CS in the training phase. The food was then contaminated (with no CS present) to make the animals sick. They then presented the noise and checked whether this affected the CR. It did, suggesting that the CS must have contacted the memory of the US in order to produce the effect.

In a further experiment, Holland (1990) presented sucrose with either a wintergreen flavour plus a tone, or a peppermint flavour plus a noise. The wintergreen-flavoured sucrose (without any CS) was then followed by an emetic (LiCl, which made the animal sick). Now, when the animal was presented with sucrose in the presence of the noise it behaved normally, but with the tone it showed aversive reactions. The tone must have retrieved the specific memory of the wintergreen flavour, which was now linked to sickness.

S-R (substitution) theory is often supported by showing that there are differences between the CR and the UR. Pavlov, for instance, noted that the CR in his experiments, salivation, is not like the UR, eating: salivation is a preparatory response to the real response of eating. There may also be additional specific effects; for instance, one form of consummatory response may be produced for food and a different one for water. This is because the final UR differs for food (chewing) and drink (licking), and although not identical, the CR will often mimic the UR.

An important distinction noted by Konorski (1967) is that USs possess two different characteristics, specific and affective. A US usually has sensory qualities, e.g. taste or colour, and also an emotional impact. The extent to which conditioning is linked to one or other, or both, of these aspects may have very important consequences, for instance in the learning and maintenance of phobias.

Do the models only explain excitatory conditioning, though? How would conditioned inhibition be represented? If a CS is continually presented in the absence of the US it can produce an effect on performance, but is the effect due to inhibiting the salience of the US or its output, or is it achieved through a second representation which includes a NO-US state? (Pearce, p. 46)

Does the US need to be present to learn?

This question actually goes back to the early days of learning theory. As far as Pavlov was concerned, learning could only take place if there was a US to pair with the CS, but Tolman argued that learning could take place even without the US. Sensory preconditioning is such an example. Rizley & Rescorla (1972) presented light-tone combinations for a number of trials; the tone was then paired with a shock. Presenting the light now led to a change in behaviour. So although the light was never paired with shock, the previous pairings had left a memory, and once the tone was paired with shock this led to a CR of fear which generalised from the tone to the light. This research supports anticipation, S-S learning.

Second-order conditioning also demonstrates the complexity of associative conditioning. Rashotte et al. (1977) paired a white light with food, then paired a blue light with the white light (without the food). For one group the white light was subsequently presented without food to produce experimental extinction. When tested with the blue light, this group also showed extinction of the response to the blue light.

However, in a very similar experiment by Rizley & Rescorla (1972), using tones, lights and shocks, extinguishing the first-order CS produced no extinction of the response to the second-order CS.

Several differences exist between the two experiments. One is the type of US, food vs. shock: different associations may be favoured as a result, and the CR of fear produced with shock may be more influential, and more different from its UR (pain), than the sort of CR produced by food and its anticipation. Furthermore, the similarity between the first- and second-order stimuli may be crucial, as suggested in Pearce (1997, p. 44).
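The contrast between the two results is often explained by what the second-order CS is linked to. A simplified, assumed sketch: if second-order conditioning stores an S-S chain (blue -> white -> food), extinguishing the white light should abolish the response to blue; if it stores a direct S-R link (blue -> response), extinction of white should leave blue untouched.

```python
# Simplified, illustrative contrast between S-S and S-R accounts of
# second-order conditioning (values and structure are assumptions).

v_white = 1.0     # first-order CS strength: white light -> food
s_r_link = 1.0    # alternatively, blue's own direct response strength (S-R)

def cr_to_blue(account):
    """CR to the second-order CS under each account."""
    if account == "S-S":
        return v_white   # mediated by the white light's *current* value
    return s_r_link      # independent of the white light

# Extinction: white light presented repeatedly without food
v_white = 0.0

print(cr_to_blue("S-S"))  # 0.0 - extinction transfers (Rashotte et al. pattern)
print(cr_to_blue("S-R"))  # 1.0 - extinction does not (Rizley & Rescorla pattern)
```

On this reading, the food and shock experiments may simply have favoured different kinds of association, which is the substance of the comparison above.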

There are still many important questions to be tackled before a full understanding of even this simple learning situation can be provided. It is, however, a relevant area for research, as the example below shows.


Anticipatory Nausea - An Abstract

N. Tomoyasu, D.H. Bovbjerg & P.B. Jacobsen (1996). Conditioned reactions to cancer chemotherapy: percent reinforcement predicts anticipatory nausea. Physiology & Behavior, 59(2), 273-276.

Current theorizing on classical conditioning has emphasized the role of contingent relations between the conditioned and unconditioned stimuli in the development of conditioned responses. The present study is the first to examine the relevance of this concept to our understanding of the phenomenon of anticipatory nausea in cancer chemotherapy patients. Anticipatory nausea in patients receiving emetogenic chemotherapy has been cited as an example of the importance of classical conditioning in clinical medicine. Outpatient chemotherapy can be viewed as a series of conditioning trials in which the previously neutral stimuli of the clinic (conditioned stimuli) are associated with chemotherapy infusions and postinfusion nausea. Reexposure to these clinic stimuli alone is sufficient to elicit nausea (conditioned response) in some patients prior to subsequent infusions. In the present study we examined whether differences among patients in percent reinforcement (the percentage of infusions followed by nausea) would predict anticipatory nausea, which was assessed at the sixth infusion. Results were consistent with the hypothesis. Percent reinforcement was positively correlated with the incidence of anticipatory nausea. Comparison of patients with and without anticipatory nausea (t-test and hierarchical logistic regression analysis) confirmed that percent reinforcement was a significant predictor of anticipatory nausea, independent of other factors previously reported to be involved.
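The abstract's central prediction, that the proportion of reinforced trials governs the strength of the conditioned response, falls out of standard conditioning models. The sketch below uses the Rescorla-Wagner rule with assumed parameters (learning rate, number of infusions) purely to illustrate the direction of the effect; it is not a model of the study's data.

```python
# Minimal sketch (assumed parameters): under the Rescorla-Wagner rule,
# the clinic cues' associative strength tracks the proportion of
# infusions followed by nausea (percent reinforcement).

import random

def clinic_cue_strength(p_nausea, n_infusions=6, alpha=0.4, seed=0):
    """Associative strength of clinic cues after n conditioning trials."""
    rng = random.Random(seed)
    v = 0.0
    for _ in range(n_infusions):
        lam = 1.0 if rng.random() < p_nausea else 0.0  # nausea = reinforcement
        v += alpha * (lam - v)                          # dV = alpha*(lambda - V)
    return v

# Higher percent reinforcement -> stronger conditioned (anticipatory) response
for p in (0.2, 0.5, 0.9):
    print(p, round(clinic_cue_strength(p), 2))
```

Averaged over patients, cue strength rises with percent reinforcement, which matches the positive correlation the study reports.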


References

Pearce, J.M. (1997). Animal Learning and Cognition (ch. 2). Hove: Psychology Press.

Wynne, C. (2001). Animal Cognition (chs. 1 and 3). Basingstoke: Palgrave Macmillan.

URLs:

http://academic.brooklyn.cuny.edu/psych/delam/53.1/ basics of conditioning

http://www.nerdbook.com/sophia/chickens.html chicken training

http://www.nerdbook.com/sophia/Movies/movie2.html?mov=37hi.mov&num=35
classical conditioning




[1]  see http://en.wikipedia.org/wiki/Instrumental_conditioning

[2]  see http://en.wikipedia.org/wiki/Law_of_effect

[3]  see Wynne (2001) pp 41-44

