Chapter 8: Learning
BRIEF CHAPTER OUTLINE
Basic Processes of Learning
Habituation and the Orienting Response
Association
Conditioning Models of Learning
Classical Conditioning
Pavlov’s Dogs
How Classical Conditioning Works
The Conditioning of Little Albert
Operant Conditioning
Reinforcement and Punishment
How Operant Conditioning Works
Schedules of Reinforcement
Psychology in the Real World: Treating Autism with Applied Behavior Analysis
Challenges to Conditioning Models of Learning
Biological Constraints on Conditioning
Latent Learning
Breaking New Ground: Conditioned Taste Aversion
Refining the Learning Model
Limitations of Conditioned Taste Aversion
Social Learning Theory
The Interaction of Nature and Nurture in Learning
Imprinting
Imitation, Mirror Neurons, and Learning
Synaptic Change During Learning
Experience, Enrichment, and Brain Growth
Making Connections in Learning: Why Do People Smoke?
Chapter Review
EXTENDED CHAPTER OUTLINE
BASIC PROCESSES OF LEARNING
o Suggestion: Link learning and its definition to memory (see TOC for chapter on memory).
Association
Pavlov’s Dogs
How Classical Conditioning Works
· Forward conditioning: the neutral stimulus is presented just before the UCS, or the neutral stimulus and the UCS are presented simultaneously.
· Backward conditioning: a somewhat less successful form of conditioning in which the neutral stimulus follows the UCS.
· Pavlov’s criteria for successful conditioning:
1. Multiple pairings of the UCS and neutral stimulus (CS) are necessary for an association to form, so that the CS alone will produce the conditioned response.
2. Temporal contiguity: the UCS and CS must be paired or presented very close together in time in order for an association to form.
· Other issues in the acquisition process are stimulus generalization and stimulus discrimination:
o Stimulus generalization: extending the association between UCS and CS to include a broad array of similar stimuli.
o Stimulus discrimination: when a CR (such as salivation) occurs only to the exact CS to which it was conditioned.
o Extinction: the weakening of a CR when the CS and the UCS are no longer paired. For example, if Pavlov rang the bell but stopped providing food, would the dogs salivate forever? No.
· Spontaneous recovery: the sudden reappearance of an extinguished response.
The Conditioning of Little Albert
· Perhaps one of the best illustrations of stimulus generalization comes from Watson and Rayner (1920), in the conditioning of Little Albert.
o A 9-month-old baby known as Little Albert was conditioned to fear a white rat. Initially, Watson and Rayner brought out a white rat and showed it to Albert. He was curious, but not afraid of it. They then paired the rat with a very loud noise (the sound of a hammer striking a steel bar right behind Albert’s head). Naturally, the loud sound (a UCS) startled Albert (the UCR), and he became very upset. Eventually, the rat alone (CS) elicited the fear response (CR). Remarkably, Albert generalized the fear response to a slew of similar stimuli, including a rabbit, a dog, a white fur coat, and even a Santa Claus mask! This generalization is very impressive, if disturbing, as he generalized from animate to inanimate stimuli.
o CONNECTION: Could Watson do research on Little Albert in today’s world? Review the discussion of ethics in Chapter 2.
· Thorndike: Spontaneously emitted behavior can become favored and reinforced when it is followed by certain consequences. He tested this using a device called a puzzle box, a specially designed cage from which a cat wants to escape. Simply through its random behaviors, the cat would eventually be rewarded by the door opening. This reward increased the probability that the now-specific behavior would happen again, leading to further rewards; moreover, the behavior occurred more and more quickly over time. Thorndike labeled this the law of effect.
o There are two dimensions of reinforcement: Primary vs. Secondary and Positive vs. Negative
· Punishment: any stimulus that decreases the likelihood that a behavior will occur.
o Like reinforcement, punishment can be positive or negative (but remind students that this refers to the addition or subtraction of a stimulus; that is, all punishment is “bad” and all reinforcement is “good” regardless of whether the word “positive” or “negative” precedes it).
§ Positive punishment: the addition of a stimulus that may decrease behavior (e.g., spanking in an effort to stop an undesirable behavior, electric shocks, putting bad-tasting chemicals on a child’s thumb to stop undesirable thumb sucking, getting a fine for speeding). In each of these examples, an aversive stimulus is added in an attempt to discourage the behavior.
§ Negative punishment: removal of a stimulus in order to decrease behavior; in other words, something that is desirable is taken away (e.g., grounding a child by taking away their freedom, taking an adolescent’s cell phone away for breaking curfew, losing your license for a DUI).
· CONNECTION: What is addiction? See the discussion of drugs in Chapter 6.
· Skinner box: a simple cage used for operant conditioning in which a small animal (e.g., a rat) can move around, with a food dispenser and a response lever that triggers food delivery. Using this device, Skinner demonstrated how a rat could be coaxed into performing a desired behavior (such as lever pressing) through shaping: the reinforcement of successive approximations of the desired behavior, rewarding the rat as its actions came closer and closer to pressing the lever.
Psychology in the Real World: Treating Autism with Applied Behavior Analysis
· Autism: developmental disorder usually appearing in the first few years of life. It is characterized by drastic deficits in communication and language, social interaction with others, emotional expression and experience, and imaginative play (Kanner, 1943). Current estimates suggest that autism affects anywhere from 41 to 45 out of every 10,000 children between the ages of 5 and 8 years old and that the rate is much higher in boys than in girls (Fombonne, 2003).
· Although autism was once thought to be untreatable, Ivar Lovaas developed a promising treatment called applied behavior analysis (ABA), which is based on operant conditioning principles. That is, it uses reinforcement to increase the frequency of desirable behaviors in autistic children and, in some cases, punishment (a loud “NO!” or a time-out) to decrease the likelihood of undesirable behaviors. The intensive program involves ignoring or using time-out for behaviors that are harmful or undesirable, such as hand flapping, twirling, licking objects, and aggression, and reinforcing behaviors such as contact with others, simple speech, appropriate toy play, and social interaction. Typically, the program involves at least two years of treatment for 35-40 hours per week.
Schedules of Reinforcement
· Reinforcement may be presented every time a behavior occurs, or only occasionally.
o Continuous reinforcement: rewarding a behavior every time it occurs. For example, giving a dog a cookie every time he sits on command.
o Fixed ratio (FR) schedule: reinforcement follows a set number of responses. For example, every third time Fluffy the Shih Tzu sits on command, Fluffy gets a cookie. Interestingly, a continuous schedule is simply a fixed ratio schedule in which the number of responses is set at 1.
o Variable ratio (VR) schedule: the number of responses needed for reinforcement varies. Examples include playing slot machines, which pay out on a variable but preprogrammed schedule, and checking your email to see whether you’ve got mail.
o Fixed interval (FI) schedule: responses are always reinforced after a set period of time has passed. For example, getting paid every two weeks.
o Variable interval (VI) schedule: responses are reinforced after time periods of varying duration have passed. For example, your instructor may use CPS questions to track attendance or reward you with points, but which lecture, and at what point in the lecture, the questions are asked varies.
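o Suggested Activity (for courses with a computational bent): the four intermittent schedules above can be sketched as simple reward rules. The following Python sketch is illustrative only; the function names and parameters are hypothetical, not from the text.

```python
import random

# Hypothetical sketch: each function decides whether the current response
# earns a reward under one of the four intermittent schedules.
# (Names and parameters are illustrative, not from the chapter.)

def fixed_ratio(n_responses, ratio=3):
    """FR: every `ratio`-th response is rewarded (counting responses from 1)."""
    return n_responses % ratio == 0

def variable_ratio(mean_ratio=3):
    """VR: each response has a 1-in-`mean_ratio` chance of reward, so the
    number of responses between rewards varies unpredictably."""
    return random.random() < 1.0 / mean_ratio

def fixed_interval(now, last_reward, interval=10.0):
    """FI: the first response made after `interval` seconds have elapsed
    since the last reward is rewarded."""
    return now - last_reward >= interval

def variable_interval(now, next_reward_time):
    """VI: the first response made after a randomly drawn wait
    (precomputed as `next_reward_time`) is rewarded."""
    return now >= next_reward_time
```

For instance, under the FR-3 rule the 3rd and 6th responses earn the cookie, while under the VR rule the payoff arrives unpredictably, which is one way to illustrate why VR schedules (like slot machines) sustain such persistent responding.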
Biological Constraints on Conditioning
· This section asks whether learning is universal to all species, the idea being that something primal underlies the basic processes of learning across species. Breland and Breland (1961), two of Skinner’s students, successfully conditioned 38 different species and more than 6,000 animals. They coined the term instinctive drift, which they defined as learned behavior that shifts toward instinctive, unlearned behavioral tendencies.
· CONNECTION: What is innate about language learning? We explore how our brains are wired for language in Chapter 9.
· Biological constraint model: some behaviors are inherently more likely to be learned than others. In other words, biology constrains, or limits, options to make the adaptive ones more likely to occur. The idea here is that constraints on learning have positive evolutionary implications; that is, it is useful for survival. For example, if you were attacked by a dog (like the Fluffy example, above), and did not learn a fear response, you might wind up dead.
· Nature-Nurture Pointer: Why are animals primed from birth to readily learn some things and not others? Why can geese fly but not talk?
· The fact that not every species can be conditioned to learn anything illustrates a major theme of this book. Namely, biology and experience interact to determine who we are.
Latent Learning
· Latent learning (Tolman & Honzik, 1930): learning that occurs in the absence of reinforcement and is not demonstrated until reinforcement is provided at a later time.
· Stop and Think: How do organisms learn in classical conditioning? How do they learn in operant conditioning? Which type of reinforcement or punishment adds a stimulus? Which type takes away a stimulus? What are the four types of schedules of reinforcement? What biological constraints occur in conditioning?
Breaking New Ground: Conditioned Taste Aversion
See the “Breaking New Ground” section for a detailed explanation.
SOCIAL LEARNING THEORY
· Enactive learning: learning by doing.
· Observational learning: learning by watching others.
· Bandura, the father of social learning theory (1986), argued that much of what we learn is learned vicariously. The primary method for such vicarious learning is modeling: the process of observing and imitating behaviors performed by others.
· Social learning: People learn best those things they are rewarded for doing, whether the rewards are external (such as praise, money, or candy) or internal (such as joy and satisfaction). Bandura realized that reinforcement matters not only for the person carrying out the behavior, but also for those who watch.
· A series of classic studies in the 1960s involved a Bobo doll. This research demonstrated that children who viewed aggression were more aggressive with the doll than those who did not. The consequences for the model also mattered: children who saw the aggressive adult rewarded for his aggression were more violent with the toys and the Bobo doll than those who saw the aggressive adult punished, and children who did not see an aggressive model showed little aggression with the toys. These studies show how modeling and reinforcement can work together to influence behavior: kids are more likely to copy behavior that they see others rewarded for doing.
· CONNECTION: Do you think watching violence in movies and TV leads to aggressive behavior? Overwhelmingly, the answer seems to be “yes.” (See Chapter 15.)
· Stop and Think: What are the two basic components of social learning theory? How did children act after they saw adults who behaved aggressively and were rewarded for that aggression?
· Four learning processes that illustrate the dynamic interplay between nature and nurture: imprinting, imitation, synaptic change, and brain growth with enrichment.
Imitation, Mirror Neurons, and Learning
· Imitation by infants may be a result of mirror neuron systems in the brain (neuron systems which respond in much the same way while watching an action as they do while making an action).
· Nature-Nurture Pointer: Why do we cringe when we watch a movie character enter a dark, scary building alone?
· CONNECTION: Do mirror neurons explain how newborns are able to imitate grown-ups who stick out their tongue? (See Chapter 5.) Are mirror neurons behind much imitation seen in social interaction? (See Chapter 15.)
· CONNECTION: Remind students of Hebb’s work on learning, memory, and brain plasticity discussed in Chapter 7.
· Synaptic connections between neurons strengthen and even grow during long-term associative learning, indicating that the brain literally grows and changes as we learn. The development and frequent use of new synaptic connections in response to stimulation from the environment strengthens the associated memories and makes learning easier. So it does seem as though “practice makes perfect” and you should either “use it” or you will “lose it.”
· Nature-Nurture Pointer: If learning changes the brain, why do we forget what we’ve learned?
Experience, Enrichment, and Brain Growth
· CONNECTION: Remind students of Chapter 2’s discussion of classic work demonstrating that when rats are reared in enriched environments they grow more neural connections and learn to run mazes faster than genetically identical rats raised in impoverished environments (Bennett et al., 1964; Rosenzweig et al., 1962).
· Later experiments showed that animals did not have to be raised from birth in an enriched environment to benefit. However, the best way to stimulate new neural growth is to be in an enriched environment that continues to provide novel forms of stimulation (Kempermann & Gage, 1999).
· Stop and Think: What type of learning happens within a very short period after birth? What type of learning do mirror neurons support? What happens in the brain during long-term associative learning?
MAKING CONNECTIONS: Why Do People Smoke?
See the “Making the Connections” section for a detailed explanation.
KEY TERMS
association: process by which two pieces of information from the environment are repeatedly linked so that we begin to connect them in our minds.
behavior modification: the application of operant conditioning principles to change behavior.
biological constraint model: view on learning proposing that some behaviors are inherently more likely to be learned than others.
classical conditioning: form of associative learning in which a neutral stimulus becomes associated with a stimulus to which one has an automatic, inborn response.
conditioned stimulus (CS): a previously neutral input that an organism learns to associate with the UCS.
conditioned response (CR): a behavior that an organism learns to perform when presented with the CS.
conditioned taste aversion: the learned avoidance of a particular taste or food.
conditioning: a form of associative learning in which behaviors are triggered by associations with events in the environment.
enactive learning: learning by doing.
ethology: the scientific study of animal behavior.
extinction: the weakening and disappearance of a conditioned response, which occurs when the UCS is no longer paired with the CS.
fixed ratio (FR) schedule: pattern of intermittent reinforcement in which reinforcement follows a set number of responses.
fixed interval (FI) schedule: a pattern of intermittent reinforcement in which responses are always reinforced after a set period of time has passed.
imprinting: the rapid and innate learning of the characteristics of a caregiver very soon after birth.
instinctive drift: learned behavior that shifts towards instinctive, unlearned behavior tendencies.
latent learning: learning that occurs in the absence of reinforcement and is not demonstrated until later, when reinforcement occurs.
law of effect: principle that the consequences of a behavior increase (or decrease) the likelihood that the behavior will be repeated.
learning: enduring changes in behavior that occur with experience.
modeling: the imitation of behaviors performed by others.
negative reinforcement: removal of a stimulus after a behavior to increase the frequency of that behavior. An example is buckling your seat belt to stop the buzzer in the car.
negative punishment: the removal of a stimulus to decrease behavior.
observational learning: learning by watching the behavior of others.
operant conditioning: the process of changing behavior by manipulating the consequences of that behavior.
positive reinforcement: the presentation or addition of a stimulus after a behavior occurs that increases how often that behavior will occur.
positive punishment: the addition of a stimulus that may decrease behavior.
primary reinforcers: innate, unlearned reinforcers that satisfy biological needs (such as food, water, or sex).
punishment: stimulus, presented after a behavior, that decreases the frequency of the behavior.
reinforcer: environmental stimulus that increases the frequency of a behavior.
schedules of reinforcement: patterns of reinforcement distinguished by whether reinforcement occurs after a set number of responses or after a certain amount of time has passed since the last reinforcement.
secondary (or conditioned) reinforcers: reinforcers that are learned by association, usually via classical conditioning.
shaping: the reinforcement of successive approximations of a desired behavior.
Skinner box: simple chamber used for operant conditioning of small animals; includes a food dispenser and a response lever to trigger food delivery.
social learning theory: a description of the kind of learning that occurs when we model or imitate the behavior of another.
spontaneous recovery: the sudden reappearance of an extinguished response.
stimulus generalization: extension of the association between UCS and CS to include a broad array of similar stimuli.
stimulus discrimination: restriction of a CR (such as salivation) to the exact CS to which it was conditioned.
unconditioned response (UCR): the automatic, inborn response to a stimulus.
unconditioned stimulus (UCS): the environmental input that always produces the same unlearned response.
variable ratio (VR) schedule: a pattern of intermittent reinforcement in which the number of responses needed for reinforcement changes.
variable interval (VI) schedule: pattern of intermittent reinforcement in which responses are reinforced after time periods of different duration have passed.
MAKING THE CONNECTIONS
Habituation and the Orienting Response
CONNECTION: Right now you are habituated to dozens of stimuli – including the feel of clothing on your skin. Now you are sensitized to it. How so? See Chapter 4.
o Discussion: Is habituation learning? Ask students to think about their job. How much of what they do is automatic? That is, are they demonstrating habituation (a decreased response to what they do and are exposed to repeatedly) or learning?
Classical Conditioning
CONNECTION: Vomiting is another example of a reflex, which is why it is so hard to control when we are sick. To learn how reflexes work, see Chapter 3.
o Discussion: Refer to Chapter for a list of innate reflexes. Ask students what other reflexes they think can be used effectively in classical conditioning.
CONNECTION: Could Watson do research on Little Albert in today’s world? Review the discussion of ethics in Chapter 2.
Discussion: Watson, perhaps the father of the behavioral movement, is best known for the infamous quote: “Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select – doctor, lawyer, artist, merchant-chief, and yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors” (Watson, 1925, p. 82). Based on this quotation, what would it mean if you had an IQ of 100 and wanted to be a doctor? What if you were born smaller and weaker than most but wanted to be a professional football player? What would Watson say?
o Discussion: What became of poor Little Albert? The book mentions that he was never reconditioned, but why? Well, he was adopted by, presumably, a loving family. You may want to mention, however, that in Chapter you will visit issues of behavior modification and counter-conditioning, at which time you might wish to open up a discussion of how this type of therapy would be applied in this instance.
o Discussion: What would Albert be like in 2008? If he were still alive today, he would be a 90-year-old man who had gone through life with a crippling fear of all fuzzy white things and a hatred of Christmas.
o Discussion: Students are generally interested in this story, and you may also want to talk about little Peter, a follow-up study done by Mary Cover Jones (1924) under Watson’s supervision.
Operant Conditioning
CONNECTION: What is addiction? See the discussion of drugs in Chapter 6.
o Suggested Activity: Show Alcohol Addiction (In-Psych Discovery Channel Videos) and discuss alcohol addiction as it relates to reinforcement and punishment.
Biological Constraints on Conditioning
CONNECTION: What is innate about language learning? We explore how our brains are wired for language in Chapter 9.
o Discussion: This might be a good time to preview Chomsky and the nativist perspective in comparison to a learning perspective on language acquisition. Ask students what language skills children are rewarded for, versus pre-wired for. Do parents punish and correct every grammatical mistake toddlers make in speech? Unlikely.
o Discussion: What types of language do animals display? See the bee waggle dance (http://www.youtube.com/watch?v=-7ijI-g4jHg).
Social Learning Theory
CONNECTION: Do you think watching violence in movies and TV leads to aggressive behavior? Overwhelmingly, the answer seems to be yes. (See Chapter 15.)
o Activity: If you have Internet access in your classroom, go to http://video.google.com/videoplay?docid=-6612619266298637591&q=bandura+bobo&total=9&start=0&num=10&so=0&type=search&plindex=0 for a video clip of Bandura discussing his famous Bobo doll study.
o Video: Choose any segment from Jackass: The Movie and discuss its implications for young children who idolize these types of behaviors.
o Discussion: Ask students to consider how Bandura’s research would map onto the violent videogames on the market today.
The Interaction of Nature and Nurture in Learning
CONNECTION: John Bowlby argued that a variation of imprinting is seen in humans when we attach to our primary caregivers. See Chapter.
o Video: Show Fly Away Home (1996) and discuss imprinting. Then ask students how this model is limited in terms of human behavior and attachment.
CONNECTION: Sensitivity periods in language development allow toddlers and young children to learn not only the structure of their native language, but also a vocabulary of 10,000 words by the time they enter kindergarten. (See Chapter 9.)
o Activity: If you have Internet access in your classroom, go to http://www.youtube.com/watch?v=K3nt9P8XeIo&feature=related for a clip on a Russian child raised by dogs with limited language development. This is a good example of how species-typical genes need to interact with a species-typical environment for biologically primary skills, like language, to develop. Remind students that developmental norms indicate a critical or sensitive period for language development.
MAKING CONNECTIONS: Why Do People Smoke?
· Social learning probably offers the best explanation of how smoking behavior is acquired. Most smokers start smoking as teenagers, and most teens start smoking because they seek some of the rewards that appear to come with smoking: coolness, peer acceptance, looking like an adult. Kids see that others who smoke get some of these rewards. Thus they may model the smoking behavior in order to obtain those rewards themselves.
· Once someone has become an established smoker, operant conditioning helps maintain smoking behavior. Smoking is bolstered by a number of positive reinforcers: arousal of the sympathetic nervous system (the “rush” of smoking), mild relaxation of the muscles, and in some cases, increased peer acceptance. Smoking also has a number of negative reinforcers, such as the removal of stress, the removal of social isolation for some smokers, and a reduced appetite.
· The power of these reinforcers, combined with the physiologically addictive properties of nicotine, makes it very difficult to quit smoking. Moreover, the potential punishers of smoking (a substantially increased risk of lung cancer and heart disease) are threats so far off in the future for teens that they tend to ignore them.
· There are several other factors to consider:
NATURE-NURTURE POINTERS
Classical Conditioning
Nature-Nurture Pointer: If unconditioned responses are biologically built in, does that mean conditioned responses come purely from experience?
o Another way to approach this would be to ask students to provide examples of instances in which UCR and CR differ. For example, a child is looking at his mom’s pretty scented candle that has been burning for several hours. The child bats at the hot wax pooling by the wick and screams in pain when he is burnt. Several days later his mom has another candle burning. When the child sees the candle he again screams but this time in fear. Discuss the difference in motivation of the UCR and CR and what other possible conditioned responses are viable in this example (e.g., crying, running away, etc.).
Biological Constraints on Conditioning
Nature-Nurture Pointer: Why are animals primed from birth to readily learn some things and not others? Why can geese fly but not talk?
Discussion: Turkewitz (1993) is well known for his work on several species of bird and their “innate” skills. In humans, he looked at the development of the brain in utero and discovered that the right hemisphere develops early (before the auditory system is working). The left hemisphere develops later and rapidly surpasses the right in both size and complexity. Because the auditory system develops in concert with the left hemisphere, this is also when mom’s speech is most salient; thus, the left hemisphere becomes specialized for processing language and speech. The right hemisphere remains “unspecialized” and so is able to deal with visual information, spatial skills, and face/pattern recognition, giving new meaning to the term “innate.” Ask students for their definition of “innate.” How would this research alter that view?
The Interaction of Nature and Nurture in Learning
Nature-Nurture Pointer: Why do we cringe when we watch a movie character enter a dark, scary building alone?
o Discussion: This is a good opportunity to discuss the role of classical conditioning, vicarious learning, and evolutionary principles.
Nature-Nurture Pointer: If learning changes the brain, why do we forget what we’ve learned?
o Discussion: Synaptic connections between neurons strengthen and even grow during long-term associative learning, indicating that the brain literally grows and changes as we learn. The development and frequent use of new synaptic connections in response to stimulation from the environment strengthens the associated memories and makes learning easier. So it does seem as though “practice makes perfect,” and you should either “use it” or you will “lose it.”
Breaking New Ground: Conditioned Taste Aversion
· Conditioned taste aversion is the learned avoidance of a particular taste or food if nausea occurs at the same time as or shortly after exposure to the food.
The Traditional Learning Model: Pavlov assumed there was no “eureka” type of learning; that is, he assumed the CS and the UCS had to be paired repeatedly to create a conditioned response. With taste aversion, however, a single pairing can be enough.
o In contrast to the traditional conditioning approach, one could describe acquired taste aversion as an evolutionary adaptation. From this perspective, we readily learn to avoid any taste or food that might make us sick, and we learn it quickly.
o Discussion: Ask students to describe any taste aversion experiences they have had. Have them discuss it in terms of these two perspectives. Which does a better job of explaining conditioned taste aversion?
Refining the Learning Model: Garcia and his colleagues (1955) wanted to see if they could condition rats to develop an aversion to water sweetened with saccharine—something they normally like a lot—by pairing it with radiation (a UCS for nausea at certain doses). They began with the following questions:
1. Could taste aversion to a preferred substance (saccharine water) be achieved by pairing the taste with radiation (a UCS for nausea)?
2. How long would the taste aversion last without repeated exposure to radiation (the UCS)?
o Researchers varied the conditions of groups of rats. All of the groups had access to either plain water or saccharine water during the radiation period. One control group had access to plain water during irradiation. The other control group got saccharine water and no radiation. In the experimental condition, rats subjected to different levels of radiation were given saccharine water. All of the groups that received radiation were exposed to it for the same amount of time, 6 hours overall. In some cases, the interval of time between when the rats were irradiated (UCS) and when they tasted the drink (CS) lasted several minutes. The independent variable was the radiation, and the dependent variable was measured in terms of how much saccharine water the rats consumed after the pairing of saccharine water with radiation.
· Results: Regardless of radiation level, both groups of rats that had been drinking saccharine water during irradiation consumed significantly less saccharine water after conditioning.
· This study is important because it showed that long-lasting conditioned taste aversion could occur even when the UCS and CS were paired only during a single session. This is now known as the Garcia effect.
· Garcia and Koelling (1966) varied the type of aversive stimulus (UCS) to which the rats were exposed. Nausea (the UCR) was induced by exposure to X-rays, whereas pain (UCR) was induced by shocks through the floor. When the rat licked the drinking tube, it received the CS of either saccharine water or “bright-noisy water” (plain water accompanied by a light and a buzzer that went on when the rat touched the drinking tube). The UCS for half the rats was X-rays. The other half received a shock.
o Results: The rats that were made nauseous avoided the sweet water but not the bright-noisy water, whereas rats that were given a mildly painful shock avoided the bright-noisy water but not the sweet water.
o The key finding here is that, contrary to the predictions of traditional learning theory, an organism cannot be conditioned to respond to just any “neutral” stimulus paired with an unconditioned stimulus.
· Garcia’s findings in several studies undermined two major assumptions of classical conditioning: (1) that conditioning (learning) could happen only if an organism was exposed repeatedly within a brief time span to the UCS and CS together and (2) that organisms can learn to associate any two stimuli.
· The assumption is that taste aversion can be learned quickly because it is adaptive. Natural selection has produced a learning mechanism that helps organisms survive dangers that would kill them if they did not learn to avoid them after one trial.
Limitations of Conditioned Taste Aversion
o Researchers have found that a single pairing of saccharine water with morphine (a pain-relieving drug that is highly addictive) reduced saccharine water consumption in rats.
o Another example is the drug disulfiram, which can be used to condition people with alcoholism to have an aversion to alcohol: anyone who drinks alcohol while taking disulfiram becomes very sick. The problem is that alcohol does not remain a CS for nausea once disulfiram is discontinued. Alcohol is difficult to establish as a CS for nausea because the intoxication it produces is itself a positive reinforcer, especially for alcoholics.
INNOVATIVE INSTRUCTION
Additional Discussion Topics
2. Combining stimulus generalization, stimulus discrimination, extinction, and spontaneous recovery.
4. Behavior modification: How should you best modify behaviors? Ask students how their parents reinforced and punished them. Which actions were most effective? Which were least effective? Skinner emphasized that reinforcement is a much more effective way of modifying behavior than punishment. Specifically, using reinforcement to increase desirable behaviors works better than using punishment in an attempt to decrease undesirable behaviors. As another example, ask students to honestly report whether they have ever driven drunk. Then ask whether they were ever caught in the act. What can the government do to curb drunk driving? Should it punish offenders with jail sentences, major fines, and the like, or should it reward people each time they drive sober?
5. Relating classical conditioning concepts to operant conditioning principles: Have students discuss how concepts such as stimulus generalization, stimulus discrimination, extinction, and spontaneous recovery discussed with classical conditioning can be applied to operant conditioning.
Activities
1. Have students buy a copy of Sniffy (the virtual rat) or, if you do not want to add to their expenses, load the program onto your in-class computer and work through the different classical conditioning and operant conditioning principles discussed in class (Alloway, Wilson, & Graham, 2005). Students very much enjoy the interactive process, and the hands-on experience helps clear up the confusion these concepts often create.
2. Students will find it difficult to differentiate the various types of punishment and reinforcement, and especially to distinguish negative reinforcement from punishment in general. You may wish to use CPS clicker questions to gauge their understanding of these distinctions before moving forward.
3. Make an additional connection between this chapter and Chapter 2 by asking students how the Skinner box differs from Thorndike’s puzzle box. Students may not understand the fundamental difference here. Review the concepts of independent and dependent variables. Then remind them that Thorndike measured how long it took cats to escape, whereas Skinner measured how many times an animal performed an action.
4. Give students a homework assignment of watching television. Have them make note of different types of aggression they see in the course of one evening (you may wish to differentiate physical aggression versus relational aggression). Talk to students in the next class meeting about their observations. They will likely be surprised by just how much aggression they saw. Ask them how this might influence children (you can also talk about cartoon violence here).
Suggested Films
1. Bee waggle dance: http://www.youtube.com/watch?v=-7ijI-g4jHg
2. Fly Away Home (1996) is a good example of imprinting. It is the story of a family of orphaned goslings who have gotten lost and imprint onto a father and daughter who ultimately help them.
3. Alcohol Addiction, In-Psych Discovery Channel Videos (http://highered.mcgraw-hill.com/sites/dl/free/0073382760/558381/AlcoholAddiction.mpg)
4. Phobias: Living in Terror, In-Psych Discovery Channel Videos (http://highered.mcgraw-hill.com/sites/dl/free/0073382760/558381/Phobias_LivinginTerror.mpg)
5. Traffic (2000) is a good example of social learning and operant conditioning, especially as it relates to the drug addiction sections of this text (see “Making Connections: Why Do People Smoke?” and the “Breaking New Ground” section on treating alcoholism). The movie intertwines four separate story lines, but we recommend you focus on that of the conservative politician recently appointed as the nation’s drug czar.
6. Jackass: The Movie (2002). Choose any segment from this film and discuss its implications for young children who idolize these types of behaviors. You can include a discussion of the evolutionary and social learning issues at play here.
Suggested Websites
1. Differentiating classical and operant conditioning worksheet: http://www.ar.cc.mn.us/biederman/courses/p1110/conditioning2.htm
2. Using classical and operant conditioning (NOTE: This is a site that provides you with scenarios and solutions. You may not want to assign it to students, though, since the answers are posted): http://www.utexas.edu/courses/svinicki/ald320/CCOC.html.
3. Classical conditioning handout: http://flightline.highline.edu/sfrantz/ClassicalConditioning/classical%20conditioning%20examples%20worksheet.htm
4. Operant conditioning worksheet: http://core.ecu.edu/psyc/ironsmithe/Developmental/operant.htm
5. Overview of operant conditioning: http://chiron.valdosta.edu/whuitt/col/behsys/operant.html
6. Overview of social learning: http://teachnet.edb.utexas.edu/~lynda_abbott/Social.html
Suggested Readings
Alloway, T., Wilson, G., & Graham, J. (2005). Sniffy: The virtual rat.
Bandura, A., Ross, D., & Ross, S. A. (1963). Vicarious reinforcement and imitative learning. Journal of Abnormal and Social Psychology, 67, 601-608.
Bushman, B. J., & Anderson, C. A. (2001). Media violence and the American public: Scientific facts versus media misinformation. American Psychologist, 56, 477-489.
Dinn, W. M., Aycicegi, A., & Harris, C. L. (2004). Cigarette smoking in a student sample: Neurocognitive and clinical correlates. Addictive Behaviors, 29, 107-126.
Garcia, J., Kimeldorf, D. J., & Koelling, R. A. (1955). A conditioned aversion towards saccharin resulting from exposure to gamma radiation. Science, 122, 157-159.
Garcia, J., & Koelling, R. A. (1966). Relation of cue to consequence in avoidance learning. Psychonomic Science, 4, 123-124.
Jones, M. C. (1924). A laboratory study of fear: The case of Peter. Pedagogical Seminary, 31, 308-315.
Jones, M. C. (1974). Albert, Peter, and John B. Watson. American Psychologist, 29, 581-583.
Meltzoff, A. N., & Moore, M. K. (1983). Newborn infants imitate adult facial gestures. Child Development, 54, 702-709.
Pavlov, I. P. (1906). The scientific investigation of the psychical faculties or processes in the higher animals. Science, 24, 613-619.
Seligman, M. E. P. (1970). On the generality of the laws of learning. Psychological Review, 77, 406-418.
Skinner, B. F. (1959). A case history in scientific method. In S. Koch (Ed.), Psychology: A study of a science (Vol. 2, pp. 359-379).
Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3, 1-14.