Animal Behavior for Shelter Veterinarians and Staff. Group of authors
other behaviors get associated with the consequence instead.
The problem of timing is a common one with pet owners. The following scenario might be familiar: many dog owners come home to find that their dog has rummaged through the trash. In an attempt to punish the trash‐rummaging behavior, the owner scolds the dog, perhaps by yelling or confining the dog to a crate. The problem, though, is that the dog likely rummaged through the trash hours before the owner came home. So even though the dog was peacefully chewing on its dog bone upon the owner’s return, it experienced an aversive consequence. As a result, the scolding was associated with the appropriate behavior rather than with the trash‐rummaging behavior the owner intended to punish. Timing, or more specifically, immediacy, is crucial for the development of a behavior‐consequence association.
The second major factor that determines the effectiveness of a reinforcer or punisher in establishing a new behavior or eliminating an unwanted one is how often the behavior is followed by the consequence. Formally, how often a consequence follows a behavior is called a schedule. If a consequence follows every instance of the behavior, the consequence is on a continuous schedule. In contrast, if a consequence does not follow every occurrence of the behavior, the consequence is on an intermittent schedule. For a strong association between a behavior and a consequence to develop, the consequence needs to follow the behavior every time it occurs. This is especially true when attempting to teach a new behavior with reinforcement or when attempting to reduce an unwanted behavior with punishment (Zimmerman and Ferster 1963).
Schedules of consequence delivery are usually referred to as reinforcement schedules, though they are relevant to punishment as well. Schedules of reinforcement can differ in two ways. First, they can differ based on whether the reinforcer is delivered after a certain number of responses or after some amount of time has passed. In ratio schedules, reinforcement is delivered following a particular number of responses. Interval schedules deliver reinforcement for the first response made after some amount of time has passed. Second, the requirement can be fixed or variable, yielding four basic schedules: fixed ratio, variable ratio, fixed interval, and variable interval (see Table 3.2).
In fixed schedules, the number of responses needed to obtain reinforcement, or the amount of time that needs to pass, is the same every time. With fixed ratio schedules, the number of responses required for reinforcement to be delivered stays the same after each delivery. The number of responses can be 1, 10, or more; regardless, the same number of responses is required for reinforcement to occur. For example, in scent detection, a dog might not be reinforced with the target scent until the 10th bag it sniffs. With fixed interval schedules, the amount of time that must pass before a response is reinforced is the same across deliveries. Whether the interval is one minute or one hour, the same amount of time must pass before a response is reinforced. For example, a dog begging at the table will not be reinforced for the begging behavior until the owner is done with dinner and gives the dog a handout.
In variable schedules, the number of responses or the interval duration required for reinforcement changes around some average. A variable ratio schedule requires a different number of responses each time reinforcement occurs. That is, the number of responses can change from one reinforcement to the next (e.g., 5 responses may occur prior to one reinforcement and 10 prior to the next, with the number of responses per reinforcement averaging, say, 7). Similarly, with a variable interval schedule, the amount of time between reinforcements changes. For instance, on a variable interval schedule of five seconds, reinforcement might be delivered when the animal responds after two seconds have passed on one occasion and not until nine seconds have passed on the next. Box 3.2 explores some examples of variable schedule reinforcement in the shelter.
Table 3.2 Reinforcement schedules.
| Reinforcement schedule | Definition | Example |
|---|---|---|
| Fixed interval | Reinforcement is delivered for the first response after a fixed interval of time | Letting animals out in the play yard: every morning at 9 a.m. the animal caregiver opens the enclosure door, but the animal’s behavior of checking the door to go outside isn’t reinforced until it checks the door after 9 a.m. |
| Variable interval | Response is reinforced after an interval of time that varies but centers around some average amount of time | Animal feedings: the time of feeding an animal may vary from day to day, but on average a caregiver provides food every eight hours. Therefore, the animal’s behavior of checking the bowl is not reinforced until, on average, eight hours have passed. |
| Fixed ratio | Response is reinforced only after a specified number of responses | Multiple repetitions: a trainer wants an animal to do multiple repetitions of the same behavior, so the trainer delivers reinforcement after every two correct responses. |
| Variable ratio | Response is reinforced after a number of responses that varies around some average | Opening the door: an animal might paw at the door several times to be let through. The owner lets the animal in after the animal paws, on average, five times. |
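The contingencies in Table 3.2 can be illustrated with a short simulation. The Python sketch below is not from the chapter; the function name, the `state` bookkeeping, and the choice to draw variable requirements from a normal distribution around the average are all illustrative assumptions.

```python
import random

def make_schedule(kind, value):
    """Return a function that decides whether a response earns reinforcement.

    kind  -- 'FR', 'VR', 'FI', or 'VI'
    value -- responses per reinforcer (ratio) or seconds (interval)
    """
    state = {"responses": 0, "last_time": 0.0, "target": value}

    def respond(now=0.0):
        if kind in ("FR", "VR"):
            state["responses"] += 1
            if state["responses"] >= state["target"]:
                state["responses"] = 0
                # Variable ratio: draw a new requirement around the average.
                if kind == "VR":
                    state["target"] = max(1, round(random.gauss(value, value / 3)))
                return True  # reinforcer delivered
            return False
        else:  # interval schedules: only the first response after the interval pays off
            if now - state["last_time"] >= state["target"]:
                state["last_time"] = now
                # Variable interval: draw a new interval around the average.
                if kind == "VI":
                    state["target"] = max(0.1, random.gauss(value, value / 3))
                return True
            return False

    return respond
```

On an FR 2 schedule, for example, every second response is reinforced, while on an FI 5 schedule a response at second 1 goes unreinforced but a response at second 6 pays off.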
Though intermittent schedules don’t work as well as continuous reinforcement for establishing a new behavior, they work very well for maintaining an already established behavior (Jenkins and Stanley 1950). Typically, after a dog is trained to sit, trainers reduce the number of reinforcers she receives for sitting, gradually transitioning from a continuous schedule of reinforcement to an intermittent one. As long as the dog receives a treat once in a while, she reliably sits on cue. Changing a continuous schedule of reinforcement to an intermittent one is often called “schedule thinning.” This procedure benefits trainers because it not only reduces the number of reinforcers needed to maintain the behavior but also leads the animal to perform consistently. Intermittent schedules result in unpredictable deliveries of reinforcers that essentially turn the animal into a devoted “gambler”: without knowing when a response will be reinforced, the animal performs the behavior consistently and reliably! Based on laboratory research, once a behavior is maintained intermittently, it can be very hard to eliminate (Harper and McLean 1992).
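Schedule thinning can be pictured as a gradually leaner ratio requirement. This is a minimal sketch, assuming (purely for illustration; the specific ratios and step size are invented, not from the chapter) that the trainer raises a fixed-ratio requirement one step at a time:

```python
def thin_schedule(start_ratio=1, target_ratio=5, step=1):
    """Yield a gradually leaner fixed-ratio requirement.

    Starts at continuous reinforcement (FR 1) and raises the number of
    correct responses required per treat until the target is reached.
    """
    ratio = start_ratio
    while True:
        yield ratio
        ratio = min(target_ratio, ratio + step)

# A training plan might consume the next requirement after each block
# of successful trials:
thinner = thin_schedule()
requirements = [next(thinner) for _ in range(6)]
# requirements == [1, 2, 3, 4, 5, 5]
```

The generator caps at the target ratio, mirroring the idea that thinning stops once the behavior is maintained on a lean but stable schedule.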
The effects of intermittent reinforcement are commonly found in the shelter. Food‐dispensing toys are often provided to enrich the environment, and caregivers might vary the contents, filling the toy with food on some occasions and with scent on others. A dog that prefers the food enrichment to the scent enrichment soon finds that whether the toy contains food is a mystery! The effect of the intermittent presence of food is evident in the dog’s behavior: the dog is likely to check the toy every time it is placed into its enclosure. The behavior of checking the toy is on an intermittent schedule of reinforcement, leading the behavior to occur reliably whenever the toy is present (even though the food reinforcer occurs only sometimes).
Box 3.2 Variable Schedule Reinforcement in the Shelter
Training Dogs to Sit Using Variable Ratio Reinforcement
An animal trainer is training dogs in the shelter to sit when someone walks by their kennel. The trainer decides to deliver food on a variable ratio 5 (written as VR 5). This means that on average, every fifth response will receive a food reward when someone walks by. The dog might receive a piece of food on the first response (sitting when the first person walks by), sixth response, second response, eighth response, fifth response,