Similar Articles
Found 20 similar articles (search time: 328 ms)
1.
The effects of changeover delays of fixed or variable duration on concurrent variable-interval performance in pigeons were investigated in a series of three experiments. Experiment 1 compared the effects of a fixed, variable, or variable signaled changeover delay on interchangeover times and responding during and after the changeover delay. The duration of the changeover delays was systematically varied in Experiment 2, and the relative reinforcement frequencies were manipulated in Experiment 3. Interchangeover times were found to be shorter when changeover delays of variable duration were compared with those of fixed duration. Changeover delays of fixed duration produced higher response rates during the changeover delay than after the changeover delay had elapsed; changeover delays of variable duration produced such differences to a lesser extent. It was concluded that the changeover delay in concurrent variable-interval schedules of reinforcement functionally acts as a delay period to the next opportunity for reinforcement, possibly serving as a conditioned reinforcer for the behavior preceding it (the interchangeover time) and as a discriminative stimulus for the behavior in its presence (response rates during the delay).

2.
Pigeons responded on a two-key concurrent chains choice procedure with the same level of percentage reinforcement on each key. During the initial links, a choice response on either key occasionally produced either a conditioned reinforcer (associated on one key with a 15-sec and on the other key with a 30-sec interreinforcement interval) or an extinction stimulus. In Part 1, the initial links were equal. With successive decreases in the probability of a reinforcer, choice shifted from preference for the 15-sec terminal link toward indifference. In Part 2, the initial links were unequal and were arranged so that the shorter initial link preceded the 30-sec terminal link. At a high probability of a reinforcer, the pigeons again preferred the 15-sec terminal link. However, at a low probability, the pigeons reversed and preferred the alternate key. It was concluded that the conditioned reinforcers tended to become functionally equivalent at a low probability of a reinforcer, despite the nominally different interreinforcement intervals, with the result that choice was then modulated by the relative size of the initial links. The data are inconsistent with the view that choice and the strength of conditioned reinforcers are isomorphic with the reduction in delay to reward correlated with terminal-link stimuli.
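The "reduction in delay to reward" view that this abstract argues against is usually formalized as Fantino's delay-reduction theory, in which the value of a terminal-link stimulus is proportional to T − t, the reduction in expected time to reinforcement it signals. A minimal sketch of the resulting choice prediction (the function name and the value of T are illustrative assumptions; the 15- and 30-sec terminal-link durations are taken from the abstract):

```python
def delay_reduction_choice(T: float, t1: float, t2: float) -> float:
    """Predicted choice proportion for alternative 1 under delay-reduction theory:
    B1 / (B1 + B2) = (T - t1) / ((T - t1) + (T - t2)),
    where T is the average overall time to reinforcement and t1, t2 are the
    terminal-link durations of the two alternatives."""
    return (T - t1) / ((T - t1) + (T - t2))

# Illustrative: T = 60 sec with 15-sec vs. 30-sec terminal links.
# The theory predicts a preference (> .5) for the 15-sec link.
print(delay_reduction_choice(60.0, 15.0, 30.0))
```

The abstract's point is that at low reinforcer probabilities choice departed from any fixed prediction of this form, tracking the initial links instead.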

3.
How do animals choose between opportunities to run of different durations? Are longer durations preferred over shorter durations because they permit a greater number of revolutions? Are shorter durations preferred because they engender higher rates of running? Will longer durations be chosen because running is less constrained? The present study reports on three experiments that attempted to address these questions. In the first experiment, five male Wistar rats chose between 10-sec and 50-sec opportunities to run on modified concurrent variable-interval (VI) schedules. Across conditions, the durations associated with the alternatives were reversed. Response, time, and reinforcer proportions did not vary from indifference. In a second experiment, eight female Long-Evans rats chose between opportunities to run of equal (30 sec) and unequal durations (10 sec and 50 sec) on concurrent variable-ratio (VR) schedules. As in Experiment 1, between presentations of equal duration conditions, 10-sec and 50-sec durations were reversed. Results showed that response, time, and reinforcer proportions on an alternative did not vary with reinforcer duration. In a third experiment, using concurrent VR schedules, durations were systematically varied to decrease the shorter duration toward 0 sec. As the shorter duration decreased, response, time, and reinforcer proportions shifted toward the longer duration. In summary, differences in durations of opportunities to run did not affect choice behavior in a manner consistent with the assumption that a longer reinforcer is a larger reinforcer.

4.
The hyperbolic-decay model is a mathematical expression of the relation between delay and reinforcer value. The model has been used to predict choices in discrete-trial experiments on delay-amount tradeoffs, on preference for variable over fixed delays, and on probabilistic reinforcement. Experiments manipulating the presence or absence of conditioned reinforcers on trials that end without primary reinforcement have provided evidence that the hyperbolic-decay model actually predicts the strength of conditioned reinforcers rather than the strength of delayed primary reinforcers. The model states that the strength of a conditioned reinforcer is inversely related to the time spent in its presence before a primary reinforcer is delivered. A possible way to integrate the model with Grace’s (1994) contextual-choice model for concurrent-chain schedules is presented. Also discussed are unresolved difficulties in determining exactly when a stimulus will or will not serve as a conditioned reinforcer.  相似文献   
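The hyperbolic-decay relation the abstract refers to is commonly written as V = A / (1 + KD), where V is reinforcer value, A is amount, D is delay, and K is a discounting parameter (Mazur's formulation). A minimal sketch of the delay-amount tradeoff it predicts (the function name, parameter values, and K = 0.2 are illustrative assumptions, not values from the abstract):

```python
def hyperbolic_value(amount: float, delay: float, k: float = 0.2) -> float:
    """Hyperbolic-decay value of a delayed reinforcer: V = A / (1 + K * D)."""
    return amount / (1.0 + k * delay)

# Delay-amount tradeoff: a larger-later vs. a smaller-sooner reinforcer.
larger_later = hyperbolic_value(amount=6.0, delay=10.0)
smaller_sooner = hyperbolic_value(amount=2.0, delay=0.0)

# With these (hypothetical) parameters the two options come out equally
# valued, the indifference point used in titration procedures.
print(larger_later, smaller_sooner)
```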

5.
Blocking was investigated in a free-operant procedure by presenting a response-contingent signal prior to reinforcer delivery. At issue was the way in which blocking effects previously reported with this procedure are related to conditioned reinforcement effects, also previously found with similar procedures. Signal presentation decreased response rate when the delay of reinforcement was 0 or 0.5 sec, but the signal increased response rate when the delay of reinforcement was increased to 3 sec. Thus, which effect (blocking or conditioned reinforcement) occurred depended critically on the response-reinforcer interval.

6.
Stimuli associated with primary reinforcement for instrumental behavior are widely believed to acquire the capacity to function as conditioned reinforcers via Pavlovian conditioning. Some Pavlovian conditioning studies suggest that animals learn the important temporal relations between stimuli and integrate such temporal information over separate experiences to form a temporal map. The present experiment examined whether Pavlovian conditioning can establish a positive instrumental conditioned reinforcer through such temporal integration. Two groups of rats received either delay or trace appetitive conditioning in which a neutral stimulus predicted response-independent food deliveries (CS1→US). Both groups then experienced one session of backward second-order conditioning of the training CS1 and a novel CS2 (CS1–CS2 pairing). Finally, the ability of CS2 to function as a conditioned reinforcer for a new instrumental response (leverpressing) was assessed. Consistent with the previous demonstrations of temporal integration in fear conditioning, a CS2 previously trained in a trace-conditioning protocol served as a better instrumental conditioned reinforcer after backward second-order conditioning than did a CS2 previously trained in a delay protocol. These results suggest that an instrumental conditioned reinforcer can be established via temporal integration and raise challenges for existing quantitative accounts of instrumental conditioned reinforcement.

7.
Pigeons pecked keys on concurrent-chains schedules that provided a variable interval 30-sec schedule in the initial link. One terminal link provided reinforcers in a fixed manner; the other provided reinforcers in a variable manner with the same arithmetic mean as the fixed alternative. In Experiment 1, the terminal links provided fixed and variable interval schedules. In Experiment 2, the terminal links provided reinforcers after a fixed or a variable delay following the response that produced them. In Experiment 3, the terminal links provided reinforcers that were fixed or variable in size. Rate of reinforcement was varied by changing the scheduled interreinforcer interval in the terminal link from 5 to 225 sec. The subjects usually preferred the variable option in Experiments 1 and 2 but differed in preference in Experiment 3. The preference for variability was usually stronger for lower (longer terminal links) than for higher (shorter terminal links) rates of reinforcement. Preference did not change systematically with time in the session. Some aspects of these results are inconsistent with explanations for the preference for variability in terms of scaling factors, scalar expectancy theory, risk-sensitive models of optimal foraging theory, and habituation to the reinforcer. Initial-link response rates also changed within sessions when the schedules provided high, but not low, rates of reinforcement. Within-session changes in responding were similar for the two initial links. These similarities imply that habituation to the reinforcer is represented differently in theories of choice than are other variables related to reinforcement.

8.
Two experiments tested two cynomolgus monkeys’ self-control: choice of a longer, more delayed reinforcer over a shorter, less delayed reinforcer. In Experiment 1, subjects exhibited significant self-control in a procedure in which reinforcer amounts and delays were held constant throughout a condition. In Experiment 2, subjects exhibited significantly greater sensitivity to variation in reinforcer amount than to variation in reinforcer delay in a procedure in which the reinforcer delay associated with the self-control alternative was adjusted until each macaque was indifferent between the two alternatives. Both experiments indicated that, in laboratory paradigms in which humans show self-control and pigeons and rats show impulsiveness, macaques show self-control. These results are inconsistent with the hypothesis that species differences in self-control are a function of language ability or of specific types of prior training. The results are consistent with the hypothesis that species differences in self-control are related to the ratio of brain size to body weight (a possible indicator of general cognitive ability) or to shared phylogeny.

9.
Clement, Feltus, Kaiser, and Zentall (2000) found that when pigeons have to work to obtain a discriminative stimulus that is followed by reinforcement, they prefer a discriminative stimulus that requires greater effort over one that requires less effort. The authors suggested that such a preference results from the greater change in hedonic value that occurs between the more aversive event and the onset of the stimulus that signals reinforcement, a contrast effect. It was hypothesized that any stimulus that follows a relatively more aversive event would be preferred over a stimulus that follows a relatively less aversive event. In the present experiment, the authors tested the counterintuitive prediction of that theory, that pigeons should prefer a discriminative stimulus that follows the absence of reinforcement over a discriminative stimulus that follows reinforcement. Results supported the theory.

10.
Task difficulty in delayed matching-to-sample (DMTS) tasks is increased by increasing the length of a retention interval. When tasks become more difficult, choice behavior becomes more susceptible to bias produced by unequal reinforcer ratios. Delaying reinforcement from choice behavior also increases both task difficulty and the biasing effect of unequal reinforcer probability. Six pigeons completed nine DMTS conditions with retention intervals of 0, 2, 4, 6, and 8 sec, in which reinforcer delays of 0, 2, and 4 sec were combined with ratios of reinforcer probabilities of .5/.5, .2/.8, and .8/.2 for correct red and green responses. Discriminability (log d) decreased with both increasing retention interval duration and increasing reinforcer delay. Sensitivity to reinforcement, the tendency for ratios of choice responses to follow unequal reinforcer probabilities, also increased as a function of both increasing retention interval and increasing reinforcer delay. The result is consistent with the view that remembering in DMTS tasks is a discriminated operant in which increasing task difficulty increases sensitivity to reinforcement.
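The discriminability measure log d is conventionally computed from the four cell frequencies of a matching-to-sample session (the Davison–Tustin analysis): half the log of the ratio of correct to incorrect response products. A minimal sketch (the function name and the response counts are illustrative assumptions, not data from this study):

```python
import math

def log_d(correct_red: int, error_red: int,
          correct_green: int, error_green: int) -> float:
    """Davison-Tustin point estimate of discriminability:
    log d = 0.5 * log10((correct_red * correct_green) / (error_red * error_green)).
    Zero when responding is at chance; larger values mean better discrimination."""
    return 0.5 * math.log10(
        (correct_red * correct_green) / (error_red * error_green)
    )

# Illustrative counts for red- and green-sample trials (hypothetical):
print(log_d(90, 10, 80, 20))   # good discrimination
print(log_d(50, 50, 50, 50))   # chance performance, log d = 0
```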

11.
In Experiment 1, the form of keypecks produced in an autoshaping procedure with food or water reinforcers was compared with that of eating and drinking responses. Because the responses involve a number of different effector systems, several elements of response form were measured, including peck force and duration, gape, and eye closure. Gape was the only measure to reliably distinguish both between the ingestive responses and between conditioned keypecks reinforced with food or water. With either reinforcer, keypecks had greater force than did ingestive behaviors. In Experiment 2, a transition between two forms of keypeck was produced by manipulating deprivation and reinforcer conditions. Some measures appeared to vary in a dichotomous manner between two discrete response forms, whereas gape showed a gradual and continuous change involving the production of intermediate forms of the response. It was concluded that the control of conditioned response form involves the construction of the response from movements produced by several effector systems, each with potentially different sources of control.

12.
Control of pigeons’ keypecking by a stimulus-reinforcer contingency was investigated in the context of a four-component multiple schedule. In each of three experiments, pigeons were exposed to a schedule consisting of two two-component sequences. Discriminative stimuli identifying each sequence were present only in Component 1, which was 4, 6, or 8 sec in duration, while reinforcers could be earned only in Component 2 (30 sec in duration). Control by a stimulus-reinforcer contingency was sought during Component 1 by arranging a differential relation between Component 1 cues and schedule of reinforcement in Component 2. In Experiment 1, rate of keypecking during Component 1 varied with the presence and absence of a stimulus-reinforcer contingency. When a contingency was introduced, rate of keypecking increased during the Component 1 cue associated with the availability of reinforcement in Component 2. In Experiment 2, the stimulus-reinforcer contingency was manipulated parametrically by varying the correlation between Component 1 cues and Component 2 schedules of reinforcement. Responding in Component 1 varied as a function of strength of the stimulus-reinforcer contingency. The relatively high rates of Component 1 responding observed in Experiments 1 and 2 pose difficulties for conceptions of stimulus-reinforcer control based on probability of reinforcement. In these two experiments, the stimulus-associated probabilities of reinforcement in Component 1 were invariant at zero. An alternate dimension of stimulus-reinforcer control was explored in Experiment 3, in which Component 1 cues were differentially associated with delay to reinforcement in Component 2, while probability of reinforcement was held constant across components. When the stimulus-reinforcer contingency was in force, rate of responding in Component 1 varied inversely with delay to reinforcement in Component 2. 
In a quantitative analysis of data from Experiments 2 and 3, relative rate of responding during Component 1 was strongly correlated with two measures of relative delay to reinforcement.

13.
Six water-deprived pigeons were exposed to a fixed-time 90-sec water schedule with and without a conspecific target available. Target contacts and the pigeon’s location in the test chamber during the interreinforcement interval were recorded, and the results were compared with those previously obtained with food reinforcement. Prior to target introduction, water-reinforced birds spent more total time in the front near the reinforcer dispenser and less in the rear than food-reinforced birds and, unlike food-reinforced birds, exhibited postreinforcement drinking-like behaviors near the reinforcer dispenser before moving away from that area. With the target available, the level, topography, and duration of target-directed biting pecks were comparable for food- and water-reinforced pigeons. In contrast, the temporal organization of target pecks reflected the different temporal and spatial organizations of behavior prior to target introduction. For both food- and water-reinforced birds, the time between reinforcers at which a bird was spatially situated halfway between the front and rear of the chamber prior to target presentation was positively correlated with the time at which maximum target contact subsequently occurred.

14.
Delay between choice and receipt of reinforcement (prereinforcer delay) and delay between receipt of reinforcement and the next opportunity to choose (postreinforcer delay) were varied in a discrete-trials choice paradigm using four pigeons. The pigeons consistently chose the reinforcer with the smaller prereinforcer delay. Variations in postreinforcer delay did not affect choice unless prereinforcer delays were equal. The results support previous findings that prereinforcer delays contribute disproportionately to the effects of rate of reinforcer access on choice in pigeons.

15.
16.
Temporal parameters were varied in two different observing-response procedures. In Experiment I, concurrent variable-interval chain schedules were employed. Responding on one key led to either a stimulus correlated with reinforcement or a stimulus correlated with time-out. Responding on the other key led to a stimulus which ended in either reinforcement or time-out. The duration of the delay to reinforcement or time-out was varied, the delays for all three stimuli always remaining equal in a given phase. It was found that the longer the delay, the greater the preference for the observing response. In Experiment II, a procedure was employed in which birds pecked during a “trial” to produce stimuli correlated with reinforcement or time-out at the end of the trial. The duration of the trial ending in time-out was varied while the positive trial duration remained constant. It was found that the longer the duration of the negative trial, the greater the strength of observing responses. The results were interpreted as supporting the hypothesis that the value of a positive stimulus is a function of time spent in stimuli correlated with nonreinforcement.

17.
A dissociation between the effects of reinforcer type and response strength on the force of the pigeon’s keypeck response was shown in three experiments. In Experiment 1, pigeons were trained to peck two conditioned stimuli, one paired with water and another paired with grain. The pigeons made more forceful pecks for grain than for water and also showed a tendency, albeit an unreliable one, to respond on a higher percentage of food trials than water trials. In Experiment 2, the pigeons from Experiment 1 were satiated with either food or water and were then presented with the two conditioned stimuli in an extinction test. It was found that, regardless of the drive state, the pigeons made more forceful pecks to the stimulus that predicted food than to the stimulus that predicted water. In the thirsty group, however, this difference in force was not accompanied by a difference in the percentage of trials with a response. In Experiment 3, pigeons trained with a single reinforcer pecked more often on instrumentally reinforced trials than on Pavlovian conditioning trials, but there was no difference in the force of the pecks. Taken together, these results imply that differences in response strength cannot account for the difference between the force of food- and water-reinforced pecks. Instead, stimulus-substitution theory may provide the best account of the topography of the two types of pecks.

18.
Two experiments examined the effects of preexposure to a stimulus on the subsequent acquisition of conditioned suppression by rats. Variations in the level of suppression within conditioning trials were noted so that inhibition of delay (taken here to mean less suppression at the beginning of a trial than at the end) could be detected. Inhibition of delay was observed both during the acquisition of suppression and (in Experiment 1) when suppression began to wane with continued postasymptotic training. Preexposure to the to-be-conditioned stimulus retarded acquisition of suppression and slowed the appearance of inhibition of delay both in acquisition and (in Experiment 1) in postasymptotic performance. Experiment 2 demonstrated that inhibition of delay was attenuated during conditioning that followed preexposure in which the stimulus was paired with a weak reinforcer. These results provide no support for the suggestion that preexposure to a stimulus retards later conditioning because it allows the subject to acquire information about stimulus duration that in turn fosters the development of inhibition of delay. Rather, they are compatible with the suggestion that preexposure causes the stimulus to lose associability.

19.
Five rats served as subjects in an experiment that examined the effect of increasing response effort on self-control (choice of a larger, more delayed reinforcer over a smaller, less delayed reinforcer). The rats made significantly more self-control choices as the force required to respond on each lever increased from 0.1 to 0.8 N. As the force required to press the levers increased to 1 N and then began to decrease, some of the rats stopped responding. For those rats that continued to respond, self-control tended to decrease. The results suggest that increasing the required response force can increase self-control choices, as long as the required response force is low enough that some responding occurs.

20.
Previous research showed that sucrose and wheel-running reinforcement of leverpressing generate different response rate asymptotes. To investigate the basis of this difference, the present study assessed the role of inhibitory after-effects and excitatory stimulus effects on measures of responding in rats exposed to fixed-interval schedules that randomly produced either sucrose or wheel-running reinforcers. Different discriminative stimuli were associated with each reinforcer type. Inhibitory after-effects and excitatory stimulus effects were assessed by the pattern of postreinforcement pauses and local response rates across the four different combinations of previous and upcoming reinforcer types: wheel-wheel, wheel-sucrose, sucrose-wheel, and sucrose-sucrose. Results showed that, regardless of the prior type of reinforcer, response rates were higher and pauses were shorter in the presence of a stimulus signaling sucrose reinforcement. This suggests that differences in response rate asymptotes generated by these qualitatively different reinforcers may have more to do with differences in excitatory stimulus effects than with inhibitory after-effects.
