Similar Literature

20 similar documents retrieved.
1.
When pigeons are trained on a delayed conditional discrimination with presence versus absence samples and tested with delays, a bias to choose the comparison associated with the absence sample is observed with increasing delay. Additionally, when the samples consist of food versus no food, this trial-type performance difference is reversed on short-delay trials: a bias to choose the comparison associated with the presence sample develops with delay testing. This reversal in comparison bias at short delays has been attributed to a preference produced by backward associations between the hedonic samples and the nonhedonic choice stimuli. In the present experiment, we tested an alternative hypothesis, that the short-delay comparison bias is produced by proactive interference—in particular, from reinforcement obtained on the previous trial—by including a group trained with reinforcement on only half of the trials with a correct response. According to the proactive interference account, this group should have shown a smaller short-delay comparison bias than would the typical 100% reinforcement group. Instead, consistent with a backward-association interpretation, the magnitude of the short-delay comparison bias shown by the 50% group was significantly greater than that shown by the 100% group.

2.
Control of pigeons’ keypecking by a stimulus-reinforcer contingency was investigated in the context of a four-component multiple schedule. In each of three experiments, pigeons were exposed to a schedule consisting of two two-component sequences. Discriminative stimuli identifying each sequence were present only in Component 1, which was 4, 6, or 8 sec in duration, while reinforcers could be earned only in Component 2 (30 sec in duration). Control by a stimulus-reinforcer contingency was sought during Component 1 by arranging a differential relation between Component 1 cues and schedule of reinforcement in Component 2. In Experiment 1, rate of keypecking during Component 1 varied with the presence and absence of a stimulus-reinforcer contingency. When a contingency was introduced, rate of keypecking increased during the Component 1 cue associated with the availability of reinforcement in Component 2. In Experiment 2, the stimulus-reinforcer contingency was manipulated parametrically by varying the correlation between Component 1 cues and Component 2 schedules of reinforcement. Responding in Component 1 varied as a function of strength of the stimulus-reinforcer contingency. The relatively high rates of Component 1 responding observed in Experiments 1 and 2 pose difficulties for conceptions of stimulus-reinforcer control based on probability of reinforcement. In these two experiments, the stimulus-associated probabilities of reinforcement in Component 1 were invariant at zero. An alternate dimension of stimulus-reinforcer control was explored in Experiment 3, in which Component 1 cues were differentially associated with delay to reinforcement in Component 2, while probability of reinforcement was held constant across components. When the stimulus-reinforcer contingency was in force, rate of responding in Component 1 varied inversely with delay to reinforcement in Component 2. 
In a quantitative analysis of data from Experiments 2 and 3, relative rate of responding during Component 1 was strongly correlated with two measures of relative delay to reinforcement.

3.
Blocking was investigated in a free-operant procedure by presenting a response-contingent signal prior to reinforcer delivery. At issue was the way in which blocking effects previously reported with this procedure are related to conditioned reinforcement effects, also previously found with similar procedures. Signal presentation decreased response rate when delay of reinforcement was 0 or .5 sec, but the signal increased response rate when the delay of reinforcement was increased to 3 sec. Thus, which effect (blocking or conditioned reinforcement) occurred depended critically on the response-reinforcer interval.

4.
Task difficulty in delayed matching-to-sample tasks (DMTS) is increased by increasing the length of a retention interval. When tasks become more difficult, choice behavior becomes more susceptible to bias produced by unequal reinforcer ratios. Delaying reinforcement from choice behavior also increases both task difficulty and the biasing effect of unequal reinforcer probability. Six pigeons completed nine DMTS conditions with retention intervals of 0, 2, 4, 6, and 8 sec, in which reinforcer delays of 0, 2, and 4 sec were combined with ratios of reinforcer probabilities of .5/.5, .2/.8, and .8/.2 for correct red and green responses. Discriminability (log d) decreased with both increasing retention interval duration and increasing reinforcer delay. Sensitivity to reinforcement, the tendency for ratios of choice responses to follow unequal reinforcer probabilities, also increased as a function of both increasing retention interval and increasing reinforcer delay. The result is consistent with the view that remembering in DMTS tasks is a discriminated operant in which increasing task difficulty increases sensitivity to reinforcement.
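The discriminability measure named above (log d) can be sketched numerically. The following is a minimal illustration in the style of the Davison–Tustin point estimate commonly used with DMTS data; the function name and the choice counts are hypothetical, not values from this study:

```python
import math

def log_d(b_rr, b_rg, b_gr, b_gg):
    """Point estimate of discriminability:
    log d = 0.5 * log10[(B_rr / B_rg) * (B_gg / B_gr)],
    where B_xy is the count of y-key choices on x-sample trials
    (so b_rr and b_gg are correct choices, b_rg and b_gr are errors)."""
    return 0.5 * math.log10((b_rr / b_rg) * (b_gg / b_gr))

# Hypothetical counts: 90 correct vs. 10 errors after each sample type.
# Higher log d = better discrimination; log d = 0 means chance performance.
print(round(log_d(90, 10, 10, 90), 3))  # → 0.954
```

As the retention interval or reinforcer delay grows and errors increase (say, 60 correct vs. 40 errors), log d falls toward zero, which is the pattern the abstract reports.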

5.
Pigeons pecked keys on concurrent-chains schedules that provided a variable interval 30-sec schedule in the initial link. One terminal link provided reinforcers in a fixed manner; the other provided reinforcers in a variable manner with the same arithmetic mean as the fixed alternative. In Experiment 1, the terminal links provided fixed and variable interval schedules. In Experiment 2, the terminal links provided reinforcers after a fixed or a variable delay following the response that produced them. In Experiment 3, the terminal links provided reinforcers that were fixed or variable in size. Rate of reinforcement was varied by changing the scheduled interreinforcer interval in the terminal link from 5 to 225 sec. The subjects usually preferred the variable option in Experiments 1 and 2 but differed in preference in Experiment 3. The preference for variability was usually stronger for lower (longer terminal links) than for higher (shorter terminal links) rates of reinforcement. Preference did not change systematically with time in the session. Some aspects of these results are inconsistent with explanations for the preference for variability in terms of scaling factors, scalar expectancy theory, risk-sensitive models of optimal foraging theory, and habituation to the reinforcer. Initial-link response rates also changed within sessions when the schedules provided high, but not low, rates of reinforcement. Within-session changes in responding were similar for the two initial links. These similarities imply that habituation to the reinforcer is represented differently in theories of choice than are other variables related to reinforcement.
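One common quantitative intuition for the preference for variable delays (a sketch of the general argument, not the authors' own analysis) is that if reinforcer value scales with immediacy (the reciprocal of delay), a variable schedule with the same arithmetic-mean delay as a fixed one yields a higher average immediacy. The delay values below are illustrative:

```python
def mean_immediacy(delays):
    """Average of reciprocal delays. Models that weight reinforcers by
    immediacy (1/delay) predict preference for variable delays even when
    the arithmetic-mean delay equals that of a fixed alternative."""
    return sum(1.0 / d for d in delays) / len(delays)

fixed = [30.0, 30.0]      # fixed 30-sec delay to reinforcement
variable = [5.0, 55.0]    # variable delays, same arithmetic mean (30 sec)

# The occasional very short delay dominates the average immediacy,
# so the variable option is worth more under this weighting.
print(mean_immediacy(variable) > mean_immediacy(fixed))  # → True
```

This asymmetry follows from Jensen's inequality: 1/delay is convex, so averaging over variable delays inflates value relative to the fixed mean.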

6.
The effects of schedule of reinforcement (partial vs. consistent) and delay of reward (0 to 20 sec) on running in rats were examined in two investigations. The effects of delay depended upon schedule of reinforcement; acquisition speed decreased as delay increased under consistent reinforcement, a common finding, while acquisition speed was independent of delay under partial reinforcement, a new finding. The partial-reinforcement acquisition effect (PRAE) is defined as faster acquisition speed under partial than under consistent reinforcement. Because running speed was independent of delay under partial reinforcement, but decreased as delay increased under consistent reinforcement, the PRAE increased as delay of reinforcement increased.

7.
Pigeons learned a series of reversals of a simultaneous red-green visual discrimination. Delay of reinforcement (0 vs. 2 sec) and intertrial interval (ITI; 4 vs. 40 sec) were varied across blocks of reversals. Learning was faster with 0-sec than with 2-sec delays for both ITI values and faster with 4-sec ITIs than with 40-sec ITIs for both delays. Furthermore, improvement in learning across successive reversals was evident throughout the experiment, even after more than 120 reversals. The potent effects of small differences in reinforcement delay provide evidence for associative accounts and appear to be incompatible with accounts of choice that attempt to encompass the effects of temporal parameters in terms of animals’ timing of temporal intervals.

8.
Delay between choice and receipt of reinforcement (prereinforcer delay) and delay between receipt of reinforcement and the next opportunity to choose (postreinforcer delay) were varied in a discrete-trials choice paradigm using four pigeons. The pigeons consistently chose the reinforcer with the smaller prereinforcer delay. Variations in postreinforcer delay did not affect choice unless prereinforcer delays were equal. The results support previous findings that prereinforcer delays contribute disproportionately to the effects of rate of reinforcer access on choice in pigeons.

9.
Delayed-reward learning in pigeons was examined using a simultaneous red-green visual discrimination task in which the conditions during the delay interval were varied between groups. The nondifferential group received training in which the stimulus present during the 1-min delay was the same following a peck on the correct and incorrect colors. The other three groups received 1-min delay training in which different stimuli occurred in the delay interval following correct and incorrect choices. The differential group received continuous, differential stimuli during the delay. The reinstatement group received the differential stimuli in the 10 sec immediately following the choice and during the last 10 sec of the delay. The reversed-cue group was treated in the same way, except that the 10-sec delay stimulus immediately following an incorrect response was also presented for 10 sec prior to reward on correct choices, and the stimulus following a correct response also occurred 10 sec before nonreward on incorrect choices. Nondifferential birds failed to learn the discrimination, while differential and reinstatement birds learned it readily. The reversed-cue birds learned to choose the incorrect stimulus. Differential and reinstatement birds showed no decrement in performance when the delay was increased to 2 min. These findings suggest that similarity of prereward and postresponse delay stimuli controls choice responding in long-delay learning, a finding compatible with both memorial and conditioned reinforcement interpretations.

10.
The effects of within-session variations in the intertrial interval (ITI) and delay on pigeons’ memory for event duration were studied in delayed symbolic matching-to-sample tasks. Pigeons were trained to peck one color following a long (8 sec) sample and another color following a short (2 sec) sample. In the first three experiments, the baseline conditions included a 10-sec delay (retention interval) and a 45-sec ITI. During testing, the delay was varied from 0 to 20 sec, and the ITI that preceded the trial was varied from 5 to 90 sec. When the ITI and delay were manipulated separately (Experiments 1 and 2), the pigeons displayed a choose-short tendency when the delay was longer than 10 sec or when the ITI was longer than 45 sec, and a choose-long tendency when either the delay or the ITI was shorter than these baseline values. These effects occurred whether the sample was food access or light. When the ITI and delay were manipulated together, the pigeons showed a large choose-long error tendency when the short delay was tested together with a short ITI, and no systematic error tendency when the short delay was tested together with a longer ITI. A very large choose-short error tendency emerged on trials with a long delay and a long ITI; a reduced choose-short tendency was present when the long delay was presented together with a short ITI. In Experiment 4, the baseline conditions were a 0-sec delay and a 45-sec ITI. In this case, variations in the ITI had a smaller and unidirectional effect: the pigeons showed a choose-long error tendency when the ITI was decreased, but no effect of ITI increases. Two hypotheses were proposed and discussed: (1) that pigeons judge sample durations relative to a background time composed of the ITI and delay, and (2) that the delay and ITI effects might arise from a combination of subjective shortening and proactive effects of samples from previous trials.

11.
Four pigeons were exposed to multiple schedules with concurrent variable interval (VI) components and then tested for preference transfer. Half of the pigeons were trained on a multiple concurrent VI 20-sec, VI 40-sec/concurrent VI 40-sec, VI 80-sec schedule. The remaining pigeons were trained on a multiple concurrent VI 80-sec, VI 40-sec/concurrent VI 40-sec, VI 20-sec schedule. After stability criteria for time and response proportions were simultaneously met, four preference transfer tests were conducted with the stimuli associated with the VI 40-sec schedules. During the transfer tests, each pigeon allocated a greater proportion of responses (M = 0.79) and time (M = 0.82) to the stimulus associated with the VI 40-sec schedule that was paired with the VI 80-sec schedule than to the VI 40-sec schedule stimulus paired with the VI 20-sec schedule. Absolute reinforcement rates on the two VI 40-sec schedules were approximately equal and unlikely to account for the observed preference. Nor was the preference consistent with the differences in local reinforcement rates associated with the two stimuli. Instead, the results were interpreted in terms of the differential value that stimuli acquire as a function of previous pairings with alternative schedules of reinforcement.

12.
Pigeons were trained with two concurrent delayed conditional discriminations that involved different distributions of memory intervals, namely 1 and 5 sec in the “short” and 5 and 10 sec in the “long” distribution. Memory for the initial stimulus was much better after 5 sec in the short distribution than in the long one. The memory functions between the intervals within each distribution were essentially flat. These findings were replicated in a second study that involved differential outcomes; this procedure enhanced memory generally within trials so that memory intervals of 1, 9, and 19 sec could be used, with 9 sec serving as the common memory interval. The findings are interpreted on the basis of the subject’s expectation of the likely duration of the upcoming memory interval and the associated delay of reinforcement.

13.
Eight pigeons chose between pairs of different sizes and delays of reinforcement scheduled according to nonindependent concurrent variable-interval variable-interval schedules. The results were best described by the generalized matching law, where the relative effects of amount and delay on preference are independent and multiplicative. Order of presentation of the conditions had a significant effect on preference that was best represented in the model by a modification of the bias parameter.
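The independent, multiplicative combination of amount and delay described above can be sketched with the standard concatenated form of the generalized matching law. The function, parameter values, and numbers below are illustrative assumptions, not fitted values from the study:

```python
def choice_ratio(a1, a2, d1, d2, s_a=1.0, s_d=1.0, bias=1.0):
    """Concatenated generalized matching law for amount (A) and delay (D):
    B1/B2 = bias * (A1/A2)**s_a * (D2/D1)**s_d.
    Each variable contributes its own power-function term, so the effects
    of amount and delay are independent and multiplicative; delays enter
    inversely, since a longer delay reduces preference."""
    return bias * (a1 / a2) ** s_a * (d2 / d1) ** s_d

# Equal amounts; alternative 2's delay is twice alternative 1's delay,
# so matching predicts a 2:1 response ratio favoring alternative 1.
print(choice_ratio(a1=3, a2=3, d1=2.0, d2=4.0))  # → 2.0
```

An order-of-presentation effect like the one the abstract describes would appear in this model as a shift in `bias` away from 1.0 while `s_a` and `s_d` stay fixed.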

14.
Experiment 1 compared the acquisition of initial- and terminal-link responding in concurrent chains. The terminal-link schedules were fixed interval (FI) 10 sec and FI 20 sec, but some presentations were analogous to no-food trials in the peak procedure, lasting 60 sec with no reinforcement delivery. Pigeons completed a series of reversals in which the schedules signaled by the terminal-link stimuli (red and green on the center key) were changed. Acquisition of temporal control of terminal-link responding (as measured by peak location on no-food trials) was more rapid than acquisition of preference in the initial links. Experiment 2 compared acquisition in concurrent chains under the typical procedure, in which the terminal-link schedules are changed, with acquisition under a novel arrangement in which the initial-link key assignments were changed while the terminal-link schedules remained the same. Acquisition of preference was faster in the latter condition, in which the terminal-link stimulus-reinforcer relations were preserved. These experiments provide the first acquisition data that support the view that initial-link preference is determined by the values of the terminal-link stimuli.

15.
A recent theory of timing (Killeen & Fetterman, 1988) suggests that adjunctive behaviors may act as discriminative cues for the passage of time and that the rate of transition between those behaviors is affected by the rate of reinforcement within the experimental context. Is the rate of transition between behaviors correlated with the rate of reinforcement? What is the context in which rate of reinforcement is calibrated? If rate of transition is correlated with reinforcement frequency, does this correlation change with extended training? Four pigeons were trained on multiple fixed-time schedules of reinforcement, with one component always FT 15 sec, the other either FT 15 sec, FT 45 sec, or FT 5 sec. Behavior was coded into one of 12 categories. Response distributions in the constant component shifted when rate of reinforcement was varied in the other component and eventually shifted back toward their original location.

16.
When pigeons are given a choice between an initial-link alternative that results in either a terminal-link stimulus correlated with 100% reinforcement or a stimulus correlated with 0% reinforcement (overall 50% reinforcement) and another initial-link alternative that always results in a terminal-link stimulus correlated with 100% reinforcement, some pigeons show a preference for the initial-link alternative correlated with 50% reinforcement. Using this procedure, in Experiment 1, we found a relatively modest preference for 100% over 50% reinforcement. In Experiment 2, we decreased the reinforcement density for the second initial-link alternative to 75% and found a significant preference for the 50% reinforcement initial-link alternative. It may be that this “maladaptive” behavior results from a positive contrast between the expectation of reinforcement correlated with the 50% reinforcement initial-link alternative and the terminal-link stimulus correlated with 100% reinforcement. But apparently, the complementary negative contrast does not develop between the expectation of reinforcement correlated with the 50% reinforcement initial-link alternative and the terminal-link stimulus correlated with 0% reinforcement that often follows. Such paradoxical choice may account for certain human appetitive risk-taking behavior (e.g., gambling) as well.

17.
Cebus monkeys explored a small T-maze for 5 min, and their preference for the striped or black arm of the maze was assessed. On the next day, the experimental animals were placed into the nonpreferred arm for a 1-min period (exposure to the CS), removed from the T-maze for a 30-min delay interval, and then returned to the startbox of the maze, where they received a food reward (UCS). One control group (CS only) received the placement experience but was not rewarded after the 30-min period. A second control group (noncontingent UCS) received the reward in the startbox but not the placement experience. A second preference test showed that the experimental, but not the control, animals reversed their original preference, now showing a preference for the arm associated with reward. A retention test given 4 months after three such training-test trials revealed considerable retention of the preferences exhibited by the experimental and CS-only control subjects.

18.
Two experiments examined within-session changes in responding during discrimination procedures. In Experiment 1, rate of responding changed significantly within sessions during symbolic delayed matching-to-sample tasks when the delay between the stimulus and the choice period was short (1–5 sec), but not when it was long (8–12 sec). The percentage of responses that were correct did not change within sessions. In Experiment 2, response rates increased and then decreased within sessions during both S1 and S2 when successive discrimination procedures provided high, but not low, rates of reinforcement. Discrimination ratios sometimes increased within sessions. These results question two potential definitions of attention as explanations for within-session changes in response rates. They are more compatible with explanations based on concepts such as arousal, satiation, habituation, and interfering responses.

19.
Three pigeons were exposed to fixed-time (FT) 15 sec, fixed-interval (FI) 15 sec for performing an arbitrary response, a reversal back to FT 15 sec, and then extinction (no reinforcement). During each phase, a computer-controlled tracking system continuously recorded the position of the bird’s head as it moved freely in the experimental chamber. During the first exposure to FT 15 sec, all 3 birds developed a pattern of feeder-wall-directed behavior with occasional circular excursions from the feeder immediately following reinforcement. During FI 15 sec, all birds performed the arbitrary operant, which consisted of contacting a virtual target sphere near the rear of the chamber, and did not engage in feeder-wall-directed behavior. During the reversal back to FT 15 sec, the birds developed a behavior sequence consisting of moving in the direction of the target sphere after reinforcement, followed by feeder-wall-directed behavior prior to the next reinforcement. During extinction, either moves toward the target sphere or wall-directed behavior occurred separately, interspersed with reappearance of the two as a sequence, followed by cessation of both members of the behavior sequence. These findings indicate that prior reinforcement of an arbitrary response can affect the location and form of superstitious behavior that develops near the beginning of the interreinforcement interval, but that other factors (e.g., immediacy of reinforcement) affect the location and form of the behavior near the end of the interval. The findings can be interpreted in the context of superstitious chaining.

20.

Temporal parameters were varied in two different observing response procedures. In Experiment I, concurrent variable-interval chain schedules were employed. Responding on one key led to either a stimulus correlated with reinforcement or a stimulus correlated with time-out. Responding on the other key led to a stimulus which ended either in reinforcement or time-out. The duration of the delay to reinforcement or time-out was varied, the delays for all three stimuli always remaining equal in a given phase. It was found that the longer the delay, the greater the preference for the observing response. In Experiment II, a procedure was employed in which birds pecked during a “trial” to produce stimuli correlated with reinforcement or time-out at the end of the trial. The duration of the trial ending in time-out was varied while the positive trial duration remained constant. It was found that the longer the duration of the negative trial, the greater the strength of observing responses. The results were interpreted as supporting the hypothesis that the value of a positive stimulus is a function of time spent in stimuli correlated with nonreinforcement.



Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号