Similar Articles
20 similar articles found (search time: 31 ms)
1.
Each of four pigeons was exposed to a single random-ratio schedule of reinforcement in which the probability of reinforcement for a peck on either of two keys was 1/25. Reinforcer amounts were determined by an iterated prisoner’s dilemma (IPD) matrix in which the “other player” (a computer) played tit-for-tat. One key served as the cooperation (C) key; the other served as the defection (D) key. If a peck was scheduled to be reinforced and the D-key was pecked, the immediate reinforcer of that peck was always higher than it would have been had the C-key been pecked. However, if the C-key was pecked and the following peck was scheduled to be reinforced, reinforcement amounts for pecks on either key were higher than they would have been if the previous peck had been on the D-key. Although immediate reinforcement was always higher for D-pecks, the overall reinforcement rate increased linearly with the proportion of C-pecks. C-pecks thus constituted a form of self-control. All the pigeons initially defected with this procedure. However, when feedback signals were introduced that indicated which key had last been pecked, cooperation (the relative rate of C-pecks), and hence self-control, increased for all the pigeons.
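The contingency above can be sketched as a short simulation. The random-ratio probability (1/25) and the tit-for-tat rule follow the abstract; the specific payoff amounts are hypothetical stand-ins chosen to satisfy the ordering the abstract describes (defection always pays more now, but cooperation raises the next reinforcer for either key).

```python
import random

# Hypothetical IPD payoffs (reinforcer amounts) -- illustrative values only,
# not the amounts used in the study. Key: (subject's choice, computer's play).
# D beats C on the current trial (5 > 3, 1 > 0), but a prior C raises the
# next payoff for either choice (3 > 0, 5 > 1), matching the abstract.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def session(pecks, p_cooperate, p_reinf=1/25, seed=0):
    """Simulate one bird; returns total reinforcement earned."""
    rng = random.Random(seed)
    computer = "C"          # tit-for-tat opens by cooperating
    total = 0
    for _ in range(pecks):
        choice = "C" if rng.random() < p_cooperate else "D"
        if rng.random() < p_reinf:               # random-ratio 25 schedule
            total += PAYOFF[(choice, computer)]  # D pays more right now...
        computer = choice   # ...but tit-for-tat echoes the choice next peck
    return total
```

Running the sketch shows the self-control structure: overall intake rises with the proportion of C-pecks even though defection always wins on the current trial.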

2.
In matching-to-sample, comparison choice should be controlled by the identity of the sample and, when the sample is not available, by the overall probability of reinforcement associated with each of the comparisons. In the present research, pigeons were trained to match a frequent sample (appearing on 80% of the trials) to one comparison (Cfr) and an infrequent sample (appearing on 20% of the trials) to the other (Cinf), with the number of reinforcements associated with each sample equated. In Experiment 1, the task was identity matching; in Experiments 2 and 3, it was symbolic matching. We asked whether, when control of comparison choice by the sample was reduced (by inserting a delay between the sample and the comparisons), pigeons would choose comparisons on the basis of (1) the number of reinforcements per comparison (and thus show no comparison bias), (2) the comparison associated with the more frequent sample during training (and show a preference for Cfr), (3) the probability of reinforcement given a correct response (and show a preference for Cinf), or (4) inhibition produced by nonreinforced choice of the more frequently correct comparison (and show a preference for Cinf). Pigeons showed a significant tendency to choose Cfr. In Experiment 3, we showed that this bias did not result from the effects of intertrial facilitation or interference. Thus, it appears that when control of comparison choice by the sample is reduced, pigeons’ choice is controlled not merely by the probability of reinforcement but also by overall sample frequency.

3.
In a discrete-trials procedure, a frequency-dependent schedule shaped left-right choice proportion toward various equilibrium values between 0 and 1. At issue was (1) whether pigeons match when the overall reinforcement probabilities for two responses depend inversely on their recent frequency, and (2) how pigeons meet the schedule constraint in terms of local responding. That is, do they respond quasi-randomly (Bernoulli mode), or do they learn the stable pattern of the schedule (stable-pattern mode)? Molar choice behavior always tracked the equilibrium solution of the schedule, but the molecular response patterns varied substantially. Markov chains applied to the data revealed that responding was generally intermediate between the memoryless Bernoulli mode and the perfect-memory stable-pattern mode. The polymorphism of molecular patterns, despite molar regularities in behavior, suggests that (1) in order to engender the Bernoulli or stable-pattern modes, the reinforcement rule must strongly discourage competing response patterns (e.g., perseveration), and (2) under frequency-dependent schedules, molar matching is apparently not the outcome of momentary maximizing.
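A frequency-dependent schedule of the kind described can be sketched as follows, with a Bernoulli-mode responder. The linear dependence of reinforcement probability on recent response frequency and the 10-trial window are illustrative assumptions, not the study's actual rule.

```python
import random
from collections import deque

def simulate(n_trials, window=10, p_left=0.5, seed=0):
    """Frequency-dependent schedule sketch: the probability that a response
    is reinforced falls linearly with that response's proportion in the last
    `window` trials, so the rarer response always pays off more.  The
    responder here is a quasi-random (Bernoulli-mode) chooser."""
    rng = random.Random(seed)
    recent = deque(maxlen=window)
    reinforced = left_count = 0
    for _ in range(n_trials):
        resp = "L" if rng.random() < p_left else "R"
        prop_same = (sum(r == resp for r in recent) / len(recent)) if recent else 0.5
        if rng.random() < 1 - prop_same:   # rarer responses are reinforced more
            reinforced += 1
        recent.append(resp)
        left_count += resp == "L"
    return left_count / n_trials, reinforced / n_trials
```

Under this symmetric rule the equilibrium choice proportion is 0.5: any molar deviation toward one key lowers that key's payoff, which is what lets the schedule shape choice toward an arbitrary equilibrium when the rule is made asymmetric.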

4.
Pigeons were studied on multiple variable-ratio yoked-variable-interval schedules in which components had equal rates of food reinforcement and appeared equally often on each of two keys. Interpolated between component changes on the final multiple schedule were 10-sec probes in which both schedule stimuli were present, one on each key. During multiple schedule training, variable-ratio response rates were greater than yoked-variable-interval rates; however, response rate differences in the components were not a function of the mean ratio value for the 40-to-320-ratio range studied. During the choice probes, subjects responded more to the stimulus associated with the interval schedule than to the one associated with the ratio schedule. It was concluded that pigeons prefer interval schedules over ratio schedules with equal reinforcement rates, because the former generate fewer responses per reinforcer.

5.
Pigeons categorized binomial samples. One of two “coins” was tossed on each trial, and birds learned to infer from observing the outcomes which of the two equally likely coins had been tossed. Outcomes (“heads” or “tails”) appeared as successively presented red or green center keys. Coin R was biased in favor of red, and coin G was similarly biased in favor of green. A categorization consisted of a choice of a left or right side key and was reinforced with food if it was to the key (left for coin R and right for coin G) corresponding to the coin that produced that trial’s sample. Coin bias and minimum sample size required for reinforcement were experimentally manipulated. When sample size was greatest (n = 8), categorizing a sample as having been produced by coin R tended to undermatch the probability that the sample was produced by coin R. When sample size was smallest (n = 1), categorizing a sample overmatched, provided that the context did not include other trials with large samples. This context effect reconciles an otherwise inconsistent literature on intuitive statistical inference in pigeons but suggests a new and difficult goal for research: the general clarification of the effects of sampling context on inference.
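The normative answer the birds are being compared against follows from Bayes' rule. With equal priors and mirror-image biases, the posterior that coin R produced a sample depends only on the red-minus-green count; the bias value 0.8 below is an illustrative assumption, since the abstract does not give the biases used.

```python
from math import comb

def posterior_R(n_red, n, bias=0.8):
    """Posterior probability that coin R (P(red) = bias) produced the sample,
    given n_red red outcomes in n tosses.  Coin G is the mirror image with
    P(red) = 1 - bias, and the two coins are equally likely a priori."""
    like_R = comb(n, n_red) * bias**n_red * (1 - bias)**(n - n_red)
    like_G = comb(n, n_red) * (1 - bias)**n_red * bias**(n - n_red)
    return like_R / (like_R + like_G)

# The binomial coefficients cancel, so the posterior odds reduce to
# (bias / (1 - bias)) ** (2*n_red - n): only the red-green difference matters.
```

Undermatching at n = 8 means choice probabilities stayed closer to 0.5 than these posteriors; overmatching at n = 1 means they were more extreme than the 0.8 a single toss warrants.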

6.
Twenty acquisition curves were obtained from each of 8 pigeons in a free-operant choice procedure. Every condition began with a phase in which two response keys had equal probabilities of reinforcement, and, as a result, subjects’ responses were divided fairly evenly between the two keys. This was followed by a phase in which one key had a higher probability of reinforcement than the other, and the development of preference was observed. In all but a few cases, response proportions increased for the key with the higher probability of reinforcement. In most conditions, the two probabilities differed by .06, but the actual probabilities varied (from .16 and .10 in one condition to .07 and .01 in another). Development of preference for the key with the higher probability of reinforcement was slower when the ratio of the two reinforcement probabilities was small (.16/.10) than when it was large (.07/.01). This finding is inconsistent with the predictions of several different quantitative models of acquisition, including the kinetic model (Myerson & Miezin, 1980) and the ratio-invariance model (Horner & Staddon, 1987). However, the finding is consistent with a hypothesis based on Weber’s law, which states that the two alternatives are more discriminable when the ratio of their reinforcement probabilities is larger, and, as a result, the acquisition of preference is faster.
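The Weber's-law point is easy to make concrete: the two probability pairs differ by the same .06, yet their ratios differ more than fourfold. A log-ratio index is one conventional (and here illustrative) way to express that discriminability.

```python
from math import log

def discriminability(p1, p2):
    """Weber-law-style index: discriminability of two reinforcement
    probabilities grows with the log of their ratio (illustrative form;
    the study tested the ordinal prediction, not this exact function)."""
    return log(max(p1, p2) / min(p1, p2))

# .16 vs .10: same absolute difference (.06) as .07 vs .01, but
# ratio 1.6 versus ratio 7.0 -- so the second pair is predicted to
# support faster acquisition of preference, as observed.
```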

7.
Two groups of pigeons were required to generate a fixed sequence of responses on three keys, for example, middle-left-right. One group received a small food reward (SFood) following each correct response except the terminal one, which was followed by a large food reward. The second group received conditioned reinforcement from an overhead light (SLight) for each correct response, with the terminal correct response followed by both SLight and the large food reward. We manipulated length of sequence (3 or 7 responses) and duration of required interresponse interval (IRI; 1 to 9 sec). SLight contingencies generated more accurate performances than did SFood when sequence length was 3 responses but not when it was 7 responses. IRI duration influenced accuracy under the SLight contingencies but not under SFood. These results show that conditioned reinforcers sometimes generate more accurate sequence learning than do primary reinforcers, and that schedule contingencies influence which type of feedback will optimize performance. The results parallel those from the matching-to-sample and conditional discrimination literature.

8.
Pigeons discriminated between two pairs of durations: a short set (2.5 and 5 sec) and a long set (5 and 10 sec). The pairs were intermixed within sessions and identified by the colors on the signal and choice keys. Once the task was learned, the pigeons experienced the following three conditions seriatim: (1) the signal key was made ambiguous about the test range, but the choice keys were informative (retrospective); (2) the signal key identified the test range, but the choice keys did not (prospective); (3) probe trials were introduced in which the color of the center key signaled one test range, but the color of the choice keys signaled the other test range (inconsistent). Accuracy of choice decreased in the retrospective condition and returned to baseline levels in the prospective condition; that is, accuracy was higher under the prospective condition than under the retrospective condition. In a final condition, referred to as conflict trials, the center-key color signified one test range and the choice-key colors the other range. The results from the inconsistent and conflict tests indicate that choice behavior was largely controlled by the signal-key color and not by the choice-key color. We relate these findings to different approaches to timing in animals.

9.
In Experiment 1, three food-deprived pigeons received trials that began with red or green illumination of the center pecking key. Two or four pecks on this sample key turned it off and initiated a 0- to 10-sec delay. Following the delay, the two outer comparison keys were illuminated, one with red and one with green light. In one condition, a single peck on either of these keys turned the other key off and produced either grain reinforcement (if the comparison that was pecked matched the preceding sample) or the intertrial interval (if it did not match). In other conditions, 3 or 15 additional pecks were required to produce reinforcement or the intertrial interval. The frequency of pecking the matching comparison stimulus (matching accuracy) decreased as the delay increased, increased as the sample ratio was increased, and decreased as the comparison ratio was increased. The results of Experiment 2 suggested that higher comparison ratios adversely affect matching accuracy primarily by delaying reinforcement for choosing the correct comparison. The results of Experiment 3, in which delay of reinforcement for choosing the matching comparison was manipulated, confirmed that delayed reinforcement decreases matching accuracy.

10.
Dorrance, Kaiser, and Zentall (1998) trained pigeons on two concurrent simultaneous discriminations (A+B−; C+D−), with interspersed single-stimulus trials in which responding to the two positive discriminanda was differentially reinforced (A+; C±). In each of four separate experiments, the birds showed a preference for stimulus B over stimulus D. Dorrance et al. concluded that the results of these experiments were best described in terms of value transfer theory (VTT; Fersen, Wynne, Delius, & Staddon, 1991). They rejected the possibility that the results of these experiments might have been caused by differential nonreinforced experience with the test stimuli (B and D) on the basis of correlational analyses within each experiment. However, differences between experiments in choice of B over D are well predicted by differences in their history of reinforcement. Previous findings that were thought to favor VTT are also entirely consistent with a simpler associative analysis.

11.
Keyboarding Aids     
Computers in the Schools, 2013, 30(1-2), 49-57
Abstract

Elementary schools across the United States are striving to find the most effective method of teaching keyboarding to their students. A number of schools make use of sensor-key caps on the D and the K keys. These caps have eight elevated dots forming a circle in the center of the cap. Educators would benefit from knowing if, indeed, these key caps make a significant difference in students' abilities to keyboard with increased speed and accuracy when compared to standard keys. This study was conducted on two groups of third-, fourth-, and fifth-grade students (N = 95) who completed a keyboarding unit. One group used the sensor-key caps and the other group did not. Results provided no evidence that the use of sensor-key caps improved keyboarding speed or accuracy over standard keys without the key caps.

12.
When pigeons are required to peck each of two keys in any order for reinforcement, stereotyped response sequences develop that are resistant to disruption by extinction, schedules of reinforcement, or contingencies requiring sequence variability. To test the hypothesis that stereotyped response sequences become integrated behavioral units, two experiments introduced within-sequence temporal delays of varying duration. Experiment 1 found that when a delay followed each peck in a sequence, there was substantial disruption of sequence performance that was independent of delay duration. However, such disruption was only temporary. Experiment 2 found that when the location of a delay within a sequence was varied, sequence disruption was a function of when, in a sequence, the delay occurred. Delays that occurred within sequence subunits had large effects, whereas delays that occurred between such subunits had small effects. The data indicate that pigeons can learn to bridge within-sequence delays, and suggest that response sequences are organized into “phrases.”

13.
In simultaneous matching-to-sample and oddity-from-sample tasks, briefly delaying the offset of trial stimuli following an incorrect choice response was found to facilitate task acquisition (Experiment 1). Because this penalty-time procedure also resulted in longer choice-response latencies, it was hypothesized that any procedure that increased response latency would facilitate task acquisition. However, in Experiment 2, no evidence of facilitation was found when a 2-sec pause was imposed prior to the choice response. The results of Experiment 3 suggest that penalty-time facilitation of acquisition was not due to either the added differential outcome on correct versus incorrect trials (i.e., incorrect choice responses do not darken the keys as do correct choice responses) or the aversive effects associated with trial prolongation (i.e., incorrect responses not only result in the absence of reinforcement but also delay the start of the next trial). Instead, results suggest that birds trained with the penalty-time procedure review the trial stimuli following an incorrect choice.

14.
We explored response rate as a possible mediator of learned stimulus equivalence. Five pigeons were trained to discriminate four clip art pictures presented during a 10-sec discrete-trial fixed interval (FI) schedule: two paired with a one-pellet reinforcer, which supported a low rate of responding, and two paired with a nine-pellet reinforcer, which supported a high rate of responding. After subjects associated one stimulus from each of these pairs with a discriminative choice response, researchers presented two new clip art stimuli during a 10-sec FI: one trained with a differential reinforcement of low rate schedule (DRL) after the FI and the other trained with a differential reinforcement of high rate schedule (DRH) after the FI. Each of the stimuli that were withheld during choice training was later shown to see whether the choice responses would transfer to these stimuli. The results suggest that response rate alone does not mediate learned stimulus equivalence.

15.
During simultaneous discrimination training, there is evidence that some of the value of the S+ transfers to the S−. When the value of the S+ is altered outside the context of the simultaneous discrimination, two very different predictions are made concerning its effect on its S−, depending on whether one views the S+ as an occasion setter or as a stimulus capable of transferring value. In four experiments, pigeons were trained with two similar simultaneous discriminations, A+B− and C+D−, and two single-stimulus trial types, A and C (in which A always had greater nominal value than C). According to value transfer theory, on test trials, B should always be preferred over D, because B and D should be affected by the net values of A and C, respectively. According to an occasion setting account, however, D should be preferred over B because the presence of D signals a higher probability of reinforcement for responding to C than when C is alone, and/or the presence of B signals a lower probability of reinforcement for responding to A than when A is alone. In all four experiments, the pigeons preferred B over D, a result consistent with value transfer theory. Thus, an S− can acquire value from an S+ even when that value is conditioned in a “context” different from that of the simultaneous discrimination.

16.
Pigeons pecked on two response keys that delivered reinforcers on a variable-interval schedule. The proportion of reinforcers delivered by one key was constant for a few sessions and then changed, and subjects’ choice responses were recorded during these periods of transition. In Experiment 1, response proportions approached a new asymptote slightly more slowly when the switch in reinforcement proportions was more extreme. In Experiment 2, slightly faster transitions were found with higher overall rates of reinforcement. The results from the first session, after a switch in the reinforcement proportions, were generally consistent with a mathematical model that assumes that the strength of each response is increased by reinforcement and decreased by nonreinforcement. However, neither this model nor other similar models predicted the “spontaneous recovery” observed in later sessions: At the start of these sessions, response proportions reverted toward their preswitch levels. Computer simulations could mimic the spontaneous recovery by assuming that subjects store separate representations of response strength for each session, which are averaged at the start of each new session.
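The two mechanisms the abstract describes can be sketched in a few lines. The strength-update rule is the classic linear-operator form; the learning rate and the equal-weight session averaging are illustrative assumptions, not the parameters of the simulations reported.

```python
def update(strength, reinforced, rate=0.1):
    """Linear-operator update: strength moves a fixed fraction of the way
    toward 1 after reinforcement and toward 0 after nonreinforcement
    (rate = 0.1 is an assumed, not reported, parameter)."""
    target = 1.0 if reinforced else 0.0
    return strength + rate * (target - strength)

def start_of_session(current, previous_sessions):
    """Spontaneous-recovery sketch: the strength carried into a new session
    is an average over stored per-session strengths, so early in a session
    choice reverts toward preswitch levels (equal weights are an assumption)."""
    stored = previous_sessions + [current]
    return sum(stored) / len(stored)
```

Within a session the update rule tracks the new contingency, but averaging with stored preswitch strengths at session start pulls the response proportion back, reproducing the reversion pattern the simulations mimicked.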

17.
Discrimination reversal learning has been used as a measure of species flexibility in dealing with changes in reinforcement contingency. In the simultaneous-discrimination, midsession-reversal task, one stimulus (S1) is correct for the first half of the session, and the other stimulus (S2) is correct for the second half. After training, pigeons show a curious pattern of choices: They begin to respond to S2 well before the reversal point (i.e., they make anticipatory errors), and they continue to respond to S1 well after the reversal (i.e., they make perseverative errors). That is, pigeons appear to be using the passage of time or the number of trials into the session as a cue to reverse, and are less sensitive to the feedback at the point of reversal. To determine whether the nature of the discrimination or a failure of memory for the stimulus chosen on the preceding trial contributed to the pigeons’ less-than-optimal performance, we manipulated the nature of the discrimination (spatial or visual) and the duration of the intertrial interval (5.0 or 1.5 s), in order to determine the conditions under which pigeons would show efficient reversal learning. The major finding was that only when the discrimination was spatial and the intertrial interval was short did the pigeons perform optimally.
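The optimal strategy against which the pigeons fall short uses only local feedback: win-stay/lose-shift commits exactly one error, on the reversal trial itself. A minimal sketch of the task and that strategy (trial counts are illustrative):

```python
def win_stay_lose_shift(n_trials=80):
    """Midsession-reversal task: S1 is correct for the first half of the
    session, S2 for the second.  A win-stay/lose-shift agent repeats its
    choice after reinforcement and switches after nonreinforcement, so it
    makes exactly one error -- on the reversal trial."""
    choice, errors = "S1", 0
    for t in range(n_trials):
        correct = "S1" if t < n_trials // 2 else "S2"
        if choice != correct:
            errors += 1                                # nonreinforced trial
            choice = "S2" if choice == "S1" else "S1"  # lose-shift
        # win-stay: keep the same choice after a reinforced trial
    return errors

# → returns 1: a single nonreinforced trial is enough to detect the reversal.
```

Anticipatory and perseverative errors indicate the birds are instead timing the reversal, which is exactly what the spatial/short-ITI condition appears to overcome.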

18.
Two experiments employed a delayed conditional discrimination procedure in which half the trials began with the presentation of food and half with no food; following a retention interval, subjects were presented with a choice between red and green keys, a response to one of which was reinforced according to whether the trial had started with food or no food. In Experiment 1, after 38 training sessions during which the retention interval was gradually increased, pigeons performed at a moderate level with intervals of 5 to 7.5 sec. A final test produced a steep forgetting function for food trials, but not for no-food trials; performance was unaffected by the duration of the intertrial interval (10 or 40 sec). Experiment 2 used the delayed conditional discrimination procedure to compare short-term memory in jackdaws (Corvus monedula) with that in pigeons. Although the performance of the jackdaws was below that of the pigeons at the start of training, they showed more rapid learning over long delays, and, in the final test, a shallower forgetting function for food trials than that shown by pigeons. The results suggested superior short-term memory in jackdaws, which may help to explain the better performance of corvids in general when compared with that of pigeons in certain complex learning tasks.

19.
Within-trial contrast has been proposed as a mechanism underlying preferences for stimuli that follow relatively more aversive events over stimuli that follow less aversive events. In this study, we manipulated deprivation level to test within-trial contrast predictions. In Experiment 1, pigeons encountered two discriminative stimuli, one presented when they were deprived and the other when they were prefed. When later given a choice between the two stimuli, pigeons strongly preferred the stimulus encountered when deprived, independently of their deprivation level at test. In Experiment 2, pigeons learned two simultaneous discriminations, one when deprived and the other when prefed. Here, subsequent tests between the two S+ or the two S− stimuli revealed no consistent preferences. These contrasting findings suggest that differential aversiveness is necessary but not sufficient to induce preferences via within-trial contrast.

20.
Clement, Feltus, Kaiser, and Zentall (2000) found that when pigeons have to work to obtain a discriminative stimulus that is followed by reinforcement, they prefer a discriminative stimulus that requires greater effort over one that requires less effort. The authors suggested that such a preference results from the greater change in hedonic value that occurs between the more aversive event and the onset of the stimulus that signals reinforcement, a contrast effect. It was hypothesized that any stimulus that follows a relatively more aversive event would be preferred over a stimulus that follows a relatively less aversive event. In the present experiment, the authors tested the counterintuitive prediction of that theory, that pigeons should prefer a discriminative stimulus that follows the absence of reinforcement over a discriminative stimulus that follows reinforcement. Results supported the theory.
