Similar Articles
 20 similar articles found (search time: 31 ms)
1.
This article initially outlines a procedure used to develop a written diagnostic instrument to identify grade-11 and -12 students' misconceptions and misunderstandings of the chemistry topic covalent bonding and structure. The content to be taught was carefully defined through a concept map and propositional statements. Following instruction, student understanding of the topic was identified from interviews, student-drawn concept maps, and free-response questions. These data were used to produce 15 two-tier multiple-choice items where the first tier examined content knowledge and the second examined understanding of that knowledge in six conceptual areas, namely, bond polarity, molecular shape, polarity of molecules, lattices, intermolecular forces, and the octet rule. The diagnostic instrument was administered to a total of 243 grade-11 and -12 chemistry students and has a Cronbach alpha reliability of 0.73. Item difficulties ranged from 0.13 to 0.60; discrimination values ranged from 0.32 to 0.65. Each item was analyzed to ascertain student understanding of and identify misconceptions related to the concepts and propositional statements underlying covalent bonding and structure.
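As a concrete illustration of the classical item statistics reported above (Cronbach's alpha, item difficulty, and an upper-lower discrimination index), the sketch below computes them from a binary score matrix. The responses are simulated for illustration only; the 243 × 15 shape merely mirrors the instrument described, and nothing here comes from the study itself.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_students, n_items) matrix of 0/1 item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_statistics(scores, tail=0.27):
    """Item difficulty (proportion correct) and upper-lower discrimination index."""
    scores = np.asarray(scores, dtype=float)
    difficulty = scores.mean(axis=0)
    order = np.argsort(scores.sum(axis=1))           # rank students by total score
    cut = max(1, int(round(tail * scores.shape[0])))
    lower, upper = scores[order[:cut]], scores[order[-cut:]]
    return difficulty, upper.mean(axis=0) - lower.mean(axis=0)

# Simulated 243 students x 15 items, mirroring the instrument's shape only.
rng = np.random.default_rng(0)
ability = rng.normal(size=(243, 1))
item_hardness = rng.normal(1.0, 0.5, size=15)
scores = (rng.random((243, 15)) < 1 / (1 + np.exp(-(ability - item_hardness)))).astype(int)

print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
difficulty, discrimination = item_statistics(scores)
print(difficulty.round(2), discrimination.round(2))
```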

2.
This study involved the development and application of a two-tier diagnostic test measuring college biology students' understanding of diffusion and osmosis after a course of instruction. The development procedure had three general steps: defining the content boundaries of the test, collecting information on students' misconceptions, and instrument development. Misconception data were collected from interviews and multiple-choice questions with free response answers. The data were used to develop 12 two-tier multiple-choice items in which the first tier examined content knowledge and the second examined understanding of that knowledge. The conceptual knowledge examined was the particulate and random nature of matter, concentration and tonicity, the influence of life forces on diffusion and osmosis, membranes, kinetic energy of matter, the process of diffusion, and the process of osmosis. The diagnostic instrument was administered to 240 students (123 non-biology majors and 117 biology majors) enrolled in a college freshman biology laboratory course. The students had completed a unit on diffusion and osmosis. The content taught was carefully defined by propositional knowledge statements, and was the same content that defined the content boundaries of the test. The split-half reliability was 0.74. Difficulty indices ranged from 0.23 to 0.95, and discrimination indices ranged from 0.21 to 0.65. Each item was analyzed to determine student understanding of, and identify misconceptions about, diffusion and osmosis.
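The split-half reliability mentioned here can be estimated by correlating scores on two halves of the test and applying the Spearman-Brown correction. The sketch below uses an odd-even split on simulated 0/1 responses; the study's actual split and data are not available here.

```python
import numpy as np

def split_half_reliability(scores):
    """Odd-even split-half reliability with the Spearman-Brown step-up correction."""
    scores = np.asarray(scores, dtype=float)
    half_a = scores[:, 0::2].sum(axis=1)    # items 1, 3, 5, ...
    half_b = scores[:, 1::2].sum(axis=1)    # items 2, 4, 6, ...
    r = np.corrcoef(half_a, half_b)[0, 1]   # correlation between the two half-test scores
    return 2 * r / (1 + r)                  # estimate for the full-length test

# Illustration on simulated 0/1 responses (240 students x 12 items).
rng = np.random.default_rng(1)
ability = rng.normal(size=(240, 1))
p_correct = 1 / (1 + np.exp(-(ability - rng.normal(0.0, 0.7, size=12))))
responses = (rng.random((240, 12)) < p_correct).astype(int)
print(f"split-half reliability = {split_half_reliability(responses):.2f}")
```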

3.
Polymerase chain reaction (PCR) and gel electrophoresis have become common techniques used in undergraduate molecular and cell biology labs. Although students enjoy learning these techniques, they often cannot fully comprehend and analyze the outcomes of their experiments because of a disconnect between concepts taught in lecture and experiments done in lab. Here we report the development and implementation of novel exercises that integrate the biological concepts of DNA structure and replication with the techniques of PCR and gel electrophoresis. Learning goals were defined based on concepts taught throughout the cell biology lab course and learning objectives specific to the PCR and gel electrophoresis lab. Exercises developed to promote critical thinking and target the underlying concepts of PCR, primer design, gel analysis, and troubleshooting were incorporated into an existing lab unit based on the detection of genetically modified organisms. Evaluative assessments for each exercise were aligned with the learning goals and used to measure student learning achievements. Our analysis found that the exercises were effective in enhancing student understanding of these concepts as shown by student performance across all learning goals. The new materials were particularly helpful in acquiring relevant knowledge, fostering critical-thinking skills, and uncovering prevalent misconceptions.
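For readers unfamiliar with the primer-design concepts such exercises target, the sketch below shows two standard back-of-the-envelope checks: GC content and the Wallace-rule melting-temperature estimate (Tm = 2(A+T) + 4(G+C) °C, a rough rule for short oligos). The primer sequence is hypothetical and the sketch is not taken from the course materials described above.

```python
def gc_content(primer):
    """Fraction of G and C bases in a primer sequence."""
    primer = primer.upper()
    return (primer.count("G") + primer.count("C")) / len(primer)

def wallace_tm(primer):
    """Wallace-rule melting temperature estimate: 2(A+T) + 4(G+C) degrees C."""
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

primer = "ATGCGTACCTGAGGATCC"   # hypothetical 18-mer
print(f"GC = {gc_content(primer):.0%}, Tm = {wallace_tm(primer)} degrees C (approx.)")
```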

4.
Concept inventories (CIs) are assessment instruments designed to measure students’ conceptual understanding of fundamental concepts in particular fields. CIs utilise multiple-choice questions (MCQs), and specifically designed response selections, to help identify misconceptions. One shortcoming of this assessment instrument is that it fails to provide evidence of the causes of the misconceptions, or of the nature of students’ conceptual understanding. In this article, we present the results of conducting textual analysis on students’ written explanations in order to gain better insight into their conceptual understanding. We compared students’ MCQ scores on Signals and Systems Concept Inventory questions with textual analysis of their written explanations utilising vector-based approaches. Our analysis of the textual data made it possible to detect answers that students themselves identified as a ‘guessed’ response. However, the analysis was unable to detect whether conceptually correct ideas existed within the ‘guessed’ responses. The presented approach can be used as a framework to analyse assessment instruments that utilise textual, short-answer responses. This analysis framework is best suited for the restricted conditions imposed by the short-answer structure.
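The article summarised above does not spell out its vector pipeline, so the sketch below illustrates one common vector-space approach: TF-IDF vectors and cosine similarity between student explanations and a reference explanation. The example texts are invented and scikit-learn is assumed; this is a sketch of the general technique, not the authors' implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical short explanations; the Signals and Systems Concept Inventory
# responses themselves are not reproduced here.
explanations = [
    "convolution in time corresponds to multiplication in frequency",
    "I guessed this one",
    "the output is the input convolved with the impulse response",
]
reference = ["convolving with the impulse response multiplies the two spectra"]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(explanations + reference)
similarities = cosine_similarity(X[:-1], X[-1])   # each answer vs. the reference
for text, s in zip(explanations, similarities.ravel()):
    print(f"{s:.2f}  {text}")
```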

5.
Students' writing can provide better insight into their thinking than can multiple-choice questions. However, resource constraints often prevent faculty from using writing assessments in large undergraduate science courses. We investigated the use of computer software to analyze student writing and to uncover student ideas about chemistry in an introductory biology course. Students were asked to predict acid-base behavior of biological functional groups and to explain their answers. Student explanations were rated by two independent raters. Responses were also analyzed using SPSS Text Analysis for Surveys and a custom library of science-related terms and lexical categories relevant to the assessment item. These analyses revealed conceptual connections made by students, student difficulties explaining these topics, and the heterogeneity of student ideas. We validated the lexical analysis by correlating student interviews with the lexical analysis. We used discriminant analysis to create classification functions that identified seven key lexical categories that predict expert scoring (interrater reliability with experts = 0.899). This study suggests that computerized lexical analysis may be useful for automatically categorizing large numbers of student open-ended responses. Lexical analysis provides instructors with unique insights into student thinking and a whole-class perspective that are difficult to obtain from multiple-choice questions or reading individual responses.
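As a rough analogue of the discriminant-analysis step described above, the sketch below fits a linear discriminant classifier to simulated counts of seven lexical categories and cross-validates its agreement with simulated expert ratings. It assumes scikit-learn and invented data; it is not the SPSS Text Analysis workflow used in the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Invented feature matrix: counts of seven lexical categories per student response,
# plus simulated expert ratings (0 = inadequate, 1 = adequate explanation).
rng = np.random.default_rng(1)
category_counts = rng.poisson(lam=2.0, size=(200, 7))
expert_rating = (category_counts[:, 0] + category_counts[:, 3]
                 + rng.normal(0, 1, 200) > 4).astype(int)

lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, category_counts, expert_rating, cv=5).mean()
print(f"cross-validated agreement with (simulated) expert scores: {accuracy:.2f}")
```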

6.
Theory on student learning holds that students are able to direct their learning when they have metacognitive knowledge about their own learning processes. In this article, a preliminary attempt to assess untrained high-school students’ metacognitive knowledge of learning processes as an ability through multiple-choice questions is reported. In three studies, item selection was established for ninth graders at the end of their school year. In the final study, the results also showed that the ninth-grade students’ self-reported use of learning and studying strategies, study techniques, school learning and tiresome academic subjects related significantly to their metacognitive knowledge regarding learning processes. In the discussion, the practical consequences for school assessment are explicated and future research questions are raised.

7.
We present a multiple-choice test, the Montana State University Formal Reasoning Test (FORT), to assess college students' scientific reasoning ability. The test defines scientific reasoning to be equivalent to formal operational reasoning. It contains 20 questions divided evenly among five types of problems: control of variables, hypothesis testing, correlational reasoning, proportional reasoning, and probability. The test development process included the drafting and psychometric analysis of 23 instruments related to formal operational reasoning. These instruments were administered to almost 10,000 students enrolled in introductory science courses at American universities. Questions with high discrimination were identified and assembled into an instrument that was intended to measure the reasoning ability of students across the entire spectrum of abilities in college science courses. We present four types of validity evidence for the FORT. (a) The test has a one-dimensional psychometric structure consistent with its design. (b) Test scores in an introductory biology course had an empirical reliability of 0.82. (c) Student interviews confirmed responses to the FORT were accurate indications of student thinking. (d) A regression analysis of student learning in an introductory biology course showed that scores on the FORT predicted how well students learned one of the most challenging concepts in biology, natural selection.
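The final validity argument rests on a regression of learning outcomes on FORT scores. A minimal sketch of that kind of analysis on simulated data is shown below; the slope, noise level, and sample size are invented and imply nothing about the study's actual coefficients.

```python
import numpy as np
from scipy.stats import linregress

# Simulated illustration of regressing a learning outcome on FORT scores.
rng = np.random.default_rng(2)
fort = rng.integers(0, 21, size=300)                   # scores on a 20-item test
learning = 0.4 * fort + rng.normal(0, 2.0, size=300)   # hypothetical outcome measure
result = linregress(fort, learning)
print(f"slope = {result.slope:.2f}, R^2 = {result.rvalue**2:.2f}, p = {result.pvalue:.1e}")
```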

8.
This study involved the development and application of a two-tier diagnostic test measuring students' understanding of flowering plant growth and development. The instrument development procedure had three general steps: defining the content boundaries of the test, collecting information on students' misconceptions, and instrument development. Misconception data were collected from interviews and multiple-choice questions with open response answers. The data were used to develop 13 two-tier multiple-choice items. The conceptual knowledge examined was flowering plant life cycles, reproduction, preconditions for germination, plant nutrition, and mechanisms of growth and development. The diagnostic instrument was administered to 477 high school students. The test-retest correlation coefficient was 0.75. Difficulty indices ranged from 0.24 to 0.82, and discrimination indices ranged from 0.32 to 0.65. Results of the Flowering Plant Growth and Development Diagnostic Test suggested that students did not acquire a satisfactory understanding of plant growth and development concepts. Nineteen misconceptions that could inform biology instruction and resource development were identified through analysis of the items.
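Test-retest reliability, as reported here, is simply the correlation between two administrations of the same instrument. The sketch below simulates a true-score-plus-noise model whose noise level yields a reliability near 0.8; the numbers are illustrative, not the study's data.

```python
import numpy as np

# Test-retest reliability as the correlation between two administrations.
# With true-score variance 100 and error variance 25, the expected value is
# roughly 100 / (100 + 25) = 0.8.
rng = np.random.default_rng(4)
true_score = rng.normal(50, 10, size=477)
first_administration = true_score + rng.normal(0, 5, size=477)
second_administration = true_score + rng.normal(0, 5, size=477)
print(np.corrcoef(first_administration, second_administration)[0, 1])
```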

9.
10.
We present a diagnostic question cluster (DQC) that assesses undergraduates' thinking about photosynthesis. This assessment tool is not designed to identify individual misconceptions. Rather, it is focused on students' abilities to apply basic concepts about photosynthesis by reasoning with a coordinated set of practices based on a few scientific principles: conservation of matter, conservation of energy, and the hierarchical nature of biological systems. Data on students' responses to the cluster items and uses of some of the questions in multiple-choice, multiple-true/false, and essay formats are compared. A cross-over study indicates that the multiple-true/false format shows promise as a machine-gradable format that identifies students who have a mixture of accurate and inaccurate ideas. In addition, interviews with students about their choices on three multiple-choice questions reveal the fragility of students' understanding. Collectively, the data show that many undergraduates lack both a basic understanding of the role of photosynthesis in plant metabolism and the ability to reason with scientific principles when learning new content. Implications for instruction are discussed.
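The appeal of the multiple-true/false format is that it is machine-gradable while still exposing mixed ideas. The sketch below is a minimal, hypothetical grader that scores each true/false statement and flags items on which a student endorses some correct and some incorrect statements; the item key and responses are invented, not taken from the DQC.

```python
# Hypothetical answer key and one student's responses for a 4-statement MTF item.
answer_key = {"q1": [True, False, True, False]}
student = {"q1": [True, True, True, False]}

def grade_mtf(answer_key, responses):
    """Score each MTF item and flag response patterns mixing right and wrong ideas."""
    report = {}
    for item, correct_pattern in answer_key.items():
        marks = [a == b for a, b in zip(correct_pattern, responses[item])]
        report[item] = {
            "score": sum(marks) / len(marks),
            "mixed_ideas": 0 < sum(marks) < len(marks),  # some right, some wrong
        }
    return report

print(grade_mtf(answer_key, student))
```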

11.
We have designed, developed, and validated a 17-question Meiosis Concept Inventory (Meiosis CI) to diagnose student misconceptions on meiosis, which is a fundamental concept in genetics. We targeted large introductory biology and genetics courses and used published methodology for question development, which included the validation of questions by student interviews (n = 28), in-class testing of the questions by students (n = 193), and expert (n = 8) consensus on the correct answers. Our item analysis showed that the questions’ difficulty and discrimination indices were in agreement with published recommended standards and discriminated effectively between high- and low-scoring students. We foresee other institutions using the Meiosis CI as both a diagnostic tool and an instrument to assess teaching effectiveness and student progress, and invite instructors to visit http://q4b.biology.ubc.ca for more information.

12.
We report on the development of an item test bank and associated instruments based on the National Research Council (NRC) K–8 life sciences content standards. Utilizing hundreds of studies in the science education research literature on student misconceptions, we constructed 476 unique multiple-choice items that measure the degree to which test takers hold either a misconception or an accepted scientific view. Tested nationally with 30,594 students following their study of life science, and with their 353 teachers, these items reveal a range of interesting results, particularly student difficulties in mastering the NRC standards. Teachers also answered test items and demonstrated a high level of subject matter knowledge reflecting the standards of the grade level at which they teach, while exhibiting few misconceptions of their own. In addition, teachers predicted the difficulty of each item for their students and which of the wrong answers would be the most popular. Teachers were found to generally overestimate their own students' performance and to have a high level of awareness of the particular misconceptions that their students hold on the K–4 standards, but a low level of awareness of misconceptions related to the 5–8 standards.

13.
Concept inventories, consisting of multiple-choice questions designed around common student misconceptions, are designed to reveal student thinking. However, students often have complex, heterogeneous ideas about scientific concepts. Constructed-response assessments, in which students must create their own answer, may better reveal students' thinking, but are time- and resource-intensive to evaluate. This report describes the initial meeting of a National Science Foundation-funded cross-institutional collaboration of interdisciplinary science, technology, engineering, and mathematics (STEM) education researchers interested in exploring the use of automated text analysis to evaluate constructed-response assessments. Participants at the meeting shared existing work on lexical analysis and concept inventories, participated in technology demonstrations and workshops, and discussed research goals. We are seeking interested collaborators to join our research community.

14.
We describe the development and validation of a three-tiered diagnostic test of the water cycle (DTWC) and use it to evaluate the impact of prior learning experiences on undergraduates’ misconceptions. While most approaches to instrument validation take a positivist perspective using singular criteria such as reliability and fit with a measurement model, we extend this to a multi-tiered approach which supports multiple interpretations. Using a sample of 130 undergraduate students from two colleges, we utilize the Rasch model to place students and items along traditional one-, two-, and three-tiered scales as well as a misconceptions scale. In the three-tiered and misconceptions scales, high confidence was indicative of mastery. In the latter scale, a ‘misconception’ was defined as mastery of an incorrect concept. We found that integrating confidence into mastery did little to change item functioning; however, three-tiered usage resulted in higher reliability and lower student ability estimates than two-tiered usage. The misconceptions scale showed high efficacy in predicting items on which particular students were likely to express misconceptions, and revealed several tenacious misconceptions that all students were likely to express regardless of ability. Previous coursework on the water cycle did little to change the prevalence of undergraduates’ misconceptions.
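The Rasch (1PL) model referenced above places students and items on a common logit scale. The sketch below is a bare-bones joint maximum-likelihood fit on simulated responses (scipy assumed); real analyses use dedicated software and handle scale identification, perfect scores, and fit diagnostics, none of which is attempted here.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated 0/1 responses; the 130 students loosely mirror the study's sample size,
# but nothing here is DTWC data. The scale is identified only up to a common shift.
rng = np.random.default_rng(3)
n_students, n_items = 130, 20
theta_true = rng.normal(0.0, 1.0, n_students)      # student abilities
beta_true = rng.normal(0.0, 1.0, n_items)          # item difficulties
p = 1 / (1 + np.exp(-(theta_true[:, None] - beta_true[None, :])))
X = (rng.random((n_students, n_items)) < p).astype(float)

def neg_log_likelihood(params):
    theta, beta = params[:n_students], params[n_students:]
    logits = theta[:, None] - beta[None, :]
    return -(X * logits - np.log1p(np.exp(logits))).sum()

def gradient(params):
    theta, beta = params[:n_students], params[n_students:]
    resid = X - 1 / (1 + np.exp(-(theta[:, None] - beta[None, :])))
    return np.concatenate([-resid.sum(axis=1), resid.sum(axis=0)])

fit = minimize(neg_log_likelihood, np.zeros(n_students + n_items),
               jac=gradient, method="L-BFGS-B")
theta_hat, beta_hat = fit.x[:n_students], fit.x[n_students:]
print(np.corrcoef(beta_true, beta_hat)[0, 1])      # recovery of item difficulties
```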

15.
College-level biology courses contain many complex processes that are often taught and learned as detailed narratives. These processes can be better understood by perceiving them as dynamic systems that are governed by common fundamental principles. Conservation of matter is such a principle, and thus tracing matter is an essential step in learning to reason about biological processes. We present here multiple-choice questions that measure students' ability and inclination to trace matter through photosynthesis and cellular respiration. Data associated with each question come from students in a large undergraduate biology course that was undergoing a shift in instructional strategy toward making fundamental principles (such as tracing matter) a central theme. We also present findings from interviews with students in the course. Our data indicate that 1) many students are not using tracing matter as a tool to reason about biological processes, 2) students have particular difficulties tracing matter between systems and have a persistent tendency to interconvert matter and energy, and 3) instructional changes seem to be effective in promoting application of the tracing matter principle. Using these items as diagnostic tools allows instructors to be proactive in addressing students' misconceptions and ineffective reasoning.
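The tracing-matter principle can be made concrete with a small atom-accounting check on the overall photosynthesis equation, 6 CO2 + 6 H2O -> C6H12O6 + 6 O2. The sketch below is purely illustrative and is not one of the article's assessment items.

```python
from collections import Counter

def atoms(terms):
    """Total atom counts for a list of (coefficient, {element: count}) terms."""
    total = Counter()
    for coefficient, formula in terms:
        for element, n in formula.items():
            total[element] += coefficient * n
    return total

# 6 CO2 + 6 H2O -> C6H12O6 + 6 O2
reactants = [(6, {"C": 1, "O": 2}), (6, {"H": 2, "O": 1})]
products = [(1, {"C": 6, "H": 12, "O": 6}), (6, {"O": 2})]
print(atoms(reactants))                      # Counter({'O': 18, 'H': 12, 'C': 6})
print(atoms(reactants) == atoms(products))   # True: every atom is accounted for
```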

16.
Cell biology is an academic discipline that organises and coordinates the learning of the structure, function and molecular composition of cells in some undergraduate biomedical programs. Besides course content and teaching methodologies, the laboratory environment is considered a key element in the teaching and learning of cell biology. The aim of this study was to determine students’ opinions about the quality of the teaching and learning environment in cell biology laboratory practice. For this study, we used a short form of the Science Laboratory Environment Inventory (SLEI), which we adapted and translated into Spanish. The questionnaire, administered to students enrolled in four undergraduate programs, consisted of 24 questions divided into four scales: integration of content, clarity of laboratory rules, cohesion between students and teachers, and quality of laboratory infrastructures and materials. The results suggested that (1) students positively assessed the learning environment provided for cell biology practice, (2) the short Spanish form of the SLEI was a valid, reliable instrument for evaluating student satisfaction, laboratory activities, the degree of cooperation between students and teachers, and theoretical and practical organisation of content and (3) the questionnaire detected differential perceptions of the learning environment based on gender and the program studied.

17.
This study offers an innovative and sustainable instructional model for an introductory undergraduate course. The model was gradually implemented during 3 yr in a research university in a large-lecture biology course that enrolled biology majors and nonmajors. It gives priority to sources not used enough to enhance active learning in higher education: technology and the students themselves. Most of the lectures were replaced with continuous individual learning and 1-mo group learning of one topic, both supported by an interactive online tutorial. Assessment included open-ended complex questions requiring higher-order thinking skills that were added to the traditional multiple-choice (MC) exam. Analysis of students’ outcomes indicates no significant difference among the three intervention versions in the MC questions of the exam, while students who took part in active-learning groups at the advanced version of the model had significantly higher scores in the more demanding open-ended questions compared with their counterparts. We believe that social-constructivist learning of one topic during 1 mo has significantly contributed to student deep learning across topics. It developed a biological discourse, which is more typical to advanced stages of learning biology, and changed the image of instructors from “knowledge transmitters” to “role model scientists.”

18.
Learning progressions, or representations of how student ideas develop in a domain, hold promise as tools to support teachers' formative assessment practices. The ideas represented in a learning progression might help teachers to identify and make inferences about evidence collected of student thinking, necessary precursors to modifying instruction to help students advance in their learning. The study reported in this article took the novel approach of using a learning progression for natural selection to support teachers' enactment of formative assessment. Sources of data include interviews and videotapes of six high school biology teachers leading assessment conversations around the same formative assessment questions. Results indicate that while teachers picked out and made inferences about student ideas related to the learning progression during assessment conversations, they did not use all parts of the learning progression in the same way. Furthermore, several of the teachers seemed to use the learning progressions simply as catalogs of misconceptions to be “squashed” rather than drawing upon the developmental affordances offered by a learning progression. Results are framed in terms of the utility of learning progressions as supports for classroom practice. © 2012 Wiley Periodicals, Inc. J Res Sci Teach 49: 1181–1210, 2012

19.
Writing-to-learn activities in science classrooms can have an impact on student learning. This study sought to examine whether the audience for which students write explanations of biology concepts affects their understanding of these concepts. One hundred eighteen Year 9/10 biology students from four classes participated in the study. There were four different audiences: the teacher, younger students, peers, and parents. Students who wrote for peers or younger students performed significantly better on conceptual questions than students who wrote for the teacher or for parents.

20.
Novice programmers face many difficulties while learning to program. Most studies of misconceptions in programming are conducted at the undergraduate level, and there is a lack of studies at the elementary school (K-12) level, presumably because neither computer science nor programming is yet a regular part of elementary school curricula. Are misconceptions about loops at the elementary school level the same as those at the undergraduate level? Can we “prevent” misconceptions by using a different pedagogical approach, a visual programming language, and a shift of the programming context toward game programming? In this paper, we try to answer these questions. We conducted research on student misconceptions about one of the fundamental programming concepts – the loop. The research was conducted in classroom settings among 207 elementary school students, who were learning to program in three programming languages: Scratch, Logo and Python. We present the results of this research.
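For context, the sketch below shows two tiny Python probes of the kind of loop misconceptions such studies target (an off-by-one belief about range, and the belief that a while condition is checked continuously). They are illustrative examples, not items from this study.

```python
# Misconception 1: "range(1, 5) includes 5" – the loop actually prints 1, 2, 3, 4.
for i in range(1, 5):
    print(i)

# Misconception 2: "a while loop stops the instant its condition becomes false."
# In fact the condition is only re-checked at the top of each iteration,
# so the print below still runs after total has already reached 3.
total = 0
while total < 3:
    total += 3      # the condition is now false...
    print(total)    # ...but this line still executes on this pass
```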
