Jessica J. Reed, Jeffrey R. Raker, Kristen L. Murphy
The ability to assess students’ content knowledge and make meaningful comparisons of student performance is an important component of instruction. ACS exams have long served as standardized assessments of students’ chemistry knowledge. Because these exams are designed by committees of practitioners to cover a breadth of curricular topics, they may contain items that an individual instructor does not cover during classroom instruction and therefore chooses not to assess. For instructors to make meaningful comparisons between their students and the national norm sample, they need norms generated on the basis of the subset of items they used. The goal of this project was to investigate the stability of norms when items were removed from ACS General Chemistry Exams. This was achieved by monitoring the average change in students’ percentiles as items were removed from the exam and noting when the average change crossed a specified threshold. An exploration of subset norm stability for three commonly used ACS General Chemistry Exams is presented along with implications for research and instruction.
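The norm-stability check described in the abstract can be sketched in code. The following is a minimal illustration only, not the authors’ actual procedure: the simulated right/wrong response matrix, the item-removal order, and the 5-percentile-point threshold are all hypothetical stand-ins.

```python
import numpy as np

def percentile_ranks(totals):
    """Percentile rank of each total score within the norm sample
    (percent of the sample scoring at or below that total)."""
    totals = np.asarray(totals)
    return np.array([(totals <= t).mean() * 100.0 for t in totals])

# Hypothetical norm sample: 500 students, 70 dichotomous items.
rng = np.random.default_rng(0)
responses = rng.random((500, 70)) < 0.6  # True = item answered correctly

# Full-exam norms: each student's percentile on the complete item set.
full = percentile_ranks(responses.sum(axis=1))

threshold = 5.0  # hypothetical stability threshold, in percentile points

# Remove items one at a time (here: in column order, one of many possible
# orderings) and recompute subset norms after each removal.
for k in range(1, responses.shape[1]):
    subset = percentile_ranks(responses[:, k:].sum(axis=1))
    avg_change = np.abs(subset - full).mean()
    if avg_change > threshold:
        print(f"Norms destabilize after removing {k} items "
              f"(average change {avg_change:.2f} percentile points)")
        break
```

In practice one would average over many random removal orders rather than a single ordering, since which items are dropped matters as much as how many.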