Ted M. Clark, Ellie Anderson, Nicole M. Dickson-Karn, Cornelia Soltanirad, Nicolas Tafini
Student performance on open-response calculations involving acid and base solutions before and after instruction in general chemistry and analytical chemistry courses was compared with the output of the artificial intelligence chatbot ChatGPT. Applying a theoretical model of expertise for problem solving that includes problem conceptualization, problem strategy, and solution, it was found that students' errors following instruction primarily involved problem conceptualization and the misapplication of heuristics such as the Henderson–Hasselbalch equation. When the same problems were used as input to ChatGPT, the responses were comparable in length and detail to worked examples found in general chemistry textbooks and usually displayed strong problem conceptualization. The accuracy of the chatbot's responses varied greatly across topics: it was highest for calculations of the pH of a strong acid or strong base and much lower for more complex problems involving titrations or aqueous salts. Chatbot and student errors differed in that the chatbot did not misapply heuristics but did make mathematical errors that are uncommon among students. The variability in the correctness of ChatGPT's responses and the nature of its errors vis-à-vis students will influence its potential use as an instructional resource for calculations involving acids and bases.
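For context, a brief sketch of the two calculation types contrasted above; the numerical values are illustrative only and are not drawn from the study. The Henderson–Hasselbalch equation, the heuristic students reportedly misapplied, relates buffer pH to the weak-acid/conjugate-base ratio:

\[ \mathrm{pH} = \mathrm{p}K_a + \log\frac{[\mathrm{A^-}]}{[\mathrm{HA}]} \]

By contrast, the strong-acid case on which ChatGPT performed best requires no such heuristic: assuming, for example, 0.010 M HCl dissociating completely,

\[ \mathrm{pH} = -\log[\mathrm{H_3O^+}] = -\log(0.010) = 2.00 \]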