My (Andrej Favia) Ph.D. thesis involves quantifying the "difficulty" of unlearning common astronomy misconceptions. I do this by applying factor analysis and Item Response Theory (IRT) to a retrospective inventory of when, or if, college students dispelled the misconceptions under consideration. Our inventory covers 235 misconceptions identified by NFC over 10 years of teaching the college astronomy lecture course at the University of Maine. The analysis yields logical groupings of topics (e.g., teach one planet at a time rather than use comparative planetology) and the "order of difficulty" of the associated topics. We have results for about one fourth of the inventory, which show that concepts differ in difficulty, suggesting that they should be presented in different orders. We also find that the best order in which to teach concepts sometimes differs between high school and college-level courses.
This is the first in a series of papers that analyze college student beliefs in realms where common astronomy misconceptions are prevalent. Data were collected through an inventory administered at the end of an introductory college astronomy course. In this paper, we present the basic mathematics of item response theory (IRT), and then we use it to explore concepts related to galaxies. We show how IRT determines the difficulty of each galaxy topic under consideration. We find that, of all the galaxy topics, the concept of galaxy spatial distribution presents the greatest challenge to students. We also present the most logical sequence in which to teach galaxy topics as a function of the audience's age.
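To make the IRT machinery concrete, here is a minimal sketch of the two-parameter logistic (2PL) model commonly used in item response theory: the probability of a correct response rises with student ability and falls with item difficulty. The topic names and difficulty values below are invented for illustration and are not the study's estimates.

```python
import math

def p_correct(theta, difficulty, discrimination=1.0):
    """2PL IRT model: probability that a student with ability `theta`
    answers an item of the given `difficulty` correctly. Ability and
    difficulty are on the same logit scale; `discrimination` controls
    how sharply the probability rises around the difficulty level."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# Hypothetical difficulty values for two galaxy topics (illustrative only):
topics = {"galaxy types": -0.5, "galaxy spatial distribution": 1.2}

for theta in (-1.0, 0.0, 1.0, 2.0):
    probs = {t: round(p_correct(theta, b), 2) for t, b in topics.items()}
    print(f"ability {theta:+.1f}: {probs}")
```

A student whose ability equals an item's difficulty answers it correctly with probability 0.5; ranking items by their fitted difficulty parameter is what yields the "order of difficulty" of topics.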
Though probability and simple statistics are common in our society (ranging from baseball batting averages to polling data with its statistical measures), we rarely teach these concepts in our introductory physics courses. How students learn the concept of probability has been studied in great detail in mathematics and statistics education—for example, in Refs. 1 and 2—and there is a thorough (though slightly outdated) bibliography on the topic.
A variety of tools have been created to understand student performance on multiple-choice tests, including analysis of normalized gain, item response curves, and more. These methods typically focus on correct answers. Many incorrect responses contain value and can be used as building blocks for instruction, but existing tools do not account for productive reasoning that leads to an incorrect response. Inspired by item response curves, we introduce Idea Use Curves, which relate the frequency with which an idea is used to student performance. We use this tool to consider ideas that may be present in both correct responses and distractors, letting us attend more closely to students' conceptual understanding. The tool aims to identify ideas that are consistently used by students who perform well or poorly, allowing researchers and instructors to look beyond the "correct/incorrect" paradigm. We explore student reasoning about energy as a proof of concept for this method.
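The core computation behind an idea use curve can be sketched simply: group students by total score and, for each score level, compute the fraction of students who used a given idea. The idea labels and toy data below are hypothetical, not taken from the paper.

```python
from collections import defaultdict

def idea_use_curve(records, idea):
    """Return, for each total score, the fraction of students at that
    score who used `idea`. `records` is a list of
    (total_score, set_of_ideas_used) pairs, one per student."""
    used = defaultdict(int)
    total = defaultdict(int)
    for score, ideas in records:
        total[score] += 1
        if idea in ideas:
            used[score] += 1
    return {s: used[s] / total[s] for s in sorted(total)}

# Toy data (hypothetical): each tuple is (total score, ideas the student used),
# whether those ideas appeared in correct responses or in distractors.
records = [
    (1, {"energy is used up"}),
    (1, {"energy is used up", "energy transfers"}),
    (3, {"energy transfers"}),
    (4, {"energy transfers", "energy is conserved"}),
    (4, {"energy is conserved"}),
]
print(idea_use_curve(records, "energy is used up"))
# An idea whose use rate falls as score rises flags reasoning common
# among low performers; a flat or rising curve suggests productive use.
```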