Research progress today:
I started the day with the Bio (40-question) tests read in, formatted, and processed for all 40 questions. Today I was working on getting results for the taught vs. not-taught questions. When we started this project, we took the standards-based MOSART tests and used all of their questions. Realistically, though, we cannot cover 9 years' worth of science curriculum in 4 semesters, nor should we. Many questions in those assessments are not good measures of our students' progress: any question covering grade 6-8 material that is not related to a deeper understanding of the K-5 material is irrelevant to us. In addition, some questions are, we think, simply bad questions (our students understand the issue more deeply than the question intends, so they choose an answer that is scored wrong but is scientifically right). We decided that keeping these questions in our overall scoring is not reasonable. It muddies the water and makes it hard to measure what we actually want to know: for the content we consider vital, do our students learn it better in HoS than they would have in the alternative lecture classes they would otherwise have taken?
So today:
-I created processtaught.pro, which runs the same overall learning-gains analysis (pre average, post average, histograms), but only on the questions that instructors agree are "taught" in HoS. "Taught" can mean either explicitly covered, or relevant enough that we expect students to understand it based on having taken our classes. (A sketch of this calculation follows the list below.)
Relevant output: *qN.meas, *qN.hist, *qNhist.pdf
-Started really hammering out itemanal.pro, but this is not complete.
--It now reads in whole FITS files and sifts the questions into taught vs. not-taught (generalized for any class); see the file-reading sketch after this list.
--Plot all questions from the assessment. These are arrow plots in which the classwide change for each question is shown, with average arrows at the end for the taught and not-taught content (see the arrow-plot sketch after this list). The purpose of this plot is to gauge the relevance of the not-taught questions: presumably, if we are not teaching them, or anything related to them, students should show little change on these questions (grayed out in the plot), and their average should be a short arrow.
Relevant output: *itemQall.pdf
--Plot only the taught questions. These are also per-question arrow plots, but in this version any question where the fraction of students answering correctly decreases (i.e., shows a negative learning gain) is highlighted in red. We should be worried if a high fraction of questions do this, or if any show large downward changes.
Relevant output: *qN.pdf
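
For concreteness, here is a minimal Python sketch of the kind of summary processtaught.pro computes (the real program is IDL; the function name, array layout, and output filename below are all hypothetical). It assumes pre- and post-test scores are stored as student-by-question boolean matrices, along with a boolean mask marking the taught questions.

import numpy as np
import matplotlib.pyplot as plt

def taught_gains_summary(pre, post, taught, pdf_path="qNhist.pdf"):
    """Pre/post averages and a gains histogram over taught questions only.

    pre, post : (n_students, n_questions) boolean arrays, True where a
                student answered that question correctly.
    taught    : (n_questions,) boolean mask of taught questions.
    """
    pre_t = pre[:, taught]
    post_t = post[:, taught]

    pre_avg = pre_t.mean()    # class-wide pre-test fraction correct
    post_avg = post_t.mean()  # class-wide post-test fraction correct

    # Per-student gain: change in fraction correct on taught questions.
    gains = post_t.mean(axis=1) - pre_t.mean(axis=1)

    fig, ax = plt.subplots()
    ax.hist(gains, bins=20, range=(-1, 1), edgecolor="black")
    ax.axvline(0.0, color="gray", linestyle="--")
    ax.set_xlabel("post - pre fraction correct (taught questions)")
    ax.set_ylabel("number of students")
    fig.savefig(pdf_path)
    plt.close(fig)

    return pre_avg, post_avg, gains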
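
itemanal.pro itself is IDL; the Python sketch below shows the equivalent read-and-sift step under assumed conventions: HDU 1 of the FITS file holds the pre-test and HDU 2 the post-test, each as a students-by-questions array of 0/1 correctness flags, with the taught question numbers in a plain text list. Those conventions, and all the names, are assumptions for illustration.

import numpy as np
from astropy.io import fits

def load_assessment(fits_path, taught_list_path, n_questions):
    """Read pre/post response arrays and build the taught mask."""
    with fits.open(fits_path) as hdul:
        pre = hdul[1].data.astype(bool)   # assumed: HDU 1 = pre-test 0/1 flags
        post = hdul[2].data.astype(bool)  # assumed: HDU 2 = post-test 0/1 flags

    # Taught questions listed by 1-indexed question number, one per line.
    taught_qs = np.loadtxt(taught_list_path, dtype=int, ndmin=1)
    taught = np.zeros(n_questions, dtype=bool)
    taught[taught_qs - 1] = True

    return pre, post, taught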
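
Finally, a hedged sketch of the two arrow-plot views, again in Python with matplotlib rather than the actual IDL: with taught_only=False it draws every question, grays out the not-taught ones, and appends a mean arrow per group (the *itemQall.pdf view); with taught_only=True it draws only the taught questions and flags negative gains in red (the *qN.pdf view).

import numpy as np
import matplotlib.pyplot as plt

def arrow_plot(pre, post, taught, taught_only=False, pdf_path="itemQall.pdf"):
    """One arrow per question, running from pre to post fraction correct."""
    pre_frac = pre.mean(axis=0)    # per-question fraction correct, pre
    post_frac = post.mean(axis=0)  # per-question fraction correct, post

    fig, ax = plt.subplots(figsize=(10, 4))
    x = 0
    for q in range(len(pre_frac)):
        if taught_only and not taught[q]:
            continue
        if not taught[q]:
            color = "gray"                 # not-taught: expect little change
        elif post_frac[q] < pre_frac[q]:
            color = "red"                  # taught question moving down
        else:
            color = "black"
        ax.annotate("", xy=(x, post_frac[q]), xytext=(x, pre_frac[q]),
                    arrowprops=dict(arrowstyle="->", color=color))
        x += 1

    # Mean arrows appended at the end, one per group.
    groups = [("taught", taught, "black")]
    if not taught_only:
        groups.append(("not taught", ~taught, "gray"))
    for label, mask, color in groups:
        if not mask.any():
            continue
        ax.annotate("", xy=(x, post_frac[mask].mean()),
                    xytext=(x, pre_frac[mask].mean()),
                    arrowprops=dict(arrowstyle="->", color=color, lw=2))
        ax.text(x, 1.02, label, ha="center", fontsize=8)
        x += 1

    ax.set_xlim(-1, x)
    ax.set_ylim(0, 1.08)
    ax.set_xlabel("question")
    ax.set_ylabel("fraction correct")
    fig.savefig(pdf_path)
    plt.close(fig)

Chaining the three sketches together mirrors the day's pipeline: load the pre/post arrays and the taught mask, summarize gains on the taught questions, then generate both plot views.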