
Examining Data to Improve Student Learning

“How do you know students are learning?” That question is probably as old as the teaching profession itself.

Each semester can produce a wealth of valuable information. Over time, that information can be used to improve all aspects of a course. In that context, the notion that a course is complete once final grades have been submitted to the Registrar’s Office has little basis in fact.

Final exam results and final grades lend some insight, but much more data is needed. Such scores and grades fluctuate from one semester to another, even when the measuring tools are held largely constant. The reason for that variability is that each class arrives with its own strengths and weaknesses; the skill sets and knowledge base of incoming students are never identical.

To address this issue, I administer a diagnostic exam covering the major course concepts to my BBA 407 Strategic Management students at the beginning of each semester. Learning Potential is then defined as the material the incoming class does not yet know. Students do not receive the solutions to the diagnostic exam. At the end of the semester, the same concepts are evaluated again with questions embedded in the final exam, and the two sets of results are compared. Realized Learning Potential is the percentage of that previously unfamiliar material the students have learned during the semester. The benefit of this approach is that it standardizes outcomes across classes with different starting points.
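As a rough illustration, here is a minimal sketch of how that calculation might be carried out. The concept labels, score fractions, and data layout are hypothetical assumptions for illustration, not actual course data.

```python
# Minimal sketch of the Learning Potential calculation described above.
# Concept labels and score fractions are hypothetical, not actual course data.

# Fraction of students answering each concept correctly on the diagnostic
# exam and on the matching questions embedded in the final exam.
diagnostic = {"five_forces": 0.30, "swot": 0.55, "value_chain": 0.20}
final_exam = {"five_forces": 0.75, "swot": 0.80, "value_chain": 0.65}

def realized_learning_potential(diag, final):
    """Percentage of initially unknown material learned by semester's end."""
    potential = sum(1.0 - p for p in diag.values())             # what the class did not know
    realized = sum(max(final[c] - diag[c], 0.0) for c in diag)  # gains on those concepts
    return 100.0 * realized / potential

print(f"Realized Learning Potential: {realized_learning_potential(diagnostic, final_exam):.1f}%")
```

Per-concept gains are clipped at zero so that a concept on which a class happened to regress does not offset genuine learning elsewhere.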

Since I began using this approach, Realized Learning Potential has ranged from 55.9% to 61.5%, with an average of 58.3%. The just-concluded semester produced the highest figure, even though the mean and median final exam scores were somewhat lower than those of some earlier classes. Those differences in final exam scores are very likely the result of the most recent class having entered the course with the least prior knowledge, as measured by the diagnostic exam. Part of the larger gain in student learning may be attributable to increased exercises in certain content areas, as those areas registered the largest gains from the start of the semester. Further evaluation will be needed.

Another issue I examined was whether there was any analytical information that could provide insight into how students performed on the final exam relative to the mid-term exam. Considering that the final exam is cumulative and its questions are somewhat more rigorous than those on the mid-term, a decline in scores is not uncommon.

Attendance, a minimal measure of student engagement, had essentially no predictive value. However, there was a notable difference between students who completed their final two assignments and those who completed neither. Those who completed both assignments had a mean decline of 2.2 points and a median decline of 1.5 points from their mid-term exam; those who completed neither had a mean decline of 8.9 points and a median decline of 4.8 points. This evaluation will be repeated next semester. If the outcomes are similar, later assignments will carry greater weight than earlier ones.
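A minimal sketch of that group comparison, assuming per-student records of mid-term score, final exam score, and the number of the final two assignments completed. The records below are hypothetical, not actual class data.

```python
# Sketch of the mid-term-to-final comparison by assignment completion.
# Per-student records below are hypothetical, not actual class data.
from statistics import mean, median

students = [
    {"midterm": 84, "final": 82, "final_two_done": 2},
    {"midterm": 78, "final": 77, "final_two_done": 2},
    {"midterm": 90, "final": 79, "final_two_done": 0},
    {"midterm": 72, "final": 66, "final_two_done": 0},
]

def decline_stats(group):
    """Mean and median drop from mid-term to final exam."""
    declines = [s["midterm"] - s["final"] for s in group]
    return mean(declines), median(declines)

for label, count in (("completed both", 2), ("completed neither", 0)):
    group = [s for s in students if s["final_two_done"] == count]
    m, md = decline_stats(group)
    print(f"{label}: mean decline {m:.1f}, median decline {md:.1f}")
```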

Finally, an issue I have been examining for the past three semesters concerns the development of early indicators that might identify students at elevated risk of doing poorly. The data revealed that students who incorrectly answered two specific questions on the mid-term exam fared markedly worse on that exam and, typically, in the course overall. On the mid-term, those students received scores that averaged 9.5 to 18.9 points below the overall mid-term average. In response, I have created a new quiz that will allow me to identify such students earlier than has been possible with the mid-term exam. At-risk students will be targeted for more intensive intervention, and the outcome of that targeted intervention will be assessed at the end of the semester.
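A sketch of how such an early-warning flag might work, assuming each student's missed quiz questions are recorded by question ID. The question IDs and responses here are hypothetical.

```python
# Sketch of an early-warning flag keyed to the two indicator questions.
# Question IDs and student responses are hypothetical.
INDICATOR_QUESTIONS = {"q7", "q12"}  # the two questions found to predict poor outcomes

def at_risk(missed_questions):
    """Flag a student who answered both indicator questions incorrectly."""
    return INDICATOR_QUESTIONS.issubset(missed_questions)

quiz_misses = {
    "student_a": {"q3", "q7", "q12"},  # missed both indicators -> flagged
    "student_b": {"q7"},               # missed only one -> not flagged
}

flagged = [name for name, missed in quiz_misses.items() if at_risk(missed)]
print("Targeted for more intensive intervention:", flagged)
```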

In conclusion, the interval between the end of one semester and the start of another offers a rich opportunity to rigorously examine what took place during the preceding semester, reflect on recurrent themes and persistent trends, and then make evidence-based adjustments for the upcoming semester. Those adjustments should, in turn, be evaluated. In the longer term, such an evolutionary, evidence-driven process could lead to improvements in objective measures of student learning.