Numbas Adaptive Marking Bug: Incorrect Grades and Data Discrepancies
Hey everyone, let's dive into a frustrating bug that cropped up with Numbas and its adaptive marking system. We're talking about a situation where a student aced a question, the system recorded the correct mark internally, yet the final grade came out wrong. Sounds like a headache, right?
The Core of the Problem: Adaptive Marking Gone Wrong
The heart of the issue lies within Numbas's adaptive marking feature. This is a pretty cool system that re-marks a part using the student's answers to earlier parts in place of the expected values, so an early mistake doesn't cascade into lost marks later on. However, in this particular instance, things went haywire. The student correctly answered part (e) of a resultant vector calculation question, yet the system marked it as incorrect, dragging down their overall grade. This caused significant confusion and frustration for the student, who knew they had answered correctly. They also ended up with a discrepancy between the score shown on their screen and the score reported to the Learning Tools Interoperability (LTI) server.
Now, here's where it gets even more interesting. When the data was examined, it revealed a hidden truth: the attempt data shows the correct mark was awarded for the question. In other words, Numbas recognised the correct answer internally, but that mark wasn't reflected in the final grade presented to the student, which in turn disagreed with the score sent to the LTI server. The student's screen showed 33.5 out of 72, whereas the LTI server reported the correct score of 34.5 out of 72. That one-mark gap is precisely the mark withheld on screen for a part that was answered correctly.
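To make the mismatch concrete, here is a minimal Python sketch of the kind of cross-check involved: compare the total stored in the attempt data against the total shown on screen and the total pushed to the LTI consumer. The function and its inputs are purely illustrative and don't reflect the actual Numbas or LTI-provider data model.

```python
def check_score_consistency(attempt_total, displayed_total, lti_total, max_marks):
    """Report whether the three recorded scores agree."""
    scores = {
        "attempt data": attempt_total,
        "student display": displayed_total,
        "LTI report": lti_total,
    }
    mismatched = {source: value for source, value in scores.items()
                  if value != attempt_total}
    if mismatched:
        print(f"Score mismatch (out of {max_marks}):")
        for source, value in scores.items():
            print(f"  {source}: {value}")
    else:
        print(f"All sources agree: {attempt_total}/{max_marks}")

# Figures from the report above: the display showed 33.5/72 while the
# attempt data and the LTI server both had 34.5/72.
check_score_consistency(attempt_total=34.5, displayed_total=33.5,
                        lti_total=34.5, max_marks=72)
```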
The adaptive marking settings themselves were relatively forgiving: parts were set to try the student's answer without variable replacements first, and no penalty was applied when adaptive marking was used. This kind of setup generally encourages students to attempt problems, learn from their mistakes, and gradually improve their understanding. The fact that this configuration still produced a marking error is particularly troubling, because it undermines the intended benefits of the pedagogical approach. It's a reminder that even well-intentioned systems can have unexpected glitches that affect the learning experience.
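For the curious, here is roughly what that kind of configuration looks like for a single part, written out as a Python dict. It's a sketch only: the field names are modelled on Numbas's question JSON, and the specific part reference and variable name are assumptions, not taken from the question linked below.

```python
# Rough sketch of the part settings described above.  Field names are
# modelled on the Numbas question JSON (variableReplacementStrategy,
# variableReplacements, adaptiveMarkingPenalty) but should be treated as
# illustrative rather than an exact schema; the part reference and
# variable name are hypothetical.
part_e_adaptive_marking = {
    # "Try without replacements first": mark against the original values,
    # and only substitute the student's earlier answers if that fails.
    "variableReplacementStrategy": "originalfirst",
    # No marks deducted when adaptive marking is used.
    "adaptiveMarkingPenalty": 0,
    # Feed the student's answer to an earlier part into this part's marking.
    "variableReplacements": [
        {"variable": "resultant", "part": "p3", "must_go_first": False},
    ],
}
```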
The specific question can be found here: https://numbas.mathcentre.ac.uk/question/139746/calculate-resultant-vector-for-exam/ (part (e)).
Deep Dive into the Discrepancies
Let's unpack the inconsistencies a bit more. The student's experience was jarring: they completed a question successfully, only to be told they were wrong. That kind of experience damages confidence and creates a sense of unfairness, and it also undermines assessment as a learning tool. If a student can't trust the feedback they receive, it becomes much harder for them to learn from genuine mistakes. The whole point of assessment is to show where a student struggled and to give them confidence when they succeed; this bug does the opposite. Imagine the student's surprise at seeing a lower mark than they earned, with all the knock-on effects on their final grade and their perception of the course.
The attempt data, thankfully, held the key to the truth. Within the stored attempt, the correct marks were awarded, even though they weren't reflected in the final score displayed to the student. This internal acknowledgement suggests the problem lies in how the final grade is calculated and displayed, or in how that information is passed between different parts of the Numbas system. The discrepancy is also a symptom of a larger issue: teachers rely on these systems to grade accurately, and when they fail, the consequences can be serious. In an exam setting, a single mark can change a final grade, so the grading process needs to be as accurate as possible.
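One way to pin down where the pipeline goes wrong is to recompute the total from the per-part scores in the attempt data and compare it with the total that was reported. Here's a hypothetical sketch of that check; the data layout is invented for illustration and isn't the real Numbas attempt-data format.

```python
# Hypothetical diagnostic: recompute the exam total from per-part scores
# and compare it with the total that was actually reported.

def recompute_total(questions):
    """Sum the per-part scores recorded for every question."""
    return sum(score for parts in questions for _part_name, score in parts)

def total_discrepancy(questions, reported_total):
    """Positive result: marks present in the parts but missing from the total."""
    return recompute_total(questions) - reported_total

# Toy data, not the real attempt: three parts worth one mark each,
# but a reported total of only 2.0.
toy_questions = [[("a", 1.0), ("b", 1.0), ("e", 1.0)]]
print(total_discrepancy(toy_questions, reported_total=2.0))  # -> 1.0
```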
It’s especially concerning that this bug occurred with adaptive marking, which, as mentioned earlier, is designed to give students more chances. The goal is a more forgiving, more educational approach to assessment, one where students can learn from their mistakes without being penalised twice for the same error. A bug like this defeats exactly that purpose.
Analyzing the Adaptive Marking Settings and Their Impact
The settings used for adaptive marking also matter for understanding this bug. Here, parts were configured to try the original marking first and only fall back on replacements drawn from the student's earlier answers if that didn't earn credit, so students still get marks for a correct method after an earlier slip. But if the marking doesn't accurately reflect performance, the whole process loses its integrity, and students aren't being assessed fairly under the settings the author chose. This highlights the importance of carefully testing and monitoring the adaptive marking feature so that its intended pedagogical benefits are actually delivered.
The