08 jan Learning by comparing: a research report
Comproved values scientific research highly. That is why we are happy to support PhD students. One of the students who uses Comproved in her research is Marie Hoffelinck from the University of Liège (Belgium). In the article below, Marie describes how she got into comparative judgement (CJ) and how she operationalized and researched “learning by comparing”. Curious about the results? Enjoy reading!
Reason for the research
In October 2022, I started a PhD on learning-by-comparing in higher education at the University of Liège. Quickly enough, I stumbled across comparative judgement tools and a very new line of inquiry some UK researchers were putting forward at the time, the concept of “learning by evaluating” with comparative judgement (Bartholomew et al., 2022). The idea is that making judgements may foster learning. Hence, comparative judgement becomes not only an assessment tool but also a learning one.
Indeed, psychologists such as Wendell Garner (Garner, 1974) and James & Eleanor Gibson (Gibson & Gibson, 1955) long ago stressed the importance of comparing in cognition and perceptual development. More recently, a prominent educational psychologist, Ference Marton, has also highlighted the importance of contrast in learning (Marton, 2015). It is therefore not surprising that having students take part in learning activities designed to make them compare elements could be a valuable source of learning.
Context of the research
Starting from these initial reflections, I really wanted to put the idea to the test by implementing a real-world comparative judgement for learning activity. To do so, I partnered with Prof. Florence Pirard and her team of assistants at the University of Liège to design a CJ peer-assessment task for her course “Question and practice of young child education”.
This course aims to develop students’ analytical skills for educational situations involving young children. As a final assignment, students must write an essay in which they analyse such a situation and develop a critical standpoint on criteria and conditions for young children’s wellbeing, substantiated with scientific and professional literature. The final assessment criteria for their examination are provided to students at the beginning of the semester and progressively exemplified during classes through collective video analyses and critical discussions of the literature.
For several years, Prof. Pirard had asked her students to submit a first draft of their final assignment at mid-semester. The teaching team would then provide individual formative feedback to each of them, to further help students concretely understand her expectations for the final essay.
Specific implementation
In 2023-24, we replaced this activity with a CJ peer-assessment task. The idea was to maximise the learning opportunities offered to students by having them engage even more deeply with the final assessment criteria, reviewing their peers’ work through that lens while also receiving feedback themselves.
Concretely, each student was asked to submit a 3-page draft of their final essay on Comproved. They then had two weeks to perform 5 comparative judgements on their peers’ productions, writing feedback comments, using the teaching team’s assessment criteria as guiding principles, for two of the comparisons (i.e. 4 essays). After that, students received comments from their peers on their own productions (between 3 and 5 per student), and a collective debriefing was organised by one of the course assistants, who had read through the comments written by students. This collective debriefing was an occasion to highlight a few elements that students had sometimes overlooked in their feedback.
Research questions
From a research perspective, I wanted to use this opportunity to inquire into the students’ experience of CJ at four key moments of the task: before making comparative judgements, after making them, after receiving their peers’ feedback, and after submitting their final examination task.
Results
Seven students agreed to take part in interviews at these four key moments. The interviews are still under analysis for publication in a scientific paper, but a few significant results can already be shared:
- When reviewing their peers’ productions, students have their own production in mind. Participating in the CJ session allows them to find ways to improve their document before final submission.
- When making judgements, students use both their own internal conception of quality and the teacher’s criteria. There is a progression within the session: their first judgements are guided more by their “intuition” or their “feelings”, while the last ones are more informed by the teacher’s criteria.
- Writing feedback comments requires a different form of cognitive engagement with the task and seems instrumental in helping students appropriate the criteria given by the teacher.
Additional questionnaire
We also gave students a questionnaire after completion of the task, to get a broader view of how it unfolded for them. Despite initial scepticism on the students’ part, palpable in class when the activity was presented, the activity seems to have been well received: in 2023-2024, 33 of the 37 questionnaire respondents thought the activity should be implemented again the next year. The overall feeling from the teaching team was positive as well, as the activity decreased their workload without affecting the quality of the students’ final productions.
Discussion
These positive elements convinced the teaching team to repeat the activity in 2024-25. The students’ perception seems to have been less positive this year (25 out of 33 respondents thought the activity should be implemented again); the reasons still need to be investigated. Possible explanations include:
- the fact that, unlike the previous year, the instructions for the CJ task were communicated through a videoconference call while students were attending their in-class meeting, a setup less conducive to in-depth Q&A;
- a few students appear to have used generative artificial intelligence to produce feedback on their peers’ productions, which might have put off students who received such feedback after having written substantial comments themselves;
- students were invited to perform 2 “trial” comparisons beforehand which, even though not mandatory, increased their workload;
- some students experienced issues when writing comments due to their web browser’s auto-correct function.
Conclusion
Nevertheless, the overall feeling was still positive, and the teaching team has already decided to continue using CJ next year!
References
Bartholomew, S. R., Mentzer, N., Jones, M., Sherman, D., & Baniya, S. (2022). Learning by evaluating (LbE) through adaptive comparative judgment. International Journal of Technology and Design Education, 32(2), 1191–1205. https://doi.org/10.1007/s10798-020-09639-1
Garner, W. R. (2014[1974]). The processing of information and structure. Psychology Press.
Gibson, J. J., & Gibson, E. J. (1955). Perceptual learning: Differentiation or enrichment? Psychological Review, 62(1), 32–41.
Marton, F. (2015). Necessary conditions of learning. Routledge.