24 Nov
Comparing tasks is instructive and fun
Janneke van der Loo works at Tilburg University. She teaches didactics and is program director of the Dutch language teacher training program. She is also an associate professor in the Communication and Information Sciences program. In several of her courses she uses the Comproved comparing tool, and a recent evaluation shows that her students find comparing tasks instructive and fun. We were curious how Janneke applies the tool in her classes.
How did you learn about Comproved?
“Through a network of researchers, I met one of the researchers of the research project D-PAC [predecessor of Comproved – Ed.] during a seminar at the University of Antwerp. We started talking about comparative judgement and I immediately saw the possibilities for my course in Academic Dutch.”
When we used peer feedback in the traditional, analytical way, the criteria list became a checklist. Feedback remained superficial. Neither giving nor receiving feedback resulted in more learning.
What was your motivation for getting started with the comparing tool?
“In my Academic Dutch course, students learn how to write a research paper. One of the learning objectives of that course is that they learn to reflect on their own work by giving and processing feedback. In earlier years, I let students give each other peer feedback in the traditional, analytical way, with criteria lists. Every time I used that method I was dissatisfied. The criteria list became a kind of checklist and the feedback remained very superficial. I had the impression that both giving and receiving feedback did not result in more learning.”
“In addition, I had a practical problem when I organized the peer feedback online. I used to pair students with each other for giving and receiving feedback, but this made them very dependent on the one person they were paired with. For example, if you were unlucky enough that your feedback provider did not turn in work or quit the course, you suddenly did not receive any feedback.”
“So first, I did not think the quality of the feedback was that good and the activity was no fun. Second, students were very dependent on the feedback giver they were paired with. I hoped to solve these problems with the comparing tool.”
How did you embed the tool in your classes?
“I proceeded in several steps. First, I introduced the students to the technique of comparative judgement, but without the tool. During class, groups of students were given a number of texts from the previous year and had to rank them from not so good to best. After this we started a conversation about the quality requirements of such a text and formulated criteria. The students got a clear idea of what was expected of them.”
“Then the students worked on their own writing. In the course they wrote an introduction to a research paper in which they had to incorporate various sources. They had to upload that introduction into Comproved and then assess each other’s work. The students had to make seven comparisons each, choosing the best text in each pair. Students are good at comparing. They quickly see the difference between what is good and what is not so good.”
“For the last three comparisons students also had to give feedback. We gave guidelines for that. The students had to name one positive point and two points of improvement. For the areas of improvement, they had to indicate what problem they observed, why they thought that was a problem and what a solution might be.”
“Then, based on that peer feedback, the students rewrote their introduction. When their entire text was finished, they handed it in again. In this round they received teacher feedback. We did that without Comproved because the final texts were long, and we found comparing texts of that length less workable.”
How did students experience this method?
“We evaluated the course extensively with a research assistant. The results showed that the students found Comproved very useful and enjoyable. They also found the activity of comparing easy when the differences between two texts were large. Once the texts were closer in quality, they found it harder to pinpoint the best one. But that hesitation is exactly what makes it useful for learning: thinking about what makes you hesitate and which aspect is ultimately decisive for your choice.”
“Furthermore, students found the ranking very insightful. All students could see on a ranking how well they did compared to the other students. I was curious about how they would experience that, because I can imagine that it does something to your motivation if you are low on the ranking. But overall the response was positive. The students liked that they could look at many different texts. They saw that there is not one right way to write a good text, but that there are different ways.”
“Finally, students found it helpful to get a lot of feedback. It helped them gain new insights and look critically at their own text. They did find it difficult that they received conflicting feedback sometimes. But you can learn from that too. There is a reason why opinions differ and it is a good reason to look at your text again specifically on that aspect. Students learn to deal with uncertainty this way. I think that is one of the biggest benefits of Comproved.”
The holistic approach to comparative judgement allowed for better quality feedback.
Did you notice much improvement in students’ writing for the second version of the introduction?
“Yes, those second versions were a whole lot better. I would like to say that I only got very good introductions the second round, but then again, that was not the case (laughs). After all, students are only just starting their academic careers in the first semester. This was their first time writing an academic paper. They had to come a long way.”
“In general, I did have the impression that they had rewritten their texts on a deeper level. When we gave feedback with a criteria list, students usually focused a lot on spelling errors and grammar. Of course it is good that students notice these errors, but it is not going to make a big difference in text quality. The structure of the text is more relevant. With the holistic approach of comparative judgement, the feedback was given more at that level.”
What do you think could make the tool even better?
“There are some small, practical things that could be improved. For example, the fields in which to give feedback are on the small side, which gave some students the impression that they did not have to give extensive feedback. In terms of didactic design, on the other hand, I have not encountered any drawbacks. For peer feedback it works very well. The students thought comparing was a fun and interesting activity. It was also very nice to review the rank order and feedback together afterwards. During a lecture, the students discussed each other’s feedback and what it meant for rewriting their texts. I found that very valuable.”
“The tool is also very easy to use. You do not have to work your way through big manuals to get started with it. I think it works very intuitively. The students also hardly needed any instruction to work smoothly with the tool.”
Do you plan to continue using the tool?
“I am going to use the same assignment this academic year. I also want to use the tool for the second assignment in the course. That assignment is a group assignment. I am still looking at how I am going to approach that and what exactly I want to achieve with it.”
“I use the tool in other courses as well. In these courses it has a dual purpose. In the Dutch teacher training program I use it to help students improve their texts. At the same time I want to introduce them to the tool so that they might use it in their own classes later on. In the master’s course in linguistics I show how you can use Comproved for research, for example to measure writing quality.”
What might convince teachers to get started with the tool?
“What helps, I think, is realizing that an analytical assessment method such as a criteria list offers a false sense of certainty. A criteria list has concrete assessment points, and that can be useful for students. They can see which items they score well on and which they still need to improve. That gives students the feeling that they have a better grip on their process. Holistic and comparative assessment, on the other hand, is less well known. But through working with Comproved I noticed that assessing through comparison is much easier than looking at student work in isolation.”
“As teachers we all know the sequencing effect. If you have just seen three bad papers and then a mediocre one, you are probably going to give the mediocre one a higher grade than it really deserves. With Comproved, the sequencing effect is avoided because all the works are compared multiple times in different pairs and by different reviewers. Such things may convince teachers to try the tool.”
“Furthermore, I think many teachers already assess comparatively without realizing it. They make different piles of student work, compare them and rank them according to quality. You can talk to colleagues about that and show them that this is actually the principle behind Comproved.”
Want to learn more about comparative judgement? Read all about it in our ebook.