15 Jan Interview: Renske Bouwer and Sven De Maeyer on 10 years of research
In 2014, the seed was planted from which Comproved grew. That’s when the University of Antwerp, imec and Ghent University started a research project together with the central question: what is the added value of comparative judgement for the evaluation of complex competences? The project was named D-PAC (Development of a Platform for the Assessment of Competences). Prof. Dr. Sven De Maeyer and Dr. Renske Bouwer were at the helm of the research project and are still involved with Comproved as consultants today. In honor of Comproved’s fifth anniversary, we thought it was about time to talk to them about those early years and what followed.
How did the idea for the research project come about?
Renske: I wasn’t there from the beginning.
Sven: That goes back a long way! One day a PhD student, who was working on everything to do with technology skills in education, came up with an obscure experiment by Kimbell and Pollitt. In that experiment, they had students from different schools do a similar task. That was filmed and those clips were then compared by teachers. That experiment was based on the method of adaptive comparative judgement. It intrigued me so much that I wanted to look further into it, but I quickly came to the conclusion that there was little or no literature about it. There were a lot of assumptions, but nothing was really substantiated. We wanted to do something with that, but of course we couldn’t just sell the idea of comparing students. So, the plan was to substantiate the validity and reliability of comparative judgement in order to convince people of the method. Totally naïve, we then wrote the D-PAC project. We never thought we would land it (laughs).
Renske: But you felt right away that this could become big?
Sven: Yes, because it was so recognizable. Also, from my own experience with reviewing papers, I noticed that I was actually constantly comparing. Hence, I did see the potential in it.
So, when did you join the project, Renske?
Renske: They were just two years into the project when I got my PhD in Utrecht and then…
Sven: You were headhunted.
Renske: Exactly! A spot became available for a project coordinator. I already had a link through Marije (Lesterhuis), with whom I was part of a network of writing researchers from the Netherlands and Flanders. There we sometimes talked about D-PAC, and I realized that I was actually doing the same thing, but with benchmarks. The underlying principle in both methods is comparison, so I immediately believed in it. The cool thing was that at D-PAC they had developed a tool that worked much more dynamically. I thought: ‘why are you only doing this in Antwerp? This should be done by everyone!’
Did you ever imagine that the research project would grow into what Comproved is today?
Sven: I must say that Renske’s arrival brought new momentum to the project. She managed to re-ignite everyone’s enthusiasm, and I started to believe in the long-term potential again. I remember that we had started pitching to educational institutions to convince teachers to use the tool so we could collect real-life data. When Renske joined, we actually already had a lot of users. But we didn’t realize that. We thought those were just friendly people who wanted to help us (laughs). Renske made us realize that that was exceptional after barely two years of research.
Renske: When I had just started, I spoke to all the staff separately. Everyone then just listed the things that were not going well. So, I was the right person at the right time to point out to them that many things were already going well. And that indeed many people were already using the tool. I then turned it around: ‘in two years, when the research project ends, you can’t take everything away from everyone’. That was an eye-opener.
In what way are you involved with Comproved today?
Renske: I have more of an advisory role now. Which I love. I get to say things and don’t have to execute anything (laughs). I also have a bit of an ambassador role. If anyone around me has questions about Comproved, I can answer them. I don’t have shares or anything, so that puts me in a neutral position. I also still use the tool myself as a researcher and teacher.
Sven: I’m on the Board of Directors so I’m also an advisor to some extent. In essence, that mainly means keeping people motivated. I also hold up a mirror to them from time to time. I point out the things that are going well, where they can grow, where there is potential. They remain humble people who need that extra push every now and then.
Why do you think it is important to keep doing research on assessment?
Renske: I think that’s the strength of Comproved. With a spin-off, you’re always at a bit of a crossroads. You can either go completely down the commercial road, as in: ‘you ask, we deliver’. And that would have been possible, because it’s also the kind of tool that lends itself to that. Or you can choose to really go for quality. Making sure that it’s really thought through and that it’s right. And that’s that scientific base. I firmly believe that the latter option leads to a much better and more sustainable product in the end. We’re also making a product for people in the field. Things depend on it. Judgements about other people are being formed. It just has to be right, so in that respect that research is crucial. We could have gone the other route, and by now we might have had a team of 50 people and a completely different product, but then we would have denied who we are. After all, we are and remain scientists.
Sven: When we started 10 years ago, we found that in the field there were a lot of ingrained ideas about what exactly assessment is. Often, that still is the case, by the way. The famous rubrics and the like are the norm. And at the same time, everyone is massively assessing without thinking critically about it. Because if you look purely at the research on assessment within educational sciences, there is not that much to be found.
Renske: Good point! And yet people are assessing all day long.
Sven: In that sense, I think there is still a lot of work to be done. For example, I find it interesting to find out what happens in the minds of assessors during assessing. We’ve already done some research on that and we have some evidence here and there, but as far as I’m concerned, we need to find out a lot more about it.
How do you translate research into practice?
Renske: It always starts with someone having a problem. Because you can say you have a solution, but if no one experiences a problem, it stops. But fortunately for us, everyone has a problem with assessment (laughs). That’s always my point of entry. They come to me for a reason. There’s something in that rubric that’s not working right. It’s helpful to determine that first. Then it’s about a cultural shift in thinking about assessment. Comparing is then an easy next step because everyone understands it and can do it.
Sven: In higher education, change is slowly coming in that area. In secondary education, on the other hand, there is still so much to do. There you have people who are stuck with their assignments, with their classes and habits. Getting things moving there is difficult. If I could dream about what Comproved might ever achieve, it would be getting movement in that tanker that is secondary education. Making people there more aware of what assessment actually means.
How has the assessment practice evolved in recent years?
Sven: I think it is already easier to start the conversation within colleges and universities. I think people are much more aware that assessment is not so obvious and that not everything can be solved with a rubric.
Renske: Yes, there is an awareness that we don’t just have to assess, but we also have to be able to argue why we can trust those judgements. That’s why the whole rubric movement came up. And those rubrics get more elaborate with time, but they still don’t solve the problems. That realization is there. We have to somehow be able to show that we are not just handing out grades. Another evolution I see is the growing importance of (peer) feedback, and that not only teachers need to be able to make judgements, but also the students themselves. This is where rubrics fall completely short. They do not give a concrete picture of what quality is. That’s why it has been very easy to use Comproved as a peer feedback method. By comparing several examples, students immediately get a concrete picture of what quality is.
Sven: Right! That was the first application that was always immediately clear to everyone.
How is Comproved keeping up with these evolutions?
Sven: For example, by implementing the ‘feedback action plan’ as a new feature in the tool. The people within Comproved are good at making the link between the needs that exist, the things that are evolving and what we know from the literature. Knowing how to translate that into the product is really impressive.
Renske: Exactly! The fact that we have been doing research for 10 years and that Comproved has been around for 5 years is the best proof that Comproved continues to grow and evolve, right? Early adopters continue to use the tool and more and more users are joining. More and more people are also researching comparative judgement. I am convinced that we cannot take the tool away from people anymore. So, the only way is forward.
Sven: If the tool is already booming in Flanders and the Netherlands, why not abroad as well? Surely that is a dream for the next 5 years!
Here you can read the interview with Comproved’s founders.