Many academic conferences award one or more “best paper”, “best demo” or “best poster” awards. The awardees are either selected by the program committee or by an anonymous audience vote.
However, in my opinion, we should get rid of these awards. I think they are a bad idea for three reasons.
1. The big question regarding such awards is: “In what way are they useful for the community?” As in education, an intervention (the award) should have a lasting positive effect on either the awardee or the community. Sure, the individual author who receives an award might feel happy for a short while, but the same effect might be achieved by simply patting them on the back and saying “Great work!”. There is no evidence that such awards lead to higher achievement. Quite the contrary: a number of publications claim that awards and incentives actually have negative overall effects on individuals and communities, lowering their performance [1].
2. Awards are a bad metric for great research. Bartneck and Hu have pointed out that, on average, papers which received a CHI best paper award did not get more citations than a random sample [2]. It seems that even a committee of experts is unable to predict which papers will have the highest impact. If spotting great work is not even possible for papers, with their fixed structure, why should it work for posters or demos? Demos in particular are so diverse that a one-size-fits-all award is plainly wrong. Is an artful, thought-provoking demo better than a demo of a novel, extremely versatile sensing technology?
3. Undersampling is another problem. How many conference attendees have seen all posters and demos? I would guess that, for any poster session, not a single attendee has read all poster titles. Likewise, it is hard to judge the quality of a demo without understanding it, and understanding a demo means trying it out for some time. With hundreds of other attendees trying out the same demos, there is simply no time for this. Therefore, almost every vote for a best demo or poster can only consider a small subset. Nearly no one is able to make a qualified decision. Variables like poster/demo placement or group dynamics might have a greater effect on the votes than any actual “quality” of a poster or demo.
Overall, best poster/paper/demo awards are neither demonstrably effective, nor valid, nor even fair. Why, then, are we clinging to them?
The TEI conference – which has a very intense and diverse demo and poster session – has opted not to have any awards, for more or less these reasons.
[1] Kohn, A. (1993). Why Incentive Plans Cannot Work. Harvard Business Review, September 1993.
[2] Bartneck, C., & Hu, J. (2009). Scientometric Analysis of the CHI Proceedings. Proceedings of the Conference on Human Factors in Computing Systems (CHI 2009), Boston, pp. 699–708.