Two Quotes on Research and Teaching

In a Hacker News discussion, I just stumbled upon two quotes which – in my view – beautifully capture the essence of the symbiosis of academic teaching and research:

“If you pay a man a salary for doing research, he and you will want to have something to point to at the end of the year to show that the money has not been wasted. In promising work of the highest class, however, results do not come in this regular fashion, in fact years may pass without any tangible result being obtained, and the position of the paid worker would be very embarrassing and he would naturally take to work on a lower, or at any rate a different plane where he could be sure of getting year by year tangible results which would justify his salary. The position is this: You want one kind of research, but, if you pay a man to do it, it will drive him to research of a different kind. The only thing to do is to pay him for doing something else and give him enough leisure to do research for the love of it.”

— J. J. Thomson

“I don’t believe I can really do without teaching. The reason is, I have to have something so that when I don’t have any ideas and I’m not getting anywhere I can say to myself, “At least I’m living; at least I’m doing something; I am making some contribution” — it’s just psychological.

When I was at Princeton in the 1940s I could see what happened to those great minds at the Institute for Advanced Study, who had been specially selected for their tremendous brains and were now given this opportunity to sit in this lovely house by the woods there, with no classes to teach, with no obligations whatsoever. These poor bastards could now sit and think clearly all by themselves, OK? So they don’t get any ideas for a while: They have every opportunity to do something, and they are not getting any ideas. I believe that in a situation like this a kind of guilt or depression worms inside of you, and you begin to worry about not getting any ideas. And nothing happens. Still no ideas come.

Nothing happens because there’s not enough real activity and challenge: You’re not in contact with the experimental guys. You don’t have to think how to answer questions from the students. Nothing!

In any thinking process there are moments when everything is going good and you’ve got wonderful ideas. Teaching is an interruption, and so it’s the greatest pain in the neck in the world. And then there are the longer periods of time when not much is coming to you. You’re not getting any ideas, and if you’re doing nothing at all, it drives you nuts! You can’t even say “I’m teaching my class.”

If you’re teaching a class, you can think about the elementary things that you know very well. These things are kind of fun and delightful. It doesn’t do any harm to think them over again. Is there a better way to present them? The elementary things are easy to think about; if you can’t think of a new thought, no harm done; what you thought about it before is good enough for the class. If you do think of something new, you’re rather pleased that you have a new way of looking at it.

The questions of the students are often the source of new research. They often ask profound questions that I’ve thought about at times and then given up on, so to speak, for a while. It wouldn’t do me any harm to think about them again and see if I can go any further now. The students may not be able to see the thing I want to answer, or the subtleties I want to think about, but they remind me of a problem by asking questions in the neighborhood of that problem. It’s not so easy to remind yourself of these things.

So I find that teaching and the students keep life going, and I would never accept any position in which somebody has invented a happy situation for me where I don’t have to teach. Never.”

— R. Feynman

 


Thoughts on Peer Review in the CHI Community

I just filled out the CHI reviewer pre-review questionnaire. The final question asks about general thoughts on the CHI review process. I have documented my answer below. None of the potential improvements I mention are really novel – they have been implemented at other conferences or journals before.

Edit (1 October 2015, 09:15 UTC+2): I would not expect such changes to be made for CHI 2016 or 2017. Instead, some of these changes could be tried out at smaller conferences first in order to work out a usable implementation.

(For the record: this is not about open access. While I am a fan of open source/science/access/…, I think that the ACM’s approach (much freedom for authors, affordable access to the Digital Library and individual papers, offering an OA option) is very reasonable.)

In general, I would like to see four changes to the current review process:

Post-Publication Peer Review

I find the currently practiced pre-publication peer review quite problematic. I have reviewed or otherwise seen plenty of papers that contained interesting findings but were rejected (sometimes rightfully) due to some flaw or another.
A huge share of these rejected papers was never published at another venue, and the insights contained in them (and the errors from which others could learn) were lost to the community. Many other papers were only published one or more years later – thereby delaying all research that could build on them or try to replicate their findings.

I would very much like CHI to adopt post-publication peer review or a similar approach that improves speed and visibility. For example, a process similar (in some ways) to alt.chi would be great, where all submitted papers are made public or semi-public and peer review then selects those that are to be presented at the conference. Ideally, reviews would also be published for all papers. This would also make it harder for authors to re-submit their paper unaltered to another conference without addressing the issues mentioned by the reviewers. I hate it when I give extensive feedback on a paper and the authors do not even bother to fix the spelling errors I pointed out before submitting it to another conference.

Open Peer Review

Quite often I read a paper which incorrectly quotes a paper of mine or omits important related work. With the current review process, the quality of the reviews depends on the AC’s ability and desire to find the right reviewers. Allowing any researcher to provide a review for a submitted paper would ensure that domain experts can chime in and point out flaws or interesting use cases that the other reviewers missed. This has already been tried at alt.chi, too.

Modular Peer Review

The standard PCS review form asks the reviewer how they would rate their expertise on a four-point Likert scale. This is a rather simplistic measure. While I feel very competent to judge the novelty or validity of a sensing technique, I cannot honestly tell whether the correct statistical tests were used in the evaluation of this technique. Similarly, I won’t be able to tell whether all relevant related work has been cited in a paper on design techniques, but I can certainly offer my opinion on further applications or spelling errors.

I would prefer to describe my expertise more accurately and also focus only on certain aspects of a paper as a reviewer. By telling the AC that I do not know enough about statistics, I give them the opportunity to find another reviewer who does. Furthermore, the AC could also assign a subset of reviewers to individual aspects of a paper (e.g., related work, writing style, experimental setup, statistical tests, technical correctness, replicability). This could avoid duplicated work and simultaneously increase the quality of the reviews.
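As a purely hypothetical sketch of what such a modular assignment could look like inside a submission system, the following Python snippet maps self-rated expertise per aspect to reviewer assignments. The aspect names, rating scale, and threshold are my own invention for illustration and are not part of PCS or any actual CHI process.

    # Hypothetical sketch only: aspect names, ratings, and the threshold are invented.
    ASPECTS = ["related work", "writing style", "experimental setup",
               "statistical tests", "technical correctness", "replicability"]

    # Each reviewer self-rates their expertise per aspect (1 = none .. 4 = expert).
    reviewers = {
        "Reviewer 1": {"technical correctness": 4, "statistical tests": 1, "writing style": 3},
        "Reviewer 2": {"statistical tests": 4, "experimental setup": 3},
    }

    def assign_aspects(reviewers, min_expertise=3):
        """Map each aspect to the reviewers who feel competent to judge it."""
        assignment = {aspect: [] for aspect in ASPECTS}
        for name, expertise in reviewers.items():
            for aspect, rating in expertise.items():
                if aspect in assignment and rating >= min_expertise:
                    assignment[aspect].append(name)
        return assignment

    for aspect, names in assign_aspects(reviewers).items():
        # An empty list signals the AC that another reviewer is needed for this aspect.
        print(f"{aspect}: {names or 'no competent reviewer yet'}")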

Author-Reviewer Collaboration

Reviewers point out weaknesses in a paper and suggest improvements. Given the significant amount of time some reviewers invest in reviewing a paper, and given that their suggestions may significantly improve it, it might be a good idea to offer a way of attributing their contributions. For example, reviewers could be mentioned by name in the Acknowledgements section (if they want), or they could become co-authors of the final submission. This would certainly change the character of the review process and introduce some problems. While the aforementioned changes could already be implemented for next year’s CHI conference, this final suggestion certainly needs more discussion and refinement to get it right. One option could be for reviewers to ‘fork’ a submission and submit ‘pull requests’ with proposed changes, similar to software development processes on GitHub and similar platforms. (In general, a version-controlled approach to paper writing, including authorship attribution for each sentence, sounds quite interesting.)

Given that the program chairs of recent CHI conferences have put much effort into evolving the conference series, I am optimistic that some of these proposed changes will be implemented in the near future. And frankly, it would be very fitting for the CHI community to be at the forefront of academic collaboration and publishing.

Protected: Tips for Student Volunteers Chairs at Medium-Sized Conferences


META: imported old my.opera.com blog posts and comments – images broken at the moment

I just found time to import my old blog formerly hosted on my.opera.com/raphman.

Posts and comments have been imported flawlessly. However, all images are currently broken until I fix the links.

Schavan, schludrig [German: “Schavan, sloppy”]

(more…)

How would you like to be remembered by the people who will live in 2200?

At TEI 2008, Hiroshi Ishii gave a very interesting keynote, presenting highlights of the incredible amount of research he had been involved in over the last decades.

However, it is the last slide that has stayed in my mind ever since:

Wiley’s Major Reference Works available for free (sort of)

Update (08. March 2012): Wiley has informed me that they just fixed the issue.

70 GB of digital content from Wiley’s Major Reference Works can be downloaded for free from Wiley’s web server. This is probably not intended, as Wiley still charges hundreds or even thousands of Euros for each one. I notified them of this fact several times during the last two months. They do not seem to mind. In this post I explain how I found out about this, how one can download the content, and why this is probably legal (at least in some jurisdictions).
(more…)

Why you should not trust Sheridan Printing with your conference paper

In 2009 I found a pretty obvious security flaw in Sheridan Printing’s submission management system. It allows anyone to view and modify all papers in the conference proceedings of many major computer science conferences prior to printing and publication.

Over the last two years I have repeatedly tried to get this problem fixed quietly – without success. Therefore, I am publishing the issue now, giving authors the chance to make informed decisions.

In this blog post I describe the problem, explain its possible consequences, and propose ways to fix this issue.
(more…)

Best {Paper, Demo, Poster} Awards Considered Harmful

Many academic conferences hand out one or more “best paper”, “best demo”, or “best poster” awards. The awardees are selected either by the program committee or by an anonymous audience vote.
However, in my opinion, we should get rid of these awards.
I think they are a bad idea for three reasons.

1. The big question regarding such awards is: “In what way are they useful for the community?” As in education, an intervention (the award) should have a lasting positive effect on either the awardee or the community. Sure, the individual author who receives an award might feel happy for a short while. However, this positive effect might also be achieved by just patting them on the back and saying “Great work!”. There is no evidence that such awards lead to higher achievement. On the contrary, a number of publications claim that awards and incentives actually have overall negative effects on individuals and communities, lowering their performance [1].

2. Awards are a bad metric for great research. Bartneck and Hu have pointed out that papers which received a CHI best paper award did not (on average) get more citations than a random sample [2]. It seems that even a committee of experts is unable to predict which papers will have the highest impact. If detecting great work is not even possible for papers, with their fixed structure, why should it work for posters or demos? Demos especially are so diverse that a one-size-fits-all award is plain wrong. Is an artful, thought-provoking demo better than a demo of a novel, extremely versatile sensing technology?

3. Undersampling is another problem. How many of the conference attendees have seen all posters and demos? I would guess that, for any poster session, not a single attendee has read all poster titles. Likewise, it is hard to judge the quality of a demo without understanding it. To understand a demo, you have to try it out for some time. With hundreds of other attendees trying out the same demos, there is just no time for this. Therefore, almost all votes for a best demo or poster consider only a small subset. Nearly no one is able to make a qualified decision. Variables like poster/demo placement or group dynamics might have more of an effect on the votes than any actual “quality” of a poster or demo.

Overall, best poster/paper/demo awards have not been shown to be effective, valid, or even fair. Why, then, are we clinging to them?

The TEI conference – which has a very intense and diverse demo and poster session – has opted not to have any awards, for more or less these reasons.

[1] Kohn, A. (1993). Why Incentive Plans Cannot Work. Harvard Business Review, September 1993.

[2] Bartneck, C., & Hu, J. (2009). Scientometric Analysis of the CHI Proceedings. Proceedings of the Conference on Human Factors in Computing Systems (CHI 2009), Boston, pp. 699-708.

ITS 2010 – Day 2

I’m attending ITS 2010 – the ACM International Conference on Interactive Tabletops and Surfaces 2010 in Saarbrücken, Germany. This is a short collection of interesting stuff I’ve seen and heard on day 2 (Monday, 8 November 2010).

(The demo and poster session is like a huge, dark playground with (literally) tons of amazing touch interfaces.)

Monday was the first day of paper presentations. There was a wealth of papers on several topics. Therefore my account is very selective. You can get all papers at the conference website.

The day started in a very relaxed fashion with “Tafelmusik”, two musicians with a digital audio sequencer and a table full of objects that make sounds. See their website for a photo. By sampling these objects and continuously replaying the sounds, they created a sound landscape – sometimes soothing and sometimes fascinating.

Brad Paley gave a keynote covering a wide range of topics but centering on ways to visualize information. Some of his claims:

  • “CHI” considered harmful: instead “computer mediated human-to-human interaction”
  • Color is bad for encoding data
  • Consistency *impairs* performance
  • 15:1 increases in information density and 20:1 speed-ups can easily be reached
  • “users” considered harmful

While Brad did not explicitly say so, I think that, taken in their entirety, these claims only apply to UIs for expert users.

Afterwards, Malte Weiss (RWTH Aachen) presented “BendDesk: Dragging Across the Curve” [PDF]. He and Simon Voelker built a desk with an interactive surface bent partly upwards. Malte kindly mentioned Curve – our research on this topic. We are currently figuring out how to connect both prototypes for remote interaction.

In the same session, Yvonne Jansen presented MudPad [PDF], a tactile display using ferrofluid and magnets.

Antti Virolainen presented an interactive surface made out of ice (FTIR in ice is probably not possible).

Hrvoje Benko (Microsoft Research) presented another spherical multitouch surface – but this time a large dome that you walk inside [PDF]. Interesting link from his talk: worldwidetelescope.org

In the afternoon, Dietrich Kammer (TU Dresden) presented an interesting theoretical framework for describing gestures [PDF].

For me, the demo and poster session is always the highlight of a conference. At ITS 2010 it took place at DFKI. There was a wealth of really cool demos and interesting posters. As I had to present my own poster (“Some Thoughts on a Model of Touch-Sensitive Surfaces” [PDF]), I did not find time to have a look at every demo. However, there was an amazing mixture of art, high-tech hardware, and applications. See the photos on Facebook!

While I liked some demos and posters more than others, I did not fill out my voting sheet for best poster or demo. More on this later.

Photo taken from the official ITS 2010 Facebook album: