The paper selected for the third astronomy twitter journal club meeting (8pm UK time, 7pm UT, Thursday 30th June) is Telescope Time Without Tears: A Distributed Approach to Peer Review.
The authors of the paper, Michael Merrifield and Donald Saari, put forward their ideas for improving the process of deciding who gets time on telescopes.
Currently, time on a telescope is generally allocated by a small panel of five or six astronomers. They make their decisions based on proposals submitted by the prospective observers, ranking them according to technical feasibility and scientific quality. The drawback of this process is that an immense amount of work can fall on a small number of people if, as is often the case, the telescope is heavily over-subscribed. This can mean there is insufficient time to properly assess the quality of individual proposals. It also provides a disincentive to take part:
“…since acquiring a reputation for doing this job well simply invites more of it to be heaped on you.”
The authors propose the following replacement for the current time allocation model:
1. As now, principal investigators (PIs) submit proposals against a set deadline, specifying in which sub-field their applications lie.
2. After the deadline, each of these N PIs is sent m proposals from other applicants in their sub-field.
3. If the PI has a conflict of interest (institutional or personal) on any application, he declares it, as in most current refereeing processes, and the application is replaced by an alternate.
4. The PI assesses these m applications, and compiles a ranked list, placing them in the order in which she thinks the community would rank them, not her personal preferences. How she carries out this process is up to the PI: she could, for example, call on the combined views of her co-applicants, or delegate the task to one co-investigator. As now, she is not allowed to communicate with the applicants on the proposals she is assessing.
5. The PIs all submit their ranked list of m applications. Failure to carry out this refereeing duty by a set deadline would lead to the PI’s own proposal being automatically removed from the ranking: the refereeing element should be viewed as much a part of the application as any other, and not carrying it out means that the proposal is incomplete and should be rejected.
6. These individual sub-lists of rankings are then combined to produce an optimized global list ranking all N applications.
7. Finally, each PI’s individual rankings are compared to the positions of the same m applications in the globally-optimized list. If both lists appear in approximately the same order, then the PI has done a good job and is rewarded by having his application moved a modest number of places up the final list relative to those from PIs who have not predicted the community’s view so well.
Rewarding ‘good’ reviewers by moving their applications up the rankings should only have a modest effect on the final list – proposals near the bottom won’t receive enough of a boost to propel them into the accepted category.
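To make the mechanics concrete, here is a minimal sketch of steps 2, 6 and 7 in Python. This is a hypothetical illustration only: the mean-rank ("Borda-like") aggregation and the simple pairwise agreement score used below are stand-ins I have chosen for readability, not the specific combination method (due to Saari) that the paper itself advocates.

```python
# Hypothetical sketch of the distributed review scheme described above.
# Assumptions: proposals are identified by their PI, sub-lists are
# combined by mean rank, and agreement with the community is measured
# as the fraction of pairs ordered the same way as the global list.
import itertools
import random


def assign_reviews(proposal_ids, m, seed=0):
    """Step 2: send each PI m proposals from other applicants."""
    rng = random.Random(seed)
    assignments = {}
    for pid in proposal_ids:
        others = [p for p in proposal_ids if p != pid]
        assignments[pid] = rng.sample(others, m)
    return assignments


def global_ranking(rankings):
    """Step 6: combine the sub-lists into one global list.

    rankings: {reviewer: [best, ..., worst]} — proposals are scored by
    their mean position across all the sub-lists that mention them.
    """
    positions = {}
    for ranked in rankings.values():
        for pos, pid in enumerate(ranked):
            positions.setdefault(pid, []).append(pos)
    mean_pos = {pid: sum(v) / len(v) for pid, v in positions.items()}
    return sorted(mean_pos, key=mean_pos.get)


def agreement(sub_list, global_list):
    """Step 7: how well did this reviewer predict the community's view?

    Returns the fraction of proposal pairs that the reviewer ordered
    the same way as the globally-optimized list.
    """
    pos = {pid: i for i, pid in enumerate(global_list)}
    pairs = list(itertools.combinations(sub_list, 2))
    same = sum(1 for a, b in pairs if pos[a] < pos[b])
    return same / len(pairs)
```

For example, if four PIs each rank three of the others' proposals, `global_ranking` produces a single ordered list, and a PI whose sub-list matches that ordering gets an `agreement` of 1.0; the final step would then nudge high-agreement PIs' own proposals a modest number of places up the list.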
Four limitations of this new allocation process are put forward in the paper. Firstly, a dishonest reviewer could steal the ideas in other people’s proposals; secondly, there would be no feedback on the submitted proposals; thirdly, there is no mechanism for pointing out obvious flaws in a proposal (e.g. the science has already been done, or the target is only observable from the other hemisphere); and finally, this system could result in a trend towards mediocre projects, since a ‘long-shot’ idea may fail to win support. Check out the paper to see how these issues could be addressed.
I chose this paper as I thought it would provoke some good discussions. The authors’ idea is an interesting suggestion for improving the time allocation process and I’m looking forward to seeing what people make of it.