Review: Telescope time without tears: a distributed approach to peer review

The paper selected for the third Astronomy Twitter Journal Club meeting (8 p.m. UK time, 7 p.m. UT, Thursday 30th June 2011) was Telescope Time Without Tears: A Distributed Approach to Peer Review. Please see the preview of the paper for more details.

You can also see the unedited transcript of all Tweets.

The authors of the paper, Michael Merrifield and Donald Saari, put forward their ideas for improving the process of deciding who gets time on telescopes.

For the first time, one of the authors joined in with the discussion. Michael Merrifield created a Twitter account (@ProfMike_M) this week so that he could do so!

The current process relies on a Telescope Allocation Committee (TAC) to assess all the proposals for telescope time. This is time-consuming for each committee member. In the paper, Mike states he was:

“…motivated to contemplate the shortcomings of this procedure by the presence on his desk of a bulging file of 113 telescope applications that he had been sent to assess as a member of the European Southern Observatory (ESO) Observing Programme Committee.”

Other issues include:

  • There is a disincentive to do a good job: if you do it well, you’ll just be asked to do more assessments; if you do it badly, you won’t be asked again.
  • The situation is going to get worse: the number of applications is rising dramatically, whilst the amount of telescope time available is essentially fixed.
  • Applicants recognize that there is a larger random element in winning telescope time, and accordingly “buy more lottery tickets” by putting in larger numbers of applications.

The main objectives listed in the paper are:

  • Some incentive should be put in place to reduce the pressure toward ever more applications.
  • The workload of assessment should be shared equitably around the community.
  • The burden on each individual should be maintained at a reasonable level so that it is physically possible to do a fair and thorough job.
  • There should be some positive incentive to do the task well.
  • The ultimate ranked list of applications should be as objective a representation of the community’s perception of their relative scientific merits as possible.

The paper presents a solution whereby the workload is spread throughout the community: everyone who submits a proposal would take part in the review process by ranking a small subset of the other proposals. All the returned rankings would then be combined to form a master list. If a reviewer’s ranking matched the master list well, the reviewer’s own proposal would be promoted a few places; conversely, if it was a poor fit against the master list, the reviewer’s own proposal would be demoted a few places.
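To make the mechanics concrete, here is a minimal sketch in Python. The combination rule (mean normalised rank), the agreement measure (Spearman rank correlation), and the fixed two-place shift are our illustrative assumptions; the paper’s actual mechanism design is constructed more carefully.

```python
# A minimal sketch of the distributed-review scheme summarised above.
# The combination rule, agreement measure, and shift size are all
# illustrative assumptions, not the paper's exact mechanism.
from statistics import mean

def master_list(reviews):
    """Combine each reviewer's ordered sublist into one consensus list.

    reviews: dict mapping reviewer -> list of proposal ids, best first.
    Returns proposal ids sorted by mean normalised rank (0 = best).
    Assumes each sublist contains at least two proposals.
    """
    scores = {}
    for ranking in reviews.values():
        n = len(ranking)
        for pos, proposal in enumerate(ranking):
            scores.setdefault(proposal, []).append(pos / (n - 1))
    return sorted(scores, key=lambda p: mean(scores[p]))

def agreement(ranking, consensus):
    """Spearman rank correlation between a reviewer's ordering and the
    consensus, restricted to the proposals the reviewer actually saw."""
    common = [p for p in consensus if p in ranking]
    n = len(common)
    d2 = sum((ranking.index(p) - common.index(p)) ** 2 for p in common)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def final_list(reviews, own, shift=2):
    """Build the master list, then move each reviewer's own proposal up
    or down by `shift` places according to how well their ranking agreed
    with the consensus. own: dict mapping reviewer -> own proposal id."""
    consensus = master_list(reviews)
    final = list(consensus)
    for reviewer, ranking in reviews.items():
        delta = -shift if agreement(ranking, consensus) > 0 else shift
        pos = final.index(own[reviewer])
        proposal = final.pop(pos)
        final.insert(max(0, min(len(final), pos + delta)), proposal)
    return final
```

Note that a deliberately contrarian ranking scores a negative agreement and pushes the reviewer’s own proposal down the list, which is the incentive to review honestly that the paper relies on (and the answer to the “evil referee” question discussed below).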

The paper concludes:

“Here, an alternative is presented, whereby the task is distributed around the astronomical community, with a suitable mechanism design established to steer the outcome toward awarding this precious resource to those projects where there is a consensus across the community that the science is most exciting and innovative.”

On to the discussion…

[Note that acronyms are defined at the bottom of this post]

Agreement that there is a workload problem

There was a general acknowledgment that there is a problem that needs addressing:

“It’s a good day to have this discussion: the deadline for the ALMA early science call was today. There were >900 proposals!” – @astronomyjc

“The ALMA process is a good case in point: about 10:1 over-subscribed, and some ppl on many many proposals” – @Matt_Burleigh

“ESO TAC is currently 10 days of reading and a week of meetings, twice a year. Yuck!” and “ESO still hovering at around 1000 proposals per semester” – @ProfMike_M

Current system positives

“One additional pro of UK process is that we ask for referees for each proposal, who send detailed comments to panelists.” – @Matt_Burleigh

“Expert review may be valuable even if inconsistently available.” – @NGC3314

“…panel discussion can change my mind” – @Matt_Burleigh

Current system negatives

Apart from the workload and oversubscription issues already mentioned, other problems with the current system were highlighted:

“A common complaint is TAC feedback indicates panelists missed important info or points, indicating haven’t read properly…” – @Matt_Burleigh

And there was confirmation of a culture of astronomers submitting more applications to improve their chances of success:

“the ‘buying more lottery tickets’ phenomenon: we see it today with ALMA, & I see it every year with HST” – @Matt_Burleigh

A shadow panel experiment

“Very little data on whether current system works. HST did shadow panel, and found *zero* correlation in outcome!” – @ProfMike_M

“Same system; same proposals; different panel – completely different outcome!”  – @ProfMike_M

“a complete duplication of the panel. Plot of rank panel a versus rank panel b = scatter diagram!”  – @ProfMike_M

“Very subjective then. Sounds like an ‘averaging system’ would help come to a consensus” – @kashfarooq

“Switching to the system proposed in the paper should smooth the results out” – @astronomyjc
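As an aside, the agreement between two panels can be quantified with a rank correlation. Here is a minimal sketch with made-up rankings; Spearman’s rho is our choice of statistic, not necessarily the one used in the HST shadow-panel exercise.

```python
# A minimal sketch of quantifying a shadow-panel test: rank the same
# proposals with two independent panels and measure the rank correlation.
# A "scatter diagram" of ranks corresponds to rho near zero.

def spearman_rho(rank_a, rank_b):
    """rank_a, rank_b: dicts mapping proposal id -> rank (1 = best),
    covering the same proposals, with no ties."""
    n = len(rank_a)
    d2 = sum((rank_a[p] - rank_b[p]) ** 2 for p in rank_a)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical example: two panels that largely disagree.
panel_a = {"P1": 1, "P2": 2, "P3": 3, "P4": 4, "P5": 5}
panel_b = {"P1": 4, "P2": 1, "P3": 5, "P4": 2, "P5": 3}
print(spearman_rho(panel_a, panel_b))  # -0.1: almost no agreement
```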

Data on the current system

In response to the statement “I think we need more data on the current system and a trial of alternatives” (by @ProfMike_M), the following was revealed:

“Looking over stats from a previous HST cycle. Rankings 1 thru 5. Top/bottom proposals avg: 1.9/4.8 (0.6 sigma for both)” – @Paul_Crowther

“Similar test on ESO data revealed 1-sigma shifts a proposal from bottom quartile to top quartile.” – @ProfMike_M [This ESO report is due soon]

“Issue with small (N=5) sub-panels, as operated by ESO. Sigma more robust for larger (N=10) HST sub-panels, but agree [that this would be expensive to try]” – @Paul_Crowther

“Better than comparing duplicate panels would be checking that accepted proposals result in good science? And also keeping track of if rejected proposals eventually happen and work” – @KarenLMasters

“No way to know now (shadow panel?) whether rejected proposals (not even counting mine) would not yield good science.” – @NGC3314
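To see how grading noise of this size can shuffle a ranked list, here is a minimal simulation. All the numbers (1000 proposals, grades on a 1-to-5 scale with 1 best, noise comparable to the intrinsic spread) are assumptions for illustration, not the actual ESO or HST data.

```python
# A minimal simulation of the point above: with grading noise of order
# 1 sigma, re-grading the same proposals moves some of them between
# quartiles. All numbers here are made up for illustration.
import random

random.seed(1)
N = 1000
merit = [random.gauss(3.0, 0.7) for _ in range(N)]  # "true" quality, 1 = best
NOISE = 0.7                                         # per-panel grading scatter

def ranks(grades):
    """Map proposal index -> rank position (0 = best, i.e. lowest grade)."""
    order = sorted(range(N), key=lambda i: grades[i])
    return {i: pos for pos, i in enumerate(order)}

def quartile(pos):
    return 4 * pos // N  # 0 = top quartile, 3 = bottom quartile

panel_1 = ranks([m + random.gauss(0, NOISE) for m in merit])
panel_2 = ranks([m + random.gauss(0, NOISE) for m in merit])

moved = sum(1 for i in range(N)
            if quartile(panel_1[i]) == 3 and quartile(panel_2[i]) == 0)
print(f"{moved} proposals moved from bottom to top quartile on re-grading")
```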

Has the proposed system been tried anywhere?

“ESO implemented the scheme for allocating Director’s Fund to employees.” – @ProfMike_M

“Feedback was that junior staff not happy trying to decide between study leave and a new filter.” – @ProfMike_M

[ESO staff can fund sabbaticals from the fund, in competition with building a new bit of instrumentation]

“So, has any telescope shown an interest in trying it? That is the only true test…” – @kashfarooq

“Yes (but they want to keep it quiet until they decide!)” – @ProfMike_M

Questions and answers about the proposed system

Question:

“Is there currently a culture with proposers wanting to keep their ideas secret? Or at least known to a smaller group?” – @kashfarooq

Answers:

“Definitely a concern, though not convinced it is a real issue.” – @ProfMike_M

“Yes. I strongly suspect ideas have been nicked over the years as a result of proposals! Can’t prove of course..” – @Matt_Burleigh

“Same number of people see each proposal as in current system (~5-10). Just different sets for each proposal.” – @ProfMike_M

Question:

“Would the new system result in people from the relevant fields seeing a paper they normally wouldn’t? Especially other researchers trying to make discoveries in the same area?” – @kashfarooq

Answers:

“Yes, this could be an issue” – @chris_tibbs

“As now with TACs, assessors would have to be honest and declare conflicts of interest before reading.” – @ProfMike_M

“The best ideas are made public anyway as the proposal abstracts are often put on ADS” – @nialldeacon

Question:

“A referee may not be “evil”. They may be right & everyone else deceived by a sexy or devious proposal. How to mitigate for that?” – @Matt_Burleigh

[The paper suggests that “evil” referees who deliberately mark proposals down would be penalized, as their list order would not match the other referees’ list orders; hence the “evil” referee’s own proposal would drop down the pecking order.]

Answers:

“Make distributed review more like real panel in 2 stages: first comments circulated between all readers, then vote” – @ProfMike_M

“There should be a way to flag up crazy proposals & explain reasons so you’re not seen as an ‘evil reviewer'” – @astronomyjc

Question:

“Balance needed between low risk (incremental) & high risk (high potential). Latter lost in distributed TAC?” – @Paul_Crowther

Answers:

“Guidance to proposers/referees tells them to rank innovation highly. They will because they know everyone else is.” – @ProfMike_M

Question:

“How do you get a fair share of theorists and simulators involved in the review process in the proposed system?” – @MarcelAstroph

Answers:

“Good point – non-observing theorists could potentially be left out of this system altogether” – @astronomyjc

“I’d like to see them as coIs on proposals (very useful for simulating the plausibility of the observation!)” – @ProfMike_M

Would you agree to switch to this system for your next proposal?

“No!” – @ProfMike_M (Tongue in cheek?)

“No, but I think it stimulates thought about how the present system can be improved” – @Matt_Burleigh

“I think we need more data on the current system and a trial of alternatives.” – @ProfMike_M

“I’m pro if one COULD get involved w/o writing proposal if they wish (groups cant complain about underrep. if they refuse service)” – @MarcelAstroph

“A real test at the real telescope would be nice – to see the method in action …” – @khanzadyan

“I think I like @ProfMike_M’s proposal. I’d prefer to know a consensus view was being reached, than a small number of people had all the power.” – @KarenLMasters

“I’d like to give this system (or another alternative) a go – it’d be interesting to see how different the results are” – @astronomyjc

Conclusion

No overall conclusion was reached – some people liked the idea, some thought it wouldn’t work.

However, the overall view seemed to be that we would just have to try it. @ProfMike_M mentioned that a telescope is trying it – we eagerly await the results.

Thanks to everyone who took part and we hope you enjoyed it.

Acronyms

  • ADS – Astrophysics Data System (an online astronomy paper archive)
  • ALMA – Atacama Large Millimeter/Submillimeter Array
  • CoI – co-investigator
  • ESO – European Southern Observatory
  • HST – Hubble Space Telescope
  • PI – principal investigator (project leader)
  • TAC – Telescope Allocation Committee