Friday, September 9th 2022

From idea to funding decision at the SNSF (prior to 2022)

| ID | Voter 1 | Voter 2 | Voter 3 | Voter 4 | Voter 5 | Voter 6 | Voter 7 | Voter 8 | Voter 9 | Voter 10 | Voter 11 | Average |
|----|---------|---------|---------|---------|---------|---------|---------|---------|---------|----------|----------|---------|
| #1 | C | AB | A | BC | B | AB | AB | A | AB | AB | B | 4.55 |
| #2 | C | AB | A | BC | COI | AB | AB | A | AB | AB | B | 4.6 |
| #3 | A | A | .. | .. | .. | .. | .. | .. | .. | C | A | 4.73 |
| #4 | A | AB | .. | .. | .. | .. | .. | .. | .. | COI | A | 5.63 |
| #5 | C | C | .. | .. | .. | .. | .. | .. | .. | C | BC | 2.33 |

Simple ranking based on averages - not optimal

Easily computed and communicated, but

  • - The effect of COIs and other abstentions seems arbitrary! (See the sketch below.)
  • - Highly influenced by outliers: reviewer or panel effects.
  • - What is a meaningful difference? It depends…
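A minimal R sketch of the simple averaging illustrated in the table above. The grade-to-number mapping (A = 6, AB = 5, B = 4, BC = 3, C = 2) is an assumption that happens to reproduce the averages shown for proposals #1 and #2; COIs and abstentions are simply dropped from the denominator, which is exactly why their effect on the ranking can feel arbitrary.

```r
# Assumed grade-to-number mapping (not official; it reproduces the table's averages)
grade_value <- c(A = 6, AB = 5, B = 4, BC = 3, C = 2)

# Votes for proposals #1 and #2 from the table; NA marks a COI / abstention
votes <- list(
  "#1" = c("C", "AB", "A", "BC", "B", "AB", "AB", "A", "AB", "AB", "B"),
  "#2" = c("C", "AB", "A", "BC", NA,  "AB", "AB", "A", "AB", "AB", "B")
)

# Simple average: votes with a COI are just removed from the denominator
sapply(votes, function(v) mean(grade_value[v], na.rm = TRUE))
# -> 4.545455 4.600000  (the 4.55 and 4.6 reported in the table)
```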


  • We need a method that
  • \(\rightarrow\) allows us to separate the scientific evaluation from the funding decision, and
  • \(\rightarrow\) defines the funding line and a lottery group in a consistent, transparent and reproducible way.

Possible solution: Bayesian Ranking combined with Lottery

  • Let’s assume that \(y_{ij}\) is the estimate of the quality of proposal \(i\) by voter \(j\).

  • Bayesian hierarchical model (given some priors) for the panel votes: \[y_{ij} \mid \theta_i, \lambda_{ij} \sim N(\bar{y} + \theta_i + \lambda_{ij}, \sigma^2),\] \[\theta_i \sim N(0, \tau^2_{\theta}), \qquad \lambda_{ij} \sim N(\nu_j, \tau^2_{\lambda}),\] where \(\theta_i\) is the proposal effect and \(\nu_j\) captures the grading behaviour of voter \(j\).


  • Then, we extract the posterior distribution of the rank of each \(\theta_i\) to obtain the Bayesian Ranking (see the sketch below).
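A minimal R sketch of how such a model could be fitted and how rank distributions can be read off the posterior draws. This is not the SNSF's production implementation (that lives in the ERforResearch package); the JAGS model follows the equations above, while the priors on \(\nu_j\), \(\sigma\) and the \(\tau\)'s, and the data layout (a long-format data frame `dat` with columns `proposal`, `voter`, `grade`), are assumptions for illustration.

```r
library(rjags)

# JAGS version of the hierarchical model above (priors are assumed, vague uniforms)
model_string <- "
model {
  for (n in 1:N) {
    y[n] ~ dnorm(ybar + theta[proposal[n]] + lambda[n], 1 / pow(sigma, 2))
    lambda[n] ~ dnorm(nu[voter[n]], 1 / pow(tau_lambda, 2))  # voter-by-proposal effect
  }
  for (i in 1:I) { theta[i] ~ dnorm(0, 1 / pow(tau_theta, 2)) }  # proposal effects
  for (j in 1:J) { nu[j]    ~ dnorm(0, 1 / pow(tau_nu, 2)) }     # voter grading habits
  sigma      ~ dunif(0, 10)
  tau_theta  ~ dunif(0, 10)
  tau_lambda ~ dunif(0, 10)
  tau_nu     ~ dunif(0, 10)
}"

# dat: long-format votes with integer proposal / voter indices and numeric grades
fit <- jags.model(textConnection(model_string),
                  data = list(y = dat$grade, proposal = dat$proposal, voter = dat$voter,
                              N = nrow(dat), I = max(dat$proposal), J = max(dat$voter),
                              ybar = mean(dat$grade)))
draws <- coda.samples(fit, variable.names = "theta", n.iter = 10000)

# Bayesian Ranking: rank the theta_i within every posterior draw,
# then summarise the resulting rank distribution per proposal
theta_draws <- as.matrix(draws)                 # iterations x proposals
rank_draws  <- t(apply(-theta_draws, 1, rank))  # rank 1 = best, per draw
rank_summary <- data.frame(
  proposal   = seq_len(ncol(rank_draws)),
  mean_rank  = colMeans(rank_draws),
  rank_lower = apply(rank_draws, 2, quantile, 0.025),
  rank_upper = apply(rank_draws, 2, quantile, 0.975)
)
```

The per-proposal rank intervals are what make it possible to draw a funding line and to place proposals whose ranks cannot be clearly separated into a lottery group.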

Funding Recommendation


  • 28% accepted (100 proposals)
    4% in the lottery group (12 proposals)
    68% rejected (241 proposals)

Pros and cons of Bayesian Ranking

  • + Quantifies uncertainty with respect to the true rank
    + Truly comparative ranking
    + Adjusts for grading habits of panel members (and possibly of panels)



  • - Higher complexity
    - Longer and more intensive computation needed

Bayesian Ranking adopted by the SNSF and fully integrated into the process in fall 2022

Lessons learned

  • Bayesian Ranking is a (still imperfect) decision making tool

  • Limitations and assumptions need to be clearly communicated

  • Development and implementation process needs to be communicated transparently, and all panel members should be included in the discussion (e.g., no black box)

  • Methodology implemented in the R package ERforResearch, available on GitHub
    Scientific publication available in Statistics and Public Policy, DOI: 10.1080/2330443X.2022.2086190

License


This presentation is licensed under a CC BY 4.0 International license: https://creativecommons.org/licenses/by/4.0/

Available from GitHub: rachelhey.github.io/talks/BR_chicago.

Please cite as: R Heyard, M Ott, J Bühler, G Salanti, M Egger “A Bayesian Approach to Address Bias in the Peer Review Ranking of Grant Proposals Submitted to the Swiss National Science Foundation”, International Congress on Peer Review and Scientific Publication, 2022.

Thank you!