Preregistration in a nutshell

Separate the generation and confirmation of hypotheses:

  1. Come up with an exciting research question
  2. Write a paper proposal without confirmatory experiments
  3. After the paper is accepted, run the experiments and report your results

What does science get?

  • A healthy mix of positive and negative results
  • Reasonable ideas that don’t work still get published, avoiding wasteful replications
  • Papers are evaluated on the basis of scientific interest, not whether they achieve the best results

What do you get?

  • It's easier to plan your research: get feedback before investing in lengthy experiments
  • Your research is stronger: results have increased credibility
  • Readers know they will learn something from your paper even if the result is negative

Important dates

Preregistration deadline: 20th July 2019 (writing only, no experiments)
Rebuttal period: 6th - 13th August
Experiments period: 19th August - 26th October
ICCV 2019 workshop: 2nd November

Call for papers

Topics and scope: Any topic within the scope of the main ICCV conference is welcome. For example:

  • Proposal of a theoretical model or explanation for a phenomenon, with empirical investigation.
  • Proposal of a new family of methods and evaluation of different variants in a computer vision setting.
  • Review of known algorithms, hyper-parameters or architectures to suggest best practices.
  • Papers with an emphasis on user studies, such as those on active learning.

Frequently asked questions

  • Don't we need a positive publication bias? After all, there are many more ideas that don't work than ones that do. Why is it useful to allow negative results? There are several benefits to publishing negative results. If an idea is well-motivated and intuitively appealing, it may be needlessly repeated by multiple groups who replicate the negative outcome, but have no venue for sharing this knowledge with the community (see the CVPR 2017 workshop on negative results for a more detailed discussion of the benefits of publishing negative outcomes).
  • How does exploratory data analysis fit into this model? Exploratory analysis can come in multiple forms, including: (1) small-scale experiments (typically on toy data); (2) results listed in prior work. Both should be reported in the proposal paper as part of the justification for your idea. Neither should be considered by the reader of your paper as providing confirmatory evidence in support of your hypothesis (the goal of preregistration is to make this distinction explicit). By contrast, the confirmatory experimental protocol which you propose should seek to rigorously evaluate your hypothesis and must be performed on data distinct from that used in your exploratory experiments. However, for practical reasons, it may use datasets that have also been previously used in the literature (further discussion below). A minimal code sketch of this separation follows the FAQ.
  • What's the rationale for changing the review model? “Preregistration separates hypothesis-generating (exploratory) from hypothesis-testing (confirmatory) research. Both are important. But the same data cannot be used to generate and test a hypothesis, which can happen unintentionally and reduces the credibility of your results. Addressing this problem through planning improves the quality and transparency of your research, helping others who may wish to build on it.” (source: cos.io)
  • Will the papers be published in the ICCV proceedings? Yes. The proposal papers will be published in the ICCV proceedings, and available on IEEE Xplore and CVF Open Access. Due to the non-standard nature of the review process (experiments are published only many months after review), the results will be made available as an addendum to the proposal paper.
  • Doesn't prior work on existing benchmarks weaken my confirmatory experiments? Yes. Each prior result reported on a dataset leaks information that reduces its statistical utility (we are strongly in favour of limited-evaluation benchmarks for this reason). Unfortunately, from a pragmatic perspective, it is infeasible to expect every computer vision researcher to collect a new dataset for each hypothesis they wish to evaluate, so we must strike a reasonable balance here.
  • Is it OK to make changes to the preregistered experimental protocol? Although you should endeavour to follow the proposed experimental protocol as closely as possible, you may find that it is necessary to make small changes or refinements. These changes should be carefully documented when reporting the experimental results: it is important to make clear which protocols have been modified after observing the evidence.
  • Where can I find more information about preregistration? There are a number of good resources for further reading on the ideas related to preregistration, including, but not limited to, the Center for Open Science (cos.io) quoted above.
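
To make the exploratory/confirmatory separation discussed above concrete, here is a minimal Python sketch. It is purely illustrative and not a workshop requirement; the function name, the fixed seed and the 20% exploratory fraction are hypothetical choices, not prescriptions.

    # Illustrative only: partition the data once, up front, so that the
    # confirmatory set is never inspected during hypothesis generation.
    import random

    def split_for_preregistration(samples, exploratory_fraction=0.2, seed=0):
        rng = random.Random(seed)  # fixed seed keeps the split reproducible
        shuffled = list(samples)
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * exploratory_fraction)
        # Exploratory set: free to iterate on, visualise and tune against.
        # Confirmatory set: touched exactly once, when running the
        # preregistered experimental protocol.
        return shuffled[:cut], shuffled[cut:]

    exploratory, confirmatory = split_for_preregistration(range(1000))

In practice the confirmatory data may simply be an existing benchmark, as noted above; the essential point is that it plays no role in generating the hypothesis.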

Organisers

João F. Henriques (University of Oxford)
Samuel Albanie (University of Oxford)
Luca Bertinetto (FiveAI)
Jack Valmadre (Google Research)

Questions?