Don't we need a positive publication bias? After all, there are many more ideas that don't work than ones that do. Why is it useful to allow negative results?
There are several benefits to publishing negative results. If an idea is well-motivated and intuitively appealing, it may be needlessly repeated by multiple groups who replicate the negative outcome, but do not have a venue for sharing this knowledge with the community (see the CVPR 2017 workshop on negative results for a more detailed discussion of the benefits of publishing negative outcomes).
How does exploratory data analysis fit into this model?
Exploratory analysis can come in multiple forms including:
(1) Small scale experiments (typically on toy data);
(2) Results listed in prior work.
Both should be reported in the proposal paper as part of the justification for your idea. Neither should be considered by the reader of your paper as providing confirmatory evidence in support of your hypothesis (the goal of preregistration is to make this distinction explicit). By contrast, the confirmatory experimental protocol which you propose should seek to rigorously evaluate your hypothesis and must be performed on different data from your own exploratory experiments. However, for practical reasons, it may use datasets that have also been previously used in the literature (further discussion below).
What's the rationale for changing the review model?
“Preregistration separates hypothesis-generating (exploratory) from hypothesis-testing (confirmatory) research. Both are important. But the same data cannot be used to generate and test a hypothesis, which can happen unintentionally and reduces the credibility of your results. Addressing this problem through planning improves the quality and transparency of your research, helping others who may wish to build on it.” (source: cos.io)
Will the papers be published in the ICCV proceedings?
Yes. The proposal papers will be published in the ICCV proceedings, and available on IEEE Xplore and CVF Open Access. Due to the non-standard nature of the review process (experiments are published only many months after review), the results will be made available as an addendum to the proposal paper.
Doesn't prior work on existing benchmarks weaken my confirmatory experiments?
Yes. Each prior result reported on a dataset leaks information that reduces its statistical utility (we are strongly in favour of limited-evaluation benchmarks for this reason). Unfortunately, from a pragmatic perspective, it is infeasible to expect every computer vision researcher to collect a new dataset for each hypothesis they wish to evaluate, so we must strike a reasonable balance here.
Is it OK to make changes to the preregistered experimental protocol?
Although you should endeavour to follow the proposed experimental protocol as closely as possible, you may find that it is necessary to make small changes or refinements. These changes should be carefully documented when reporting the experimental results: it is important to make clear which protocols have been modified after observing the evidence.
Where can I find more information about preregistration? There are a number of good resources for further reading around the ideas related to preregistration, including, but not limited to: