Do 'Faculty of 1000' (F1000) ratings of ecological publications serve as reasonable predictors of their future impact?

David A. Wardle


There is an increasing demand for an effective means of post-publication evaluation of ecological work that avoids the pitfalls associated with using the impact factor of the journal in which the work was published. One approach that has been gaining momentum is the 'Faculty of 1000' (hereafter F1000) evaluation procedure, in which panel members identify what they believe to be the most 'important' recent publications they have read. Here I focused on 1530 publications from 7 major ecological journals that appeared in 2005, and compared the F1000 rating of each publication with the frequency with which it was subsequently cited. The mean and median citation frequencies of the 103 publications highlighted by F1000 were higher than those of all 1530 publications, but not substantially so. Further, the F1000 procedure did not highlight any of the 11 publications that were each cited over 130 (and up to 497) times, while it did highlight 14 publications that were each cited between 4 and 9 times. Moreover, 46% and 31% of all publications highlighted by F1000 were cited less often than the mean and median, respectively, of all 1530 publications. Possible reasons for the F1000 process failing to identify high-impact publications include uneven coverage by F1000 of different ecological topics, cronyism, and geographical bias favoring North American publications. As long as the F1000 process cannot identify those publications that subsequently have the greatest impact, it cannot be reliably used as a means of post-publication evaluation of the ecological literature.
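The abstract's comparison rests on simple summary statistics: the mean and median citation counts of the F1000-highlighted subset versus the full set, and the fraction of highlighted papers cited below those benchmarks. A minimal sketch of that calculation is given below; the citation counts are invented for illustration and are not the study's data.

```python
# Sketch of the abstract's comparison, using invented citation counts
# (the actual study covered 1530 publications from 7 journals).
from statistics import mean, median

all_citations = [3, 9, 14, 22, 40, 55, 61, 78, 130, 497]  # hypothetical counts
f1000_flagged = [4, 9, 14, 40, 61]                        # hypothetical subset

def share_below(subset, threshold):
    """Fraction of the subset cited less often than a threshold."""
    return sum(c < threshold for c in subset) / len(subset)

# Compare central tendency of the flagged subset against all publications.
print("all:     mean =", mean(all_citations), " median =", median(all_citations))
print("flagged: mean =", mean(f1000_flagged), " median =", median(f1000_flagged))

# Fraction of flagged papers cited below the overall mean and median,
# analogous to the 46% and 31% figures reported in the abstract.
print("below overall mean:  ", share_below(f1000_flagged, mean(all_citations)))
print("below overall median:", share_below(f1000_flagged, median(all_citations)))
```

With these made-up numbers the flagged subset falls well below the overall mean, illustrating how a highlighted set can still underperform the full population on such benchmarks.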


bibliometric analysis; citations; F1000; Faculty of 1000; impact factor



This work is licensed under a Creative Commons Attribution 3.0 License.