Readersourcing aims to take advantage of reader opinions in order to overcome the shortage of referees, and to follow the mass collaboration, collective intelligence, and wisdom-of-the-crowd principles enabled and enhanced by Web 2.0.
This ecosystem is mainly conceived and built upon the concept of Crowdsourcing, a neologism coined by Jeff Howe in 2006 and currently described in Wikipedia as the process in which a task traditionally performed by an employee or contractor "is outsourced to an undefined, generally large group of people or community in the form of an open call". The term Readersourcing refers to a specific instantiation of this concept.
The final purpose of this ecosystem is to have readers of scholarly papers participate in the rating of the content they read. There are several rationales for this approach. One is to overcome the increasingly frequent shortage of competent referees, caused mainly by the growing rate at which scientific articles are written and submitted for review, a growth that is not matched by a comparable growth in the number of referees. Modern technologies and globalization have brought several advantages to scientific writing, but they have not helped the peer reviewing process to the same extent, and they end up unbalancing the equilibrium between scientific writers and reviewers. Another reason is to exploit, at no cost, the opinions that readers of a scholarly paper form after reading it: currently, these opinions are often wasted and forgotten, or spread in a very informal and ineffective way.
The ecosystem is designed to collect the ratings expressed by readers and to compute scores according to different models. The Readersourcing model presented by Mizzaro in "Readersourcing - A Manifesto" and the TrueReview model proposed by De Alfaro and Faella in "TrueReview: A Platform for Post-Publication Peer Review" are currently implemented, and further models can be added in the future.
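As a rough sketch of how such pluggable models could be organized (the class and method names below are hypothetical and are not taken from the actual Readersourcing 2.0 code base), each model can be seen as a component that consumes the same stream of ratings and exposes the scores it computes:

```python
from abc import ABC, abstractmethod
from collections import defaultdict


class ScoringModel(ABC):
    """Common interface every scoring model is assumed to implement."""

    @abstractmethod
    def add_rating(self, reader_id: str, paper_id: str, rating: float) -> None:
        """Ingest a single rating expressed by a reader on a paper."""

    @abstractmethod
    def paper_score(self, paper_id: str) -> float:
        """Return the current aggregated score of a paper."""


class MeanModel(ScoringModel):
    """Toy baseline: a paper's score is the plain mean of its ratings.
    The actual Readersourcing and TrueReview models weight ratings by
    reader reliability instead of treating all readers equally."""

    def __init__(self) -> None:
        self._ratings = defaultdict(list)

    def add_rating(self, reader_id: str, paper_id: str, rating: float) -> None:
        self._ratings[paper_id].append(rating)

    def paper_score(self, paper_id: str) -> float:
        ratings = self._ratings[paper_id]
        return sum(ratings) / len(ratings) if ratings else 0.0


# Every registered model receives the same stream of ratings, so new
# models can be plugged in without touching the collection logic.
models = {"baseline-mean": MeanModel()}
for model in models.values():
    model.add_rating("reader-1", "paper-42", 8.0)
    model.add_rating("reader-2", "paper-42", 6.0)
print(models["baseline-mean"].paper_score("paper-42"))  # 7.0
```

The point of the sketch is only the design choice: rating collection stays decoupled from scoring, so a new model can be added without changing how ratings are gathered.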
We also provide a number of different ways for readers to express their ratings (a minimal submission sketch follows the list):
a web form through which you can evaluate a publication by providing the URL of its PDF;
a PDF annotation service that appends a link at the end of the file, so that you can express your evaluation at a later time;
a Google Chrome extension that allows you to evaluate a publication while you are reading it in the browser.
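Whatever the channel, the rating eventually reaches the server component. The following sketch shows what such a submission could look like over HTTP; the endpoint, field names, and rating scale are hypothetical, chosen only for illustration, and do not reflect the actual RS_Server API:

```python
import requests

# Hypothetical endpoint and payload: the real RS_Server API may differ.
response = requests.post(
    "https://readersourcing.example.org/api/ratings",
    json={
        "reader_id": "reader-1",
        "paper_url": "https://example.org/papers/some-paper.pdf",
        "rating": 8,  # assumed 1-10 scale
    },
    timeout=10,
)
response.raise_for_status()
print(response.json())
```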
Of course, simply allowing readers to express their judgment on the papers they read is not a reasonable approach on its own, as not all readers can be considered equally prepared and reliable; this is why the proposed model also assigns a rating to each reviewer, so that judgments from those who have proven to be good reviewers count more than judgments from those who should not be trusted. Such a rating is implicitly and dynamically generated by the system through the continuous comparison of the judgments expressed by readers on each paper with the paper's current score; providing (or having provided) correct judgments therefore leads to higher reader ratings, while wrong judgments lead to lower ones, hopefully generating a virtuous circle.
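The following minimal numeric sketch illustrates this co-determination. The update rule (reliability nudged toward the agreement between a reader's rating and the paper's current score) is a deliberate simplification for illustration and is not the formula used by the Readersourcing or TrueReview models:

```python
def updated_reliability(reliability: float, rating: float, paper_score: float,
                        scale: float = 10.0, learning_rate: float = 0.1) -> float:
    """Toy update: move the reader's reliability up when the rating is close
    to the paper's current score, and down when it is far from it.
    Reliability is assumed to lie in [0, 1]; ratings and scores in [0, scale]."""
    agreement = 1.0 - abs(rating - paper_score) / scale  # 1 = perfect agreement
    new_reliability = reliability + learning_rate * (agreement - reliability)
    return min(max(new_reliability, 0.0), 1.0)


# A reader whose rating is close to the consensus gains reliability...
print(updated_reliability(0.5, rating=8.0, paper_score=7.5))  # 0.545
# ...while a rating far from the consensus loses it.
print(updated_reliability(0.5, rating=2.0, paper_score=7.5))  # 0.495
```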
Soprano, M., Mizzaro, S.: Crowdsourcing Peer Review: As We May Do. In: Manghi, P., Candela, L., Silvello, G. (eds.) Digital Libraries: Supporting Open Science, IRCDL 2019, Communications in Computer and Information Science, vol. 988. Springer, Cham (2019), doi:10.1007/978-3-030-11226-4_21.
Soprano, M., Mizzaro, S.: Readersourcing 2.0: RS_Py, doi:10.5281/zenodo.3245208.
Soprano, M., Mizzaro, S.: Readersourcing 2.0: RS_PDF, doi:10.5281/zenodo.1442597.
Soprano, M., Mizzaro, S.: Readersourcing 2.0: RS_Rate, doi:10.5281/zenodo.1442599.
Soprano, M., Mizzaro, S.: Readersourcing 2.0: RS_Server, doi:10.5281/zenodo.1442630.
Soprano, M., Mizzaro, S.: Readersourcing 2.0: Technical Documentation, doi:10.5281/zenodo.1443371.
Mizzaro, S.: Evaluation in Academic Publishing: Crowdsourcing Peer Review? ERCIM News 113, 11--12 (2018).
De Alfaro, L., Faella, M.: TrueReview: A Platform for Post-Publication Peer Review. CoRR (2016), arxiv.org/abs/1608.07878.
Mizzaro, S.: Readersourcing - A Manifesto. JASIST 63(8), 1666--1672 (2012), doi:10.1002/asi.2266.
Cusinato, A., Della Mea, V., Di Salvatore, F., Mizzaro, S.: QuWi: Quality Control in Wikipedia. In: Proceedings of the 3rd Workshop on Information Credibility on the Web (WICOW '09), pp. 27--34. ACM, New York, NY, USA (2009), doi:10.1145/1526993.1527001.
Mizzaro, S.: Quality control in scholarly publishing: A new proposal. JASIST 54(11), 989--1005 (2003), doi:10.1002/asi.10296.