Readersourcing

This ecosystem is mainly conceived and built upon the concept of Crowdsourcing, a neologism coined by Jeff Howe in 2006 and currently described in Wikipedia as the process in which "a task traditionally performed by an employee or contractor" is outsourced to an undefined, generally large group of people or community in the form of an open call. In particular, the term Readersourcing refers to a specific instantiation of this concept.
The final purpose of this ecosystem is to have readers of scholarly papers also participate in rating the content they read. There are several rationales for this approach. One is to overcome the increasingly frequent shortage of competent referees, caused mainly by the growing rate at which scientific articles are written and submitted for review nowadays, a growth that is not matched by an equivalent increase in the number of referees. Modern technologies and globalization have in fact provided several advantages to scientific writing, but they have not helped the peer-review process to the same extent, thus unbalancing the existing equilibrium between scientific writers and reviewers. Another good reason is to exploit, for free, the opinions that readers of a scholarly paper form after reading it: currently, these opinions are often wasted and forgotten, or spread in an informal and ineffective way.
The Readersourcing model aims to take advantage of reader opinions in order to overcome the referee shortage, and to follow the mass-collaboration, collective-intelligence, and wisdom-of-the-crowd principles enabled and enhanced by Web 2.0. Of course, simply allowing readers to express a judgment on the papers they read is not a reasonable approach by itself, as not all readers can be considered equally prepared and reliable; that is why the proposed model also assigns a rating to each reviewer, so that judgments from those who have proven to be good reviewers count more than those from reviewers who should not be trusted. This rating is implicitly and dynamically generated by the system through the continuous comparison of the judgments expressed by the readers on each paper with its current score; providing - or having provided - correct (wrong) judgments will therefore lead to higher (lower) reader ratings, hopefully generating a virtuous circle.
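The feedback loop described above can be sketched as an iterative fixed-point computation: paper scores are reputation-weighted means of reader judgments, and reader reputations grow or shrink with their agreement with the current consensus. The sketch below is a minimal illustration under assumed update rules (uniform initial reputations, mean-absolute-error penalty); function and variable names are hypothetical and do not reproduce the published Readersourcing model.

```python
def update_scores(ratings, n_iter=20):
    """Illustrative Readersourcing-style loop (assumed rules, not the published model).

    ratings: dict mapping (reader, paper) -> judgment in [0, 1].
    Returns (paper_score, reputation) dicts.
    """
    readers = {r for r, _ in ratings}
    papers = {p for _, p in ratings}
    reputation = {r: 1.0 for r in readers}   # every reader starts fully trusted
    paper_score = {p: 0.5 for p in papers}   # neutral prior score

    for _ in range(n_iter):
        # 1. Each paper's score is the reputation-weighted mean of its judgments.
        for p in papers:
            num = den = 0.0
            for (r, q), s in ratings.items():
                if q == p:
                    num += reputation[r] * s
                    den += reputation[r]
            paper_score[p] = num / den if den else 0.5
        # 2. A reader's reputation rises when their judgments agree with the
        #    current paper scores and falls when they diverge (mean abs. error).
        for r in readers:
            errs = [abs(s - paper_score[p])
                    for (u, p), s in ratings.items() if u == r]
            reputation[r] = 1.0 - sum(errs) / len(errs)

    return paper_score, reputation
```

For example, with two readers rating a paper highly and one outlier rating it low, a few iterations leave the outlier with the lowest reputation, so their judgment weighs less in the paper's final score.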

Additional Details

This ecosystem was recently presented at the IRCDL 2019 Conference. The original paper can be found on Zenodo, while the code and the related documentation are available on GitHub. Use the badges below to access these resources.

References