Abstract
In this research, we investigate whether a fine-grained model of the scientific publishing workflow can make the reviewing process more efficient and more accurate. To develop this model, called Linkflows, we build on existing ontologies, including the SPAR suite of ontologies, the FAIR Reviews ontology, PROV-O, and the Web Annotation Data Model. Our main contribution is the Linkflows model itself, which combines these existing ontologies with new classes and properties to provide a more detailed, semantically rich view of the reviewing process. We evaluate the efficiency and accuracy of applying Linkflows in the reviewing context on a manually curated dataset drawn from several recent open peer review computer science journals and conferences. We first analyze the reviews in this dataset according to the Linkflows model, and then conduct a user experiment comparing expert judgments with the actual peer reviewers' answers when using Linkflows. The results of this user study are preliminary; our aim is to determine whether the reviewing workflow, and the changes an article or a scientific text snippet undergoes, can be expressed at a finer-grained level. We also examine how several automatic lexicon-based sentiment analysis tools perform when applied to scientific reviews. Our initial findings suggest that these sentiment detection tools perform worse than experts, who are themselves not perfectly aligned with the ground truth. We also observe interesting correlations among different finer-grained aspects of the reviews when using the Linkflows model.