Description

Title Trusting crowdsourced annotations
Abstract Being able to estimate the trustworthiness of crowdsourced contributions is crucial for maximising the quality of the results while reducing the uncertainty introduced by relying on large numbers of unknown workers. I present two approaches for assessing trust in crowdsourced annotations, based on a combination of RDF, machine learning (in particular, graph kernel based SVMs), statistics, and provenance analysis. I illustrate these approaches with case studies from the cultural heritage and maritime domains, and outline future directions for this work.