Description
Title | Trusting crowdsourced annotations |
Abstract | Being able to estimate the trustworthiness of crowdsourced contributions is crucial for maximising the quality of the results obtained, while reducing the uncertainty that comes from relying on large numbers of unknown workers. I present two approaches for assessing trust in crowdsourced annotations, based on a combination of RDF, machine learning (graph-kernel-based SVMs in particular), statistics, and provenance analysis. I will illustrate these approaches with case studies from the cultural heritage and maritime domains, and outline future directions for this work. |
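To make the graph-kernel-based SVM idea concrete, here is a minimal, hypothetical sketch (not the setup from the talk): each annotation's provenance graph is reduced to its multiset of vertex labels, a toy label-histogram graph kernel produces a Gram matrix, and scikit-learn's SVC is trained on that precomputed kernel. All graphs, labels, and class assignments below are invented for illustration.

```python
# Hypothetical sketch: scoring annotation trustworthiness with an SVM
# over a precomputed graph kernel. The toy kernel and the data are
# illustrative stand-ins, not the kernel or corpus used in the talk.
from collections import Counter

import numpy as np
from sklearn.svm import SVC


def label_histogram_kernel(graphs):
    """Gram matrix K[i, j] = dot product of the graphs' vertex-label counts."""
    labels = sorted({lab for g in graphs for lab in g})
    counts = [Counter(g) for g in graphs]
    vecs = np.array([[c[lab] for lab in labels] for c in counts], dtype=float)
    return vecs @ vecs.T


# Each "graph" is reduced to its multiset of vertex labels (e.g. PROV
# terms such as prov:Agent, prov:Activity) for this toy kernel.
train_graphs = [
    ["prov:Agent", "prov:Activity", "prov:Entity"],
    ["prov:Agent", "prov:Agent", "prov:Entity"],
    ["prov:Activity", "prov:Activity"],
    ["prov:Entity"],
]
y_train = [1, 1, 0, 0]  # 1 = trustworthy annotation, 0 = untrustworthy

K_train = label_histogram_kernel(train_graphs)
clf = SVC(kernel="precomputed").fit(K_train, y_train)

# For prediction, SVC expects the kernel between test and training
# graphs; predicting on K_train is a sanity check on the training set.
print(clf.predict(K_train))
```

In practice, a richer graph kernel computed over the full RDF provenance structure (rather than a bag of vertex labels) would replace the histogram kernel, while the precomputed-kernel SVM machinery stays the same.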
Other presentations by Davide Ceolin
Date | Title |
---|---|
08 February 2010 | Thinking about trust |
21 February 2011 | As far as we know... - How to derive safe trust assertions from a limited amount of opinions |
12 December 2011 | |
22 April 2013 | Police Open Data Reliability Analyses |
16 December 2013 | Estimating the trustworthiness of crowdsourced museum annotations |
23 June 2014 | Two Procedures for Analyzing the Reliability of Open Government Data |
09 March 2015 | Trusting crowdsourced annotations |
21 March 2016 | Web Data and Information Quality assessment |
10 October 2016 | Capturing the Ineffable: Collecting, Analysing, and Automating Web Document Quality Assessments |