Title : Users and recommenders: how their behaviours affect each other

Presenter Zoltan Szlavik
Abstract In my WAI talk, I'm going to discuss results of our recent work on simulating the behaviour of users and designers of recommender systems. One of the questions I will be investigating is 'Will users be happier if I offer them personalised recommendations rather than overall popular items?' I will also talk about the KDD competition that started in mid-March; in case you become interested, our team is still accepting new members!

Title : Let’s Agree to Disagree: On the Evaluation of Vocabulary Alignment

Presenter Anna Tordai
Abstract Gold standard mappings created by experts are at the core of alignment evaluation. At the same time, the process of manual evaluation is rarely discussed. While the practice of having multiple raters evaluate results is accepted, their level of agreement is often not measured. In this paper we describe three experiments in manual evaluation and study the way different raters evaluate mappings. We used alignments generated using different techniques and between vocabularies of different types. In each experiment, five raters evaluated alignments and talked through their decisions using the think-aloud method. In all three experiments we found that inter-rater agreement was low, and we analyzed our data to find the reasons for it. Our analysis shows which variables can be controlled to affect the level of agreement, including the mapping relations, the evaluation guidelines and the background of the raters. On the other hand, differences in the perception of raters, and the complexity of the relations between often ill-defined natural language concepts, remain inherent sources of disagreement. Our results indicate that the manual evaluation of ontology alignments is by no means an easy task and that the ontology alignment community should be careful in the construction and use of reference alignments.