Abstract
This talk will be about my PhD project proposal: how to represent and reason about normative (e.g. legal, regulatory, and guideline) documents, and how to incorporate this reasoning into Reinforcement Learning approaches for personalization. Significant effort has been put into the formalization of normative documents and reasoning over them. However, existing work mostly focuses on offline (design-time or after-the-fact) compliance checking, whereas we want to integrate reasoning with learning. At the same time, a trend of increasingly invasive autonomous agents has led to societal and academic concerns about the safety of such agents. ML researchers have responded with a variety of novel approaches, yet coming up with sensible constraints is often left to the user. I will briefly introduce the personalization use case (a collaboration with ING), present a high-level overview of the proposed approach, and elaborate on its academic challenges.