Active learning of preferences based on models in the context of recommender systems
Type: Doctorate
Keywords: Recommender systems, active learning, preference elicitation
Description
The goal of this work is to contribute to a more mathematical approach to recommender systems through the study of active learning methods. The general principle of active learning is to elicit information directly from the user in order to recommend the best possible solutions [1].
This work will rely on models of user preferences and of the errors the user may make when answering questions. It will also draw on principles such as Bayesian elicitation [2] and the criterion of minimax regret optimization [3], in collaboration with Paolo Viappiani (LIP6, Paris).
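To illustrate the minimax regret criterion mentioned above, here is a minimal sketch under simplifying assumptions not stated in the announcement: items are scored by a linear utility u(x; w) = w·x, and the uncertainty about the user's weight vector w is represented by a finite sample of plausible weight vectors. The recommended item is the one minimizing its worst-case regret over all alternatives and all plausible weights.

```python
import numpy as np

# Hypothetical setup: 5 candidate items with 3 features each, and 100
# sampled weight vectors representing uncertainty about user preferences.
rng = np.random.default_rng(0)
items = rng.random((5, 3))
W = rng.random((100, 3))

# U[k, i] = utility of item i under weight vector W[k]
U = W @ items.T

# Pairwise max regret: PMR(x, y) = max over w of u(y; w) - u(x; w),
# the worst loss from recommending x when y was available.
PMR = np.max(U[:, None, :] - U[:, :, None], axis=0)  # PMR[x, y]

# Max regret of recommending x: worst case over all alternatives y.
MR = PMR.max(axis=1)

# Minimax regret recommendation: the item with the smallest max regret.
recommendation = int(np.argmin(MR))
```

In the conversational setting of [3], each answered query shrinks the set of plausible weight vectors, which can only decrease the max regret of the final recommendation.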
This study has two main goals. First, to design a questioning strategy that maximizes, after each question, the information gained about the model parameters. Second, to optimize the recommendation given the information gathered through questioning.
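A question-selection strategy of the kind described in the first goal can be sketched as follows. This is a simplified, hypothetical example (not the announcement's method): the posterior over preference parameters is a discrete distribution over sampled weight vectors, queries are pairwise comparisons answered under a logistic error model, and the next question is chosen greedily to minimize the expected entropy of the updated posterior, i.e. to maximize expected information gain.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (0 log 0 = 0)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Hypothetical setup: 4 items, discrete posterior over 50 sampled
# weight vectors for a linear utility u(x; w) = w . x.
rng = np.random.default_rng(1)
items = rng.random((4, 3))
W = rng.random((50, 3))
posterior = np.full(len(W), 1 / len(W))  # uniform prior

def response_prob(i, j):
    """P(user says 'i preferred to j' | w), under a logistic error model."""
    diff = W @ (items[i] - items[j])
    return 1 / (1 + np.exp(-diff))

def expected_posterior_entropy(i, j):
    """Expected entropy of the posterior after asking 'i vs j?'."""
    p_yes_w = response_prob(i, j)
    p_yes = np.sum(posterior * p_yes_w)
    post_yes = posterior * p_yes_w / p_yes          # Bayes update, answer 'i'
    post_no = posterior * (1 - p_yes_w) / (1 - p_yes)  # Bayes update, answer 'j'
    return p_yes * entropy(post_yes) + (1 - p_yes) * entropy(post_no)

# Greedy strategy: ask the comparison that minimizes expected posterior
# entropy, i.e. maximizes the expected information gained per question.
pairs = [(i, j) for i in range(len(items)) for j in range(i + 1, len(items))]
best_query = min(pairs, key=lambda q: expected_posterior_entropy(*q))
```

By concavity of entropy, the expected posterior entropy of any query is at most the entropy of the current posterior, so every question has non-negative expected information gain; the greedy strategy simply picks the largest.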
Other applications include, for example, strategic decisions in large industries, where it is important that the recommendation be explained by an explicit model.
[1] B. Settles. "Active learning". In: Synthesis Lectures on Artificial Intelligence and Machine Learning 6.1 (2012), pp. 1–114.
[2] P. Viappiani and C. Boutilier. "Optimal Bayesian recommendation sets and myopically optimal choice query sets". In: Advances in Neural Information Processing Systems. 2010, pp. 2352–2360.
[3] P. Viappiani and C. Boutilier. "Regret-based optimal recommendation sets in conversational recommender systems". In: Proceedings of the third ACM conference on Recommender systems. ACM. 2009, pp. 101–108.