Learning Action Embeddings for Off-Policy Evaluation

Authors
Matej Cief, Jacek Golebiowski, Philipp Schmidt, Ziawasch Abedjan, Artur Bekasov
Abstract

Off-policy evaluation (OPE) methods allow us to estimate the expected reward of a target policy using logged data collected by a different policy. However, when the number of actions is large, or certain actions are under-explored by the logging policy, existing estimators based on inverse-propensity scoring (IPS) can have high or even infinite variance. Saito and Joachims [13] propose marginalized IPS (MIPS), which uses action embeddings instead and thereby reduces the variance of IPS in large action spaces. MIPS assumes that good action embeddings can be defined by the practitioner, which is difficult in many real-world applications. In this work, we explore learning action embeddings from logged data. In particular, we use intermediate outputs of a trained reward model to define action embeddings for MIPS. This approach extends MIPS to more applications, and in our experiments improves upon MIPS with pre-defined embeddings, as well as standard baselines, on both synthetic and real-world data. Our method makes no assumptions about the reward model class, and supports using additional action information to further improve the estimates. The proposed approach presents an appealing alternative to the doubly robust (DR) estimator for combining the low variance of the direct method (DM) with the low bias of IPS.
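
The following is a minimal, illustrative sketch (not the authors' implementation) of the idea behind IPS and MIPS-style estimation. It assumes a simplified context-free setting with a hand-crafted, deterministic action-to-cluster mapping standing in for the action embedding; in the paper the embedding is instead derived from intermediate outputs of a trained reward model. All variable names and constants below are assumptions made for the example.

# Sketch: vanilla IPS vs. a MIPS-style estimator that reweights by the marginal
# distribution of a (here, discrete) action embedding rather than by raw action
# propensities. Context-free toy setting; not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
n, n_actions, n_clusters = 10_000, 100, 10

# Logged data: actions sampled from a logging policy, with observed rewards.
logging_probs = rng.dirichlet(np.ones(n_actions))
actions = rng.choice(n_actions, size=n, p=logging_probs)
rewards = rng.binomial(1, 0.1 + 0.4 * (actions % 2)).astype(float)

# Target policy whose value we want to estimate from the logged data.
target_probs = rng.dirichlet(np.ones(n_actions))

# Vanilla IPS: weight each logged reward by pi_target(a) / pi_logging(a).
# With many actions, these per-action weights can become very large.
ips_weights = target_probs[actions] / logging_probs[actions]
ips_estimate = float(np.mean(ips_weights * rewards))

# MIPS-style: map each action to an embedding (a coarse cluster id in this toy
# example) and reweight by the marginal embedding probabilities under the two
# policies; weights are better behaved when many actions share an embedding.
action_to_cluster = np.arange(n_actions) % n_clusters  # stand-in for a learned embedding
p_emb_logging = np.bincount(action_to_cluster, weights=logging_probs, minlength=n_clusters)
p_emb_target = np.bincount(action_to_cluster, weights=target_probs, minlength=n_clusters)
emb = action_to_cluster[actions]
mips_weights = p_emb_target[emb] / p_emb_logging[emb]
mips_estimate = float(np.mean(mips_weights * rewards))

print(f"IPS estimate:        {ips_estimate:.3f}")
print(f"MIPS-style estimate: {mips_estimate:.3f}")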

Organisational unit(s)
Databases and Information Systems Group
External organisation(s)
Brno University of Technology (VUT)
Kempelen Institute of Intelligent Technologies (KINIT)
Amazon Search
Amazon, London
Type
Article in conference proceedings
Pages
108-122
Number of pages
15
Publication date
20.03.2024
Publication status
Published
Peer-reviewed
Yes
ASJC Scopus subject areas
Theoretical Computer Science, Computer Science (all)
Electronic version(s)
https://doi.org/10.48550/arXiv.2305.03954 (Access: Open)
https://doi.org/10.1007/978-3-031-56027-9_7 (Access: Closed)