Interaction-Grounded Learning for Contextual Markov Decision Processes with Personalized Feedback
Preprint   Open access


Mengxiao Zhang, Yuheng Zhang, Haipeng Luo and Paul Mineiro
ArXiv.org
Cornell University
February 9, 2026
DOI: 10.48550/arxiv.2602.08307
https://doi.org/10.48550/arxiv.2602.08307
Preprint (author's original). This preprint has not been evaluated by subject experts through peer review. Preprints may undergo extensive changes and/or become peer-reviewed journal articles. Open Access.

Abstract

In this paper, we study Interaction-Grounded Learning (IGL) [Xie et al., 2021], a paradigm designed for realistic scenarios where the learner receives indirect feedback generated by an unknown mechanism, rather than explicit numerical rewards. While prior work on IGL provides efficient algorithms with provable guarantees, those results are confined to single-step settings, restricting their applicability to modern sequential decision-making systems such as multi-turn Large Language Model (LLM) deployments. To bridge this gap, we propose a computationally efficient algorithm that achieves a sublinear regret guarantee for contextual episodic Markov Decision Processes (MDPs) with personalized feedback. Technically, we extend the reward-estimator construction of Zhang et al. [2024a] from the single-step to the multi-step setting, addressing the unique challenges of decoding latent rewards under MDPs. Building on this estimator, we design an Inverse-Gap-Weighting (IGW) algorithm for policy optimization. Finally, we demonstrate the effectiveness of our method in learning personalized objectives from multi-turn interactions through experiments on both a synthetic episodic MDP and a real-world user booking dataset.
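The abstract mentions an Inverse-Gap-Weighting (IGW) scheme for policy optimization. As an illustrative sketch only (not the paper's algorithm, whose details are not given here), the standard IGW rule from contextual bandits converts estimated per-action rewards into an exploration distribution: each non-greedy action receives probability inversely proportional to its reward gap from the best action, and the greedy action takes the remaining mass. The function name and the exploration parameter `gamma` below are assumptions for illustration.

```python
import numpy as np

def igw_sample(rewards, gamma, rng=None):
    """Sample an action via standard inverse-gap weighting (IGW).

    rewards: estimated reward for each of K actions.
    gamma:   exploration parameter; larger gamma concentrates
             probability on the greedy action.
    Non-greedy action a gets probability 1 / (K + gamma * gap_a),
    where gap_a = max(rewards) - rewards[a]; the greedy action
    receives the leftover probability mass.
    """
    rng = rng or np.random.default_rng()
    rewards = np.asarray(rewards, dtype=float)
    K = len(rewards)
    best = int(np.argmax(rewards))
    gaps = rewards[best] - rewards
    probs = 1.0 / (K + gamma * gaps)
    probs[best] = 0.0                  # reset, then assign remainder
    probs[best] = 1.0 - probs.sum()
    action = int(rng.choice(K, p=probs))
    return action, probs
```

With estimated rewards in hand (in the paper's setting, decoded from the learned reward estimator rather than observed directly), this kind of rule gives a valid distribution that favors the empirically best action while still exploring alternatives whose gap is small.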
Computer Science - Learning; Statistics - Machine Learning
