Hierarchical Imitation Learning with Contextual Bandits for Dynamic Treatment Regimes

Publication Date: 7/24/2021

Event: The Thirty-eighth International Conference on Machine Learning (ICML 2021)

Reference: pp. 1-12, 2021

Authors: Wenchao Yu, NEC Laboratories America, Inc.; Lu Wang, East China Normal University; Wei Cheng, NEC Laboratories America, Inc.; Bo Zong, Salesforce; Haifeng Chen, NEC Laboratories America, Inc.

Abstract: Imitation learning has proven effective at mimicking experts’ behaviors from their demonstrations without access to explicit reward signals. However, complex tasks, e.g., dynamic treatment regimes for patients with comorbidities, often exhibit significant variability in expert demonstrations across multiple sub-tasks. In such cases, a single flat policy can struggle to handle tasks with hierarchical structure. In this paper, we propose a hierarchical imitation learning model, HIL, that jointly learns latent high-level policies and sub-policies (for individual sub-tasks) from expert demonstrations without prior knowledge. First, HIL learns sub-policies by imitating expert trajectories with sub-task switching guidance from the high-level policies. Second, HIL collects feedback from its sub-policies to optimize the high-level policies, which are modeled as a contextual multi-armed bandit that sequentially selects the best sub-policy at each time step based on contextual information derived from the demonstrations. Compared with state-of-the-art baselines on real-world medical data, HIL improves the likelihood of patient survival and provides better dynamic treatment regimes by exploiting the hierarchical structure in expert demonstrations.
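
To make the two-level scheme concrete, below is a minimal sketch in Python/NumPy of a contextual multi-armed bandit selecting among imitation-learned sub-policies, in the spirit of the abstract. It is not the authors' implementation: LinUCB as the bandit algorithm, the linear behavioral-cloning sub-policies, the class names (`LinUCBBandit`, `SubPolicy`, `train_step`), and the reward signal (negated imitation error against the expert action) are all illustrative assumptions.

```python
import numpy as np

class LinUCBBandit:
    """Contextual multi-armed bandit (LinUCB) over K sub-policies.

    Illustrative stand-in for HIL's high-level policy: at each time step
    it selects the sub-policy with the highest upper confidence bound
    given the current context vector.
    """
    def __init__(self, n_arms, context_dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(context_dim) for _ in range(n_arms)]    # per-arm design matrices
        self.b = [np.zeros(context_dim) for _ in range(n_arms)]  # per-arm reward vectors

    def select(self, context):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                                    # ridge-regression estimate
            ucb = theta @ context + self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(ucb)
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

class SubPolicy:
    """Toy linear sub-policy fit by behavioral cloning (least squares)."""
    def __init__(self, state_dim, action_dim):
        self.W = np.zeros((action_dim, state_dim))

    def act(self, state):
        return self.W @ state

    def fit(self, states, actions):
        # states: (N, state_dim); actions: (N, action_dim)
        self.W = np.linalg.lstsq(states, actions, rcond=None)[0].T

def train_step(bandit, sub_policies, state, expert_action, context):
    """One high-level update: the bandit's reward is the (negated)
    imitation error of the chosen sub-policy on the expert's action --
    an assumed feedback signal, not the paper's exact objective."""
    k = bandit.select(context)                      # pick a sub-policy for this step
    pred = sub_policies[k].act(state)               # sub-policy proposes an action
    reward = -np.linalg.norm(pred - expert_action)  # feedback from the sub-policy
    bandit.update(k, context, reward)               # optimize the high-level policy
    return k, reward
```

In this sketch, exploration comes from the LinUCB confidence term, so the high-level policy keeps trying sub-policies whose fit to the current context is still uncertain, which loosely mirrors how HIL's bandit sequentially picks sub-policies from demonstration-derived context.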

Publication Link: https://icml.cc/virtual/2021/13078