FedSkill: Privacy Preserved Interpretable Skill Learning via Imitation

Publication Date: 8/10/2023

Event: 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2023)

Reference: pp. 1010-1019, 2023

Authors: Yushan Jiang, University of Connecticut; Wenchao Yu, NEC Laboratories America, Inc.; Dongjin Song, University of Connecticut; Wei Cheng, NEC Laboratories America, Inc.; Haifeng Chen, NEC Laboratories America, Inc.; Lu Wang, East China Normal University

Abstract: Imitation learning that replicates experts’ skills via their demonstrations has shown significant success in various decision-making tasks. However, two critical challenges still hinder the deployment of imitation learning techniques in real-world application scenarios. First, existing methods lack the intrinsic interpretability to explicitly explain the underlying rationale of the learned skill, thus making the learned policy untrustworthy. Second, due to the scarcity of expert demonstrations from each end user (client), learning a policy based on different data silos is necessary but challenging in privacy-sensitive applications such as finance and healthcare. To this end, we present a privacy-preserved interpretable skill learning framework (FedSkill) that enables global policy learning to incorporate data from different sources and provides explainable interpretations to each local user without violating privacy and data sovereignty. Specifically, our proposed interpretable skill learning model can capture the varying patterns in the trajectories of expert demonstrations and extract prototypical information as skills that provide implicit guidance for policy learning and explicit explanations in the reasoning process. Moreover, we design a novel aggregation mechanism coupled with the skill learning model to preserve global information utilization and maintain local interpretability under the federated framework. Thorough experiments on three datasets and empirical studies demonstrate that our proposed FedSkill framework not only outperforms state-of-the-art imitation learning methods but also exhibits good interpretability under a federated setting. Our proposed FedSkill framework is the first attempt to bridge the gaps among federated learning, interpretable machine learning, and imitation learning.
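
Illustrative sketch (not the FedSkill algorithm itself): the abstract describes clients that contribute to a shared policy while keeping prototype-based skill explanations local. The toy Python below assumes a plain federated-averaging step for the shared parameters and nearest-demonstration prototype refresh on each client; all class names, the update rule, and the averaging choice are assumptions for illustration only.

```python
# Purely illustrative: a toy federated loop where shared parameters are
# averaged across clients while skill prototypes stay on each client.
import numpy as np

def fedavg(client_params, client_sizes):
    """Weighted average of client parameter vectors (standard FedAvg)."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_params)                      # (num_clients, dim)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

class Client:
    def __init__(self, num_prototypes=4, dim=8, num_demos=20, seed=0):
        rng = np.random.default_rng(seed)
        self.demos = rng.normal(size=(num_demos, dim))     # toy trajectory features
        self.prototypes = rng.normal(size=(num_prototypes, dim))  # kept local
        self.num_demos = num_demos

    def local_update(self, global_params, lr=0.1):
        # Toy local step: move shared params toward the mean demo feature,
        # then snap each prototype to its nearest demonstration so the
        # "skills" remain interpretable and never leave the client.
        params = global_params + lr * (self.demos.mean(axis=0) - global_params)
        dists = np.linalg.norm(self.demos[:, None] - self.prototypes[None], axis=-1)
        self.prototypes = self.demos[dists.argmin(axis=0)]
        return params

dim = 8
global_params = np.zeros(dim)
clients = [Client(seed=s) for s in range(3)]
for _ in range(5):
    updates = [c.local_update(global_params) for c in clients]
    global_params = fedavg(updates, [c.num_demos for c in clients])
print("shared params after 5 rounds:", np.round(global_params, 3))
```

Only the averaged parameters cross the client boundary in this sketch; the prototype vectors, which stand in for the per-client explanations discussed in the abstract, are never transmitted.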

Publication Link: https://dl.acm.org/doi/10.1145/3580305.3599349