MARLIN: Multi-Agent Reinforcement Learning for Incremental DAG Discovery

Publication Date: 1/27/2026

Event: 40th AAAI Conference on Artificial Intelligence (AAAI-26)

Reference: Proceedings of the AAAI Conference on Artificial Intelligence, 40(27), 22869-22877

Authors: Dong Li, Tianjin University; Zhengzhang Chen, NEC Laboratories America, Inc.; Xujiang Zhao, NEC Laboratories America, Inc.; Linlin Yu, Augusta University; Zhong Chen, Southern Illinois University; Yi He, The College of William and Mary; Haifeng Chen, NEC Laboratories America, Inc.; Chen Zhao, Baylor University

Abstract: Uncovering causal structures from observational data is crucial for understanding complex systems and making informed decisions. While reinforcement learning (RL) has shown promise in identifying these structures in the form of a directed acyclic graph (DAG), existing methods often lack efficiency, making them unsuitable for online applications. In this paper, we propose MARLIN, an efficient multi-agent RL-based approach for incremental DAG learning. MARLIN uses a DAG generation policy that maps a continuous real-valued space to the DAG space as an intra-batch strategy, then incorporates two RL agents — state-specific and state-invariant — to uncover causal relationships and integrates these agents into an incremental learning framework. Furthermore, the framework leverages a factored action space to enhance parallelization efficiency. Extensive experiments on synthetic and real datasets demonstrate that MARLIN outperforms state-of-the-art methods in terms of both efficiency and effectiveness.
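The abstract mentions a policy that maps a continuous real-valued space to the DAG space. The paper's actual construction is not reproduced here, but a common generic way to guarantee acyclicity from continuous outputs is to assign each node a real-valued score and only permit edges from lower-scored to higher-scored nodes. The sketch below illustrates that idea only; the function name, thresholding, and scoring scheme are illustrative assumptions, not MARLIN's method.

```python
import numpy as np

def vector_to_dag(scores, weights, threshold=0.3):
    """Illustrative mapping from a continuous representation to a binary DAG.

    NOTE: this is a generic ordering-based construction, not the policy
    from the MARLIN paper. Edges may only point from a lower-scored node
    to a higher-scored one, so the result is acyclic by construction.
    """
    d = len(scores)
    adj = np.zeros((d, d), dtype=int)
    for i in range(d):
        for j in range(d):
            # keep edge i -> j only if the node order allows it and the
            # continuous weight is strong enough
            if i != j and scores[i] < scores[j] and abs(weights[i, j]) > threshold:
                adj[i, j] = 1
    return adj

rng = np.random.default_rng(0)
scores = rng.normal(size=4)        # one continuous score per node
weights = rng.normal(size=(4, 4))  # continuous edge-strength matrix
dag = vector_to_dag(scores, weights)
```

Because the node scores induce a strict order, any such adjacency matrix is nilpotent, which is one way differentiable or RL-based structure learners keep every sampled graph acyclic.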

Publication Link: https://ojs.aaai.org/index.php/AAAI/article/view/39450