Computer science
Reinforcement learning
Artificial intelligence
Asynchronous communication
Architecture
Coding (set theory)
Graphics
Machine learning
Theoretical computer science
Programming language
Set (abstract data type)
Art
Computer network
Visual arts
Authors
Xusheng Zhao,Qiong Dai,Xu Bai,Jia Wu,Hao Peng,Huailiang Peng,Zhengtao Yu,Philip S. Yu
Identifier
DOI: 10.1109/TNNLS.2024.3392575
Abstract
Multiple instance learning (MIL) trains models from bags of instances, where each bag contains multiple instances, and only bag-level labels are available for supervision. The application of graph neural networks (GNNs) in capturing intrabag topology effectively improves MIL. Existing GNNs usually require filtering low-confidence edges among instances and adapting graph neural architectures to new bag structures. However, such asynchronous adjustments to structure and architecture are tedious and ignore their correlations. To tackle these issues, we propose a reinforced GNN framework for MIL (RGMIL), pioneering the exploitation of multiagent deep reinforcement learning (MADRL) in MIL tasks. MADRL enables the flexible definition or extension of factors that influence bag graphs or GNNs and provides synchronous control over them. Moreover, MADRL explores structure-to-architecture correlations while automating adjustments. Experimental results on multiple MIL datasets demonstrate that RGMIL achieves the best performance with excellent explainability. The code and data are available at https://github.com/RingBDStack/RGMIL.
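The abstract describes graph-based MIL at a high level: the instances inside each bag are connected into a graph, a GNN encodes the bag, and only the bag-level label supervises training. The sketch below is a minimal NumPy illustration of that generic pipeline only, not the authors' RGMIL method (which, per the abstract, additionally uses multiagent deep reinforcement learning to adjust the bag graph and the GNN architecture synchronously). Every function name (knn_bag_graph, gnn_layer, bag_logit), the k-NN graph construction, and all hyperparameters here are hypothetical choices for illustration.

```python
# Minimal sketch of graph-based multiple instance learning (MIL).
# NOT the authors' RGMIL implementation; all names and constants are illustrative.
import numpy as np

def knn_bag_graph(instances: np.ndarray, k: int = 3) -> np.ndarray:
    """Build a row-normalized k-NN adjacency matrix over one bag's instances."""
    n = instances.shape[0]
    dists = np.linalg.norm(instances[:, None, :] - instances[None, :, :], axis=-1)
    adj = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dists[i])[1 : min(k, n - 1) + 1]  # skip the instance itself
        adj[i, nbrs] = 1.0
    adj = np.maximum(adj, adj.T)          # symmetrize
    adj += np.eye(n)                      # add self-loops
    return adj / adj.sum(1, keepdims=True)  # row-normalized propagation matrix

def gnn_layer(h: np.ndarray, adj: np.ndarray, w: np.ndarray) -> np.ndarray:
    """One message-passing step: aggregate neighbors, linearly transform, ReLU."""
    return np.maximum(adj @ h @ w, 0.0)

def bag_logit(instances: np.ndarray, w1: np.ndarray, w_out: np.ndarray) -> float:
    """Bag graph -> GNN encoding -> mean pooling -> single bag-level score."""
    adj = knn_bag_graph(instances)
    h = gnn_layer(instances, adj, w1)
    bag_repr = h.mean(axis=0)             # pooled, since only bag labels exist
    return float(bag_repr @ w_out)

rng = np.random.default_rng(0)
bag = rng.normal(size=(7, 16))            # one bag: 7 instances, 16-dim features
w1 = rng.normal(scale=0.1, size=(16, 8))
w_out = rng.normal(scale=0.1, size=(8,))
print(bag_logit(bag, w1, w_out))
```

In this sketch the neighborhood size k and the single fixed GNN layer are hard-coded constants; in RGMIL, according to the abstract, such structure and architecture factors are instead adjusted synchronously by MADRL agents. The actual implementation is available at https://github.com/RingBDStack/RGMIL.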