Computer science
Trajectory
Kernel (algebra)
Artificial intelligence
Control (management)
Machine learning
Human–computer interaction
Combinatorics
Mathematics
Physics
Astronomy
Authors
Zhiwei Song,Xiang Zhang,Shuhang Chen,Jieyuan Tan,Yiwen Wang
Source
Journal: IEEE Transactions on Cognitive and Developmental Systems
[Institute of Electrical and Electronics Engineers]
Date: 2024-10-24
Volume/Issue: 17 (3): 554-563
Citations: 4
Identifier
DOI:10.1109/tcds.2024.3485078
Abstract
Reinforcement learning (RL)-based brain–machine interfaces (BMIs) hold promise for restoring motor function in paralyzed individuals. These interfaces interpret neural activity to control external devices through trial and error. In brain control (BC) tasks, subjects continuously steer a device through space by imagining their own limb movements, and can change direction at any position before reaching the target. Such multistep BC tasks span a large space both in neural states and over sequences of movements. Conventional RL decoders, however, struggle with efficient exploration and receive limited guidance from delayed rewards. In this article, we propose a kernel-based actor–critic learning framework for multistep BC tasks. Our framework integrates continuous trajectory control (actor) and continuous internal state-value estimation (critic) from medial prefrontal cortex (mPFC) activity. We evaluate the algorithm's performance on a BC three-lever discrimination task using data from two rats, comparing it to a kernel RL decoder with internal binary rewards and delayed external rewards. Experimental results show that our approach achieves faster convergence, shorter target-acquisition times, and shorter distances to targets. These findings highlight the potential of our algorithm for clinical applications in multistep BC tasks.
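The abstract describes an actor–critic decoder in which a critic estimates continuous state values from neural activity and its temporal-difference (TD) error guides a continuous-control actor. The sketch below is a minimal, hypothetical illustration of that general scheme, not the authors' implementation: it assumes Gaussian (RBF) kernel features over a fixed dictionary of state centers, a TD(0) critic, and a policy-gradient-style actor update; all class and parameter names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class KernelActorCritic:
    """Toy kernel actor-critic: RBF features over fixed centers (illustrative only)."""

    def __init__(self, state_dim, action_dim, n_centers=50,
                 gamma=0.95, lr_a=0.05, lr_c=0.1, width=1.0):
        # Random dictionary of state centers; a real decoder would
        # build this from recorded neural states.
        self.centers = rng.normal(size=(n_centers, state_dim))
        self.w_c = np.zeros(n_centers)                # critic weights
        self.w_a = np.zeros((n_centers, action_dim))  # actor weights
        self.gamma, self.lr_a, self.lr_c, self.width = gamma, lr_a, lr_c, width

    def feats(self, s):
        # Gaussian kernel similarity between state s and each center
        d = self.centers - s
        return np.exp(-np.sum(d * d, axis=1) / (2.0 * self.width ** 2))

    def value(self, s):
        # Critic: continuous state-value estimate
        return float(self.feats(s) @ self.w_c)

    def act(self, s, noise=0.1):
        # Actor: continuous movement command; Gaussian noise drives exploration
        return self.feats(s) @ self.w_a + noise * rng.normal(size=self.w_a.shape[1])

    def update(self, s, a, r, s_next, done):
        k = self.feats(s)
        target = r + (0.0 if done else self.gamma * self.value(s_next))
        td = target - self.value(s)            # internal evaluation signal
        self.w_c += self.lr_c * td * k         # TD(0) critic update
        mean_a = k @ self.w_a
        # Reinforce action deviations that led to a positive TD error
        self.w_a += self.lr_a * td * np.outer(k, a - mean_a)
        return td
```

In this simplification, the TD error plays the role of the "internal continuous state value" signal mentioned in the abstract: it replaces the delayed external reward as a per-step learning signal for both the critic and the actor.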