Artificial intelligence
Computer science
Artificial neural network
Salience (neuroscience)
Deep learning
Machine learning
Pattern recognition (psychology)
Authors
Bahareh Tolooshams,Sara Matias,Hao Wu,Simona Temereanca,Naoshige Uchida,Venkatesh N. Murthy,Paul Masset,Demba Ba
Identifier
DOI:10.1101/2024.01.05.574379
Abstract
The widespread adoption of deep learning to build models that capture the dynamics of neural populations is typically based on "black-box" approaches that lack an interpretable link between neural activity and network parameters. Here, we propose to apply algorithm unrolling, a method for interpretable deep learning, to design the architecture of sparse deconvolutional neural networks and obtain a direct interpretation of network weights in relation to stimulus-driven single-neuron activity through a generative model. We characterize our method, referred to as deconvolutional unrolled neural learning (DUNL), and show its versatility by applying it to deconvolve single-trial local signals across multiple brain areas and recording modalities. To exemplify use cases of our decomposition method, we uncover multiplexed salience and reward prediction error signals from midbrain dopamine neurons in an unbiased manner, perform simultaneous event detection and characterization in somatosensory thalamus recordings, and characterize the heterogeneity of neural responses in the piriform cortex and in the striatum during unstructured, naturalistic experiments. Our work leverages the advances in interpretable deep learning to gain a mechanistic understanding of neural activity.
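The abstract's central idea, algorithm unrolling, turns the iterations of a sparse-deconvolution solver into the layers of an interpretable network, so that the learned weights correspond to kernels in a generative model of single-neuron activity. A minimal sketch of this idea is a fixed number of unrolled ISTA iterations for a single-kernel 1D convolutional model. This is an illustrative toy, not the authors' DUNL implementation; the Gaussian kernel, sparsity weight `lam`, and layer count are all assumptions made for the demo.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm (the sparsity-inducing nonlinearity)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_ista_deconv(y, h, lam=0.1, n_layers=300):
    """Sparse deconvolution of y under the model y ≈ h * x with x sparse.

    Each loop iteration corresponds to one 'layer' of the unrolled network:
    a gradient step on the reconstruction error followed by soft thresholding.
    """
    h = np.asarray(h, dtype=float)
    h_flip = h[::-1]                     # correlation = adjoint of convolution
    eta = 1.0 / (np.linalg.norm(h, 1) ** 2 + 1e-12)  # safe step size
    x = np.zeros_like(y)
    for _ in range(n_layers):
        r = y - np.convolve(x, h, mode="same")       # residual
        x = soft_threshold(x + eta * np.convolve(r, h_flip, mode="same"),
                           eta * lam)
    return x

# Toy demo: sparse "events" convolved with an assumed kernel, plus noise.
rng = np.random.default_rng(0)
T = 200
x_true = np.zeros(T)
x_true[[30, 90, 150]] = [2.0, -1.5, 1.0]
t = np.arange(-10, 11)
h = np.exp(-0.5 * (t / 3.0) ** 2)        # odd-length Gaussian kernel (assumed)
y = np.convolve(x_true, h, mode="same") + 0.05 * rng.standard_normal(T)
x_hat = unrolled_ista_deconv(y, h, lam=0.2, n_layers=300)
```

In DUNL the kernels themselves are learned from data (and the unrolled layers are trained end to end); here the kernel is fixed only to show how the unrolled iterations map a noisy trace back to a sparse event code.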