Representation (politics)
Computer science
Artificial intelligence
Human-computer interaction
Theoretical computer science
Cognitive science
Psychology
Political science
Politics
Law
Authors
Weiqing Yan,S.-Z. Yao,Chang Tang,Wujie Zhou
Identifiers
DOI:10.1109/tnnls.2025.3546660
Abstract
Multiview data, characterized by rich features, are crucial in many machine learning applications. However, effectively extracting intra-view features and integrating inter-view information present significant challenges in multiview learning (MVL). Traditional deep network-based approaches often learn multiple layers to derive latent representations. In these methods, the features of different classes are typically embedded implicitly rather than organized systematically. This lack of structure makes it difficult to explicitly map classes to independent principal subspaces in the feature space, potentially causing class overlap and confusion. Consequently, it remains uncertain whether these representations accurately capture the intrinsic structure of the data. In this article, we introduce an innovative multiview representation learning (MVRL) method that maximizes two information-theoretic metrics: intra-view coding rate reduction and inter-view mutual information. Specifically, in intra-view representation learning, we optimize feature representations by maximizing the difference between the coding rate of the entire dataset and the coding rates of the individual classes. This process expands the overall feature space while compressing the representations within each class, yielding more compact feature representations within each view. We then align and fuse these view-specific features through space transformation and cross-sample fusion to achieve a consistent representation across multiple views. Finally, we maximize information transmission to maintain consistency and correlation among data representations across views. By maximizing the mutual information between the consensus representation and the view-specific representations, our method ensures that the learned representations capture more concise intrinsic features and correlations among different views, thereby enhancing the performance and generalization ability of MVL. Experiments show that the proposed method achieves excellent performance.
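The intra-view objective the abstract describes (maximizing the coding rate of the whole dataset minus the class-conditional coding rates) is the coding rate reduction principle. A minimal NumPy sketch of that quantity, under the assumption of features stored as columns of a d×n matrix and an illustrative distortion ε (the function names and the ε value are not from the paper), might look like:

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Rate R(Z, eps) = (1/2) log det(I + d/(n*eps^2) * Z Z^T).

    Z: d x n matrix whose columns are feature vectors.
    """
    d, n = Z.shape
    # slogdet is numerically safer than log(det(...)) for large matrices.
    sign, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)
    return 0.5 * logdet

def coding_rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum_j (n_j / n) * R(Z_j), summed over classes j.

    Expanding R(Z) spreads features across the space; shrinking the
    class terms compresses each class into a compact subspace.
    """
    _, n = Z.shape
    expand = coding_rate(Z, eps)
    compress = 0.0
    for c in np.unique(labels):
        Zc = Z[:, labels == c]
        compress += (Zc.shape[1] / n) * coding_rate(Zc, eps)
    return expand - compress
```

As a sanity check, two classes lying along orthogonal directions give a strictly positive reduction, since the whole dataset spans more of the space than either class alone; in the paper's method this quantity would serve as a (to-be-maximized) loss term per view, not as a standalone routine.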