Abstract
This article does not describe a working system. Instead, it presents a single idea about representation that allows advances made by several different groups to be combined into an imaginary system called GLOM. The advances include transformers, neural fields, contrastive representation learning, distillation, and capsules. GLOM answers the question: How can a neural network with a fixed architecture parse an image into a part-whole hierarchy that has a different structure for each image? The idea is simply to use islands of identical vectors to represent the nodes in the parse tree. If GLOM can be made to work, it should significantly improve the interpretability of the representations produced by transformer-like systems when applied to vision or language.
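The core idea of "islands of identical vectors" can be illustrated with a toy sketch (this is an illustration only, not GLOM itself or Hinton's implementation; the function `find_islands`, the tolerance parameter, and the toy vectors are all hypothetical): each image location holds an embedding vector, and locations whose vectors are (nearly) identical are grouped into an island, each island standing for one node of the parse tree.

```python
import numpy as np

def find_islands(vectors, tol=1e-3):
    """Group location indices into islands of (near-)identical vectors.

    Hypothetical sketch: in GLOM's framing, each island of agreeing
    vectors would represent one node of the image's parse tree.
    """
    islands = []
    for i, v in enumerate(vectors):
        for island in islands:
            # Compare against the island's first member.
            if np.linalg.norm(vectors[island[0]] - v) < tol:
                island.append(i)
                break
        else:
            islands.append([i])
    return islands

# Toy example: 6 locations, two distinct part vectors -> two islands,
# i.e. two nodes in the parse tree.
part_a = np.array([1.0, 0.0])
part_b = np.array([0.0, 1.0])
vectors = [part_a, part_a, part_b, part_a, part_b, part_b]
islands = find_islands(vectors)
# islands -> [[0, 1, 3], [2, 4, 5]]
```

The grouping here is a naive nearest-match pass; the point is only that identical vectors at different locations can jointly encode a single higher-level node, which is what would make the learned representation inspectable.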