Keywords
Computer science, Transformer, Convolutional neural network, Artificial intelligence, Kaleidoscope, Inductive bias, Restriction, Artificial neural network, Pattern recognition (psychology), Voltage, Multi-task learning, Engineering, Economics, Management, Physics, Mechanical engineering, Programming language, Quantum mechanics, Task (project management)
Authors
Marlon Bran Lorenzana, Craig Engstrom, Feng Liu, Shekhar S. Chandra
Source
Journal: Cornell University - arXiv
Date: 2022-01-01
Identifier
DOI: 10.48550/arxiv.2203.12861
Abstract
Convolutional neural networks (CNNs) have demonstrated outstanding Compressed Sensing (CS) performance compared to traditional, hand-crafted methods. However, they are broadly limited by poor generalisability, a restrictive inductive bias, and difficulty in modelling long-distance relationships. Transformer neural networks (TNNs) overcome such issues by implementing an attention mechanism designed to capture dependencies between inputs. However, high-resolution tasks typically require vision Transformers (ViTs) to decompose an image into patch-based tokens, limiting inputs to inherently local contexts. We propose a novel image decomposition that naturally embeds images into low-resolution inputs. These Kaleidoscope tokens (KD) provide a mechanism for global attention at the same computational cost as a patch-based approach. To showcase this development, we replace CNN components in a well-known CS-MRI neural network with TNN blocks and demonstrate the improvements afforded by KD. We also propose an ensemble of image tokens, which enhances overall image quality and reduces model size. Supplementary material is available at: https://github.com/uqmarlonbran/TCS.git
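The abstract's central claim, global tokens at the same computational cost as local patch tokens, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation (see the linked repository for that): it assumes a Kaleidoscope-style decomposition behaves like strided subsampling (pixel-unshuffle), so that each token is a low-resolution view of the entire image, whereas a ViT-style patch token sees only a local window. The function names `patch_tokens` and `kaleidoscope_tokens` and the factor choices are illustrative.

```python
import torch
import torch.nn.functional as F

def patch_tokens(img: torch.Tensor, p: int) -> torch.Tensor:
    # ViT-style tokenisation: non-overlapping p x p patches.
    # Each token only ever sees a local p x p neighbourhood.
    tokens = F.unfold(img, kernel_size=p, stride=p)   # (B, C*p*p, (H/p)*(W/p))
    return tokens.transpose(1, 2)                     # (B, num_tokens, C*p*p)

def kaleidoscope_tokens(img: torch.Tensor, nu: int) -> torch.Tensor:
    # Hypothetical Kaleidoscope-style decomposition: pixel-unshuffle
    # groups every nu-th pixel, so each of the nu*nu resulting channels
    # is a low-resolution subsampled view of the WHOLE image.
    sub = F.pixel_unshuffle(img, nu)                  # (B, C*nu*nu, H/nu, W/nu)
    # Flatten each low-resolution sub-image into one global token.
    return sub.flatten(2)                             # (B, C*nu*nu, (H/nu)*(W/nu))

img = torch.randn(1, 1, 256, 256)   # e.g. a single-channel MR slice
pt = patch_tokens(img, 16)          # (1, 256, 256): 256 local tokens of dim 256
kt = kaleidoscope_tokens(img, 16)   # (1, 256, 256): 256 global tokens of dim 256
print(pt.shape, kt.shape)
```

With these settings both tokenisations yield 256 tokens of dimension 256, so a Transformer attending over them pays the same cost; the difference is that every Kaleidoscope token spans the full image, which is what enables global attention.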