Scalability
Computer Science
Traveling Salesman Problem
Heuristic
Transformer
Pairwise Comparison
Deep Learning
Graph
Artificial Intelligence
Distributed Computing
Theoretical Computer Science
Machine Learning
Algorithm
Voltage
Physics
Quantum Mechanics
Database
Operating System
Authors
Chunsheng Zhao, Li-Pei Wong
Source
Journal: PLOS ONE
[Public Library of Science]
Date: 2025-04-07
Volume/Issue: 20 (4): e0319711
Identifier
DOI: 10.1371/journal.pone.0319711
Abstract
Leveraging the Transformer architecture to develop end-to-end models for combinatorial optimization problems (COPs) has shown significant potential due to its exceptional performance. Nevertheless, many COPs, including the Traveling Salesman Problem (TSP), display typical graph-structure characteristics that existing Transformer-based models have not effectively utilized. Hence, this study focuses on TSP and introduces two enhancements, namely closeness centrality encoding and spatial encoding, to strengthen the Transformer encoder's capacity to capture the structural features of TSP graphs. Furthermore, by integrating a decoding mechanism that not only emphasizes the starting and most recently visited nodes but also leverages all previously visited nodes to capture the dynamic evolution of tour generation, a Transformer-based structure-aware model is developed for solving TSP. Trained with deep reinforcement learning, the proposed model achieves deviation rates of 0.03%, 0.16%, and 1.13% for 20-node, 50-node, and 100-node TSPs, respectively, relative to the Concorde solver. It consistently surpasses classic heuristics, OR-Tools, and various comparative learning-based approaches in multiple scenarios while striking a remarkable balance between time efficiency and solution quality. Extensive tests validate the effectiveness of the improvement mechanisms, underscore the significant impact of graph-structure information on solving TSP with deep neural networks, and reveal the model's scalability and its limitations.
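The closeness centrality encoding mentioned in the abstract can be illustrated concretely. On a complete Euclidean TSP graph, edge weights satisfy the triangle inequality, so the shortest path between any two nodes is the direct edge, and closeness centrality reduces to a simple distance sum. The sketch below is an illustrative assumption about how such a feature could be computed per node before being fed to an encoder; the function name and the toy instance are hypothetical, not taken from the paper.

```python
import math

def closeness_centrality(coords):
    """Closeness centrality of each node in a complete Euclidean TSP graph.

    Because Euclidean distances obey the triangle inequality, the shortest
    path between two nodes is the direct edge, so closeness is simply
    (n - 1) divided by the sum of distances to all other nodes.
    """
    n = len(coords)
    cent = []
    for i, (xi, yi) in enumerate(coords):
        total = sum(math.hypot(xi - xj, yi - yj)
                    for j, (xj, yj) in enumerate(coords) if j != i)
        cent.append((n - 1) / total)
    return cent

# A hypothetical 4-node instance: three corners and one central node.
coords = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]
cent = closeness_centrality(coords)
# The node nearest the centroid, (0.5, 0.5), attains the highest closeness.
```

In a structure-aware encoder, such per-node scalars would typically be embedded (e.g. via a learned projection or bucketed lookup) and added to the node's input representation, analogously to positional encodings in the original Transformer.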