Computer Science
Automatic Summarization
Natural Language Processing
Information Retrieval
Artificial Intelligence
Authors
Haopeng Zhang, Philip S. Yu, Jiawei Zhang
Abstract
Text summarization research has undergone several significant transformations with the advent of deep neural networks, pre-trained language models (PLMs), and recent large language models (LLMs). This survey thus provides a comprehensive review of the research progress and evolution in text summarization through the lens of these paradigm shifts. It is organized into two main parts: (1) a detailed overview of datasets, evaluation metrics, and summarization methods before the LLM era, encompassing traditional statistical methods, deep learning approaches, and PLM fine-tuning techniques, and (2) the first detailed examination of recent advancements in benchmarking, modeling, and evaluating summarization in the LLM era. By synthesizing the existing literature into a cohesive overview, this survey also discusses research trends and open challenges, and proposes promising research directions in summarization, aiming to guide researchers through the evolving landscape of summarization research.