Automatic summarization
Transformer
Computer science
Artificial intelligence
Data mining
Natural language processing
Engineering
Electrical engineering
Voltage
Authors
Abdelhalim A. Saadi, Hacene Belhadef, Akram Guessas, Oussama Hafirassou
Abstract
This study evaluates the performance of transformer-based models such as BERT, RoBERTa, and XLNet for fake news detection. Using supervised and unsupervised deep learning techniques, we optimized classification accuracy while reducing computational costs through text summarization. The results show that RoBERTa, fine-tuned with summarized content, achieves 98.39% accuracy, outperforming the other models. Additionally, we assessed AI-generated misinformation using GPT-2, confirming that transformer models effectively distinguish real from synthetic news. We utilized the GPT-2 model instead of more recent models like GPT-4, as our objective was to generate fake news locally and compare it with pretrained models from the same time period.
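The pipeline the abstract describes (summarize each article first, then fine-tune a transformer classifier on the summaries) can be illustrated with a minimal sketch. This is not the authors' code: the summarization model (facebook/bart-large-cnn), the roberta-base checkpoint, the sequence lengths, the hyperparameters, and the two inline example articles are all assumptions standing in for the paper's actual setup and dataset.

```python
# Minimal sketch (assumptions throughout): summarize articles to shorten inputs,
# then fine-tune RoBERTa for binary fake/real news classification.
import torch
from transformers import (
    pipeline,
    RobertaTokenizerFast,
    RobertaForSequenceClassification,
    TrainingArguments,
    Trainer,
)

# 1) Summarize each article to cut sequence length (and thus compute cost).
#    The summarizer checkpoint is an assumption, not the one used in the paper.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize(text: str) -> str:
    # truncation=True guards against inputs longer than the summarizer's limit.
    return summarizer(text, max_length=128, min_length=32, truncation=True)[0]["summary_text"]

# 2) Tokenize the summaries for RoBERTa (binary labels: 0 = real, 1 = fake).
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

class NewsDataset(torch.utils.data.Dataset):
    """Wraps summarized texts and integer labels for the Trainer."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=256)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

# 3) Fine-tune with the standard Trainer API; the toy examples below stand in
#    for a real labeled fake-news corpus.
train_texts = [summarize(t) for t in ["example real article ...", "example fake article ..."]]
train_ds = NewsDataset(train_texts, [0, 1])

args = TrainingArguments(output_dir="roberta-fakenews",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```

The synthetic-news evaluation mentioned in the abstract could be reproduced in the same spirit by generating articles with a text-generation pipeline over the gpt2 checkpoint and scoring them with the fine-tuned classifier; the abstract does not specify the authors' exact generation settings, so any such script would likewise be an approximation.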