Interpretation (philosophy)
Representation (politics)
Geology
Seismology
Image (mathematics)
Computer science
Artificial intelligence
Political science
Politics
Programming language
Law
Authors
Nam Trung Pham, Haibin Di, Tao Zhao, Aria Abubakar
Source
Journal: The Leading Edge
Publisher: Society of Exploration Geophysicists
Date: 2025-02-01
Volume/issue: 44 (2): 96-106
Identifiers
DOI: 10.1190/tle44020096.1
Abstract
In this paper, we propose training a representation model (denoted SeisBERT) on a large database of migrated seismic images and applying it to extract features that accelerate machine learning-based seismic interpretation downstream tasks in new seismic volumes. More specifically, SeisBERT has a vision transformer architecture, and it is trained by self-supervised learning using masked image modeling. Our model is not exactly the original BERT for 1D sequences; it is a bidirectional transformer for 2D seismic data that treats each 2D patch of a seismic image as an element of a sequence. We demonstrate the versatility of SeisBERT in multiple downstream tasks, including seismic image similarity, facies classification, salt-body detection, fault detection, and image conditioning, in seismic volumes that are not included in the training database of SeisBERT. Improvements are observed in both prediction accuracy and generalization, compared with results from the baseline models trained on each specific task.
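As an illustrative sketch only, not the authors' implementation, the snippet below shows what a masked-image-modeling pretraining step for a vision transformer over 2D seismic patches can look like, following the description in the abstract: a seismic image is split into 2D patches that form a token sequence, a random subset of patch tokens is replaced by a learnable mask token, a bidirectional transformer encoder processes the full sequence, and the reconstruction loss is computed only on the masked patches. All names and hyperparameters (PatchMIM, patch_size, mask_ratio, and so on) are hypothetical.

```python
# Illustrative sketch only, not the authors' code: minimal masked-image-modeling
# pretraining for 2D seismic patches with a bidirectional transformer encoder.
import torch
import torch.nn as nn


class PatchMIM(nn.Module):
    def __init__(self, img_size=128, patch_size=16, dim=256, depth=6, heads=8, mask_ratio=0.6):
        super().__init__()
        self.patch_size = patch_size
        self.mask_ratio = mask_ratio
        num_patches = (img_size // patch_size) ** 2
        patch_dim = patch_size * patch_size                    # single-channel seismic amplitudes
        self.embed = nn.Linear(patch_dim, dim)                 # linear patch embedding
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)  # bidirectional self-attention
        self.head = nn.Linear(dim, patch_dim)                  # reconstruct raw patch pixels

    def patchify(self, x):
        # (B, 1, H, W) -> (B, N, patch_size * patch_size)
        p = self.patch_size
        b = x.shape[0]
        return x.unfold(2, p, p).unfold(3, p, p).reshape(b, -1, p * p)

    def forward(self, x):
        patches = self.patchify(x)                             # (B, N, p*p)
        tokens = self.embed(patches)                           # (B, N, dim)
        b, n, _ = tokens.shape
        mask = torch.rand(b, n, device=x.device) < self.mask_ratio
        tokens = torch.where(mask.unsqueeze(-1),               # hide the masked patches
                             self.mask_token.expand(b, n, -1), tokens)
        recon = self.head(self.encoder(tokens + self.pos))     # predict pixels of every patch
        return ((recon - patches) ** 2)[mask].mean()           # score only the masked ones


model = PatchMIM()
loss = model(torch.randn(4, 1, 128, 128))  # a batch of synthetic 2D seismic tiles
loss.backward()
```

After pretraining of this kind, the encoder weights would typically be reused as a frozen or fine-tuned feature extractor for downstream tasks such as facies classification or fault detection, which is the role the abstract describes for SeisBERT.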