Computer Science
Artificial Intelligence
Machine Learning
Transfer Learning
Task (Project Management)
Domain (Mathematics)
Feature (Linguistics)
Pattern Recognition (Psychology)
Engineering
Mathematics
Linguistics
Philosophy
Systems Engineering
Pure Mathematics
Authors
Ramdhan Wibawa, Rosyadi Rosyadi, M. Nancy, Raden Irfani Hasya Fulki
Identifier
DOI: 10.2523/iptc-23026-ea
Abstract
The dynamometer card is one of the vital surveillance tools for Sucker Rod Pump (SRP) performance monitoring in the Duri field. Although the field produces a massive number of cards, they come with no label or interpretation of the pump condition based on the card shape. Self-supervised learning (SSL) uses a pretext task to train feature extractors on unlabeled data, in contrast to supervised learning, which requires extensive data labeling that is time-consuming and costly. This paper evaluates the performance of a feature extractor, AlexNet, trained with several pretext task techniques. The study used around 660,000 unlabeled cards, while a small amount of labeled data was used for evaluation under the linear evaluation protocol. The results showed that AlexNet trained with Pretext-Invariant Representation Learning (PIRL) using the jigsaw pretext task outperformed the ImageNet pre-trained model by 6%. Further fine-tuning with labeled data achieved 93% accuracy. The model was also tested on fresh data, and the results were compared to the experts' interpretation. This approach can potentially add more types of rod pump problems to detect in the Duri field with considerable precision. In addition, the new approach could improve on the current method by detecting more SRPs with valve leaking problems.
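As a rough illustration of the linear evaluation protocol mentioned in the abstract, the sketch below freezes a PyTorch AlexNet backbone (standing in for the PIRL/jigsaw pre-trained feature extractor) and trains only a linear classifier on top of its features. The checkpoint path, class count, and batch are hypothetical placeholders, not the authors' implementation.

```python
# Minimal linear-evaluation sketch: frozen AlexNet backbone + trainable linear head.
# Assumptions: PyTorch/torchvision available; checkpoint name and class count are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

NUM_CARD_CLASSES = 5  # hypothetical number of pump-condition classes

backbone = models.alexnet(weights=None)
# state = torch.load("pirl_jigsaw_alexnet.pt")  # hypothetical SSL checkpoint
# backbone.load_state_dict(state)

# Freeze the backbone so only the linear head receives gradient updates.
for p in backbone.parameters():
    p.requires_grad = False

# Replace the classifier with a single linear layer over the flattened conv features.
feature_dim = backbone.classifier[1].in_features  # 256 * 6 * 6 = 9216 for AlexNet
backbone.classifier = nn.Linear(feature_dim, NUM_CARD_CLASSES)

optimizer = torch.optim.SGD(backbone.classifier.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch of card images rendered as 224x224 RGB.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CARD_CLASSES, (8,))
logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

In this protocol, classification accuracy of the frozen-feature linear head serves as the proxy for the quality of the self-supervised representation; the later fine-tuning stage described in the abstract would instead unfreeze the backbone and train it end to end on the labeled cards.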