Computer science
Deep learning
Artificial intelligence
Popularity
Machine learning
Data science
Domain (mathematical analysis)
Convolutional neural network
Scope (computer science)
Architecture
Mathematics
Programming language
Art
Visual arts
Psychology
Social psychology
Mathematical analysis
Authors
Aditya Khamparia,Karan Mehtab Singh
Abstract
The amount of digital data in the universe is growing at an exponential rate, doubling every 2 years, and changing how we live in the world. Information storage capacity and data requirements have crossed the zettabyte scale. With data bombarding machine learning techniques at this level, it becomes very difficult to carry out parallel computations. Deep learning is broadening its scope and gaining popularity in natural language processing, feature extraction and visualization, and almost every machine learning trend. The purpose of this study is to provide a brief review of deep learning architectures and how they work. Research papers and conference proceedings from authoritative sources (Institute of Electrical and Electronics Engineers, Wiley, Nature, and Elsevier) are studied and analyzed. Different architectures and their effectiveness in solving domain-specific problems are evaluated. Limitations and open problems of current architectures are discussed to provide better insights and help researchers and students pursue research on these issues. One hundred one articles were reviewed for this meta-analysis of deep learning. From this analysis, it is concluded that advanced deep learning architectures are combinations of a few conventional architectures. For example, the deep belief network and the convolutional neural network are combined to build the convolutional deep belief network, which has higher capabilities than its parent architectures. These combined architectures are more robust in exploring the problem space and thus may be the answer to building a general-purpose architecture.
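The abstract's example of combining architectures (a convolutional deep belief network built from a CNN and a DBN) rests on the convolution operation that the CNN side contributes. As a minimal illustrative sketch, not taken from the reviewed paper, the core 2D convolution (technically cross-correlation, as used in most deep learning libraries) can be written in plain NumPy; the image, kernel, and function name here are all assumptions for illustration:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Valid (no-padding) 2D cross-correlation: slide the kernel over the
    image and take the elementwise product-sum at each position."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy input: left half 0, right half 1, so there is a vertical edge.
image = np.zeros((4, 4))
image[:, 2:] = 1.0
# A kernel that responds to vertical intensity changes.
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])
feature_map = conv2d_valid(image, kernel)
```

The resulting 3x3 feature map is nonzero only in the column where the edge lies, which is the sense in which convolutional layers act as learned feature extractors.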