Support vector machine
Computer science
Machine learning
Artificial intelligence
Power (physics)
Quantum mechanics
Physics
Authors
K. Saravanan, R. Banu Prakash, C. Balakrishnan, Gade Venkata Prasanna Kumar, R. Siva Subramanian, M. Anita
Identifiers
DOI:10.1109/icimia60377.2023.10426542
Abstract
Support vector machines (SVMs) have become a cornerstone of machine learning owing to their effectiveness in classification and regression problems. This article provides an in-depth survey of SVMs. It begins with the history of SVMs, from their original formulation through the key contributions researchers have made over time. It then presents the mathematical and theoretical foundations of SVMs, including margins, support vectors, and the underlying optimization problems, and reviews notable variants such as multi-class SVMs, support vector regression, one-class SVMs, and twin SVMs. Kernel functions, a central component of SVMs, receive particular attention: the article explains how they implicitly transform data so that SVMs can separate it more effectively. The article then examines real-world applications of SVMs in image recognition, natural language processing, bioinformatics, and finance. Although SVMs have been studied for decades, they remain highly relevant; the article therefore summarizes the most important findings about them and considers how SVMs may continue to evolve as machine learning advances. Overall, the goal is to give the reader a thorough understanding of support vector machines: their origins, technical details, applications, and future directions.
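The kernel idea mentioned in the abstract can be illustrated with a small, self-contained sketch (not taken from the article itself; the function names and sample vectors are hypothetical): a degree-2 polynomial kernel evaluated on the original inputs gives the same value as an ordinary dot product after an explicit quadratic feature map, which is why SVMs can work in a higher-dimensional space without ever constructing it.

```python
import math

def poly_kernel(x, z):
    # Degree-2 polynomial kernel: (x . z + 1)^2, computed directly
    # on the original low-dimensional inputs (the "kernel trick").
    return (sum(a * b for a, b in zip(x, z)) + 1) ** 2

def phi(x):
    # Explicit quadratic feature map for a 2-D input; the sqrt(2)
    # factors make phi(x) . phi(z) match the kernel value exactly.
    x1, x2 = x
    s = math.sqrt(2)
    return [1.0, s * x1, s * x2, x1 * x1, x2 * x2, s * x1 * x2]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, z = [1.0, 2.0], [3.0, 0.5]
print(poly_kernel(x, z))        # kernel value on 2-D inputs
print(dot(phi(x), phi(z)))      # same value via the 6-D feature map
```

Both computations agree (up to floating-point rounding), but the kernel form never materializes the six-dimensional feature vectors, which is the efficiency gain the kernel trick provides.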