A Survey on Hyperparameter Optimization of Machine Learning Models
Hyperparameters
Computer Science
Machine Learning
Artificial Intelligence
Authors
Mônica Mônica, Parul Agrawal
Identifier
DOI: 10.1109/icdt61202.2024.10489732
Abstract
Hyperparameters in machine learning are variables that are set before training begins and regulate various aspects of the learning algorithm's behavior. In contrast to model parameters, which are learned from data during training, hyperparameters are external factors that influence how the model discovers and generalizes patterns in the data. The choice of hyperparameters can have a considerable impact on a model's performance, convergence rate, and ability to avoid overfitting, so an effective hyperparameter optimization strategy can improve a machine learning model's performance. This paper reviews a number of hyperparameter optimization techniques for different machine learning models, including grid search, random search, Bayesian optimization, and genetic algorithms. In addition, the evaluation tools and datasets used in the literature for hyperparameter optimization of machine learning models are examined. The machine learning models whose hyperparameters are optimized in the literature, such as SVM, Naïve Bayes, KNN, and ANN, are also reported.
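To make the contrast between two of the surveyed techniques concrete, the sketch below compares grid search (exhaustive evaluation of a fixed grid) with random search (uniform sampling of the same ranges) over two SVM-style hyperparameters, `C` and `gamma`. The `validation_score` function is a toy stand-in assumed here for illustration; in a real setting it would train a model with the given hyperparameters and score it on held-out data.

```python
import itertools
import random

def validation_score(C, gamma):
    # Toy surrogate for "train a model and evaluate it" (an assumption
    # for illustration); it peaks at C = 1.0, gamma = 0.1.
    return -((C - 1.0) ** 2 + (gamma - 0.1) ** 2)

def grid_search(C_values, gamma_values):
    # Exhaustively evaluate every combination on the grid and keep the best.
    return max(itertools.product(C_values, gamma_values),
               key=lambda cg: validation_score(*cg))

def random_search(C_range, gamma_range, n_trials, seed=0):
    # Sample hyperparameter combinations uniformly at random; with the
    # same evaluation budget, this often covers each axis more densely
    # than a grid does.
    rng = random.Random(seed)
    trials = [(rng.uniform(*C_range), rng.uniform(*gamma_range))
              for _ in range(n_trials)]
    return max(trials, key=lambda cg: validation_score(*cg))

best_grid = grid_search([0.1, 1.0, 10.0], [0.01, 0.1, 1.0])
best_rand = random_search((0.1, 10.0), (0.01, 1.0), n_trials=20)
print("grid search best:", best_grid)      # → (1.0, 0.1)
print("random search best:", best_rand)
```

Grid search here costs 3 × 3 = 9 evaluations and can only return points on the grid, while random search explores the continuous ranges; Bayesian optimization and genetic algorithms, also covered in the survey, additionally use past evaluations to guide where to sample next.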