Hyperparameter tuning

In machine learning, hyperparameter optimization, or tuning, is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process. By contrast, the values of other parameters (typically node weights) are learned from the data.
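The distinction can be made concrete with a minimal sketch: below, the weight `w` is a parameter learned from the data by gradient descent, while the learning rate is a hyperparameter fixed before training begins. The function name `fit` and the toy data are hypothetical, chosen only for illustration.

```python
# Minimal sketch: w is a learned parameter; learning_rate is a hyperparameter.
def fit(xs, ys, learning_rate=0.1, steps=100):  # learning_rate: set before training
    w = 0.0                                     # w: learned from the data
    for _ in range(steps):
        # Gradient of mean squared error for the model y = w * x
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= learning_rate * grad
    return w

print(round(fit([1, 2, 3], [2, 4, 6]), 3))  # w ≈ 2.0 for this toy data
```

Changing `learning_rate` changes how (and whether) training converges, but it is never itself adjusted by the training loop; that is what makes it a hyperparameter.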

Hyperparameter tuning optimizes a single target variable, also called the hyperparameter metric, that you specify. A common metric is the accuracy of the model, as calculated from an evaluation pass. The metric must be a numeric value, and you can specify whether tuning should maximize or minimize it. When starting a job with hyperparameter tuning, establish the name of the hyperparameter metric; the default name is training/hptuning/metric.
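As a minimal sketch of this idea, each tuning trial can report one numeric value under the metric name, and the tuner keeps the trial that is best according to the chosen goal. The trial dictionaries and the `best_trial` helper below are hypothetical; only the default metric name comes from the text above.

```python
# Hypothetical sketch: pick the best trial by the reported hyperparameter metric.
def best_trial(trials, goal="maximize"):
    key = lambda t: t["training/hptuning/metric"]  # default metric name
    return max(trials, key=key) if goal == "maximize" else min(trials, key=key)

trials = [
    {"learning_rate": 0.1,  "training/hptuning/metric": 0.84},
    {"learning_rate": 0.01, "training/hptuning/metric": 0.91},
    {"learning_rate": 1.0,  "training/hptuning/metric": 0.62},
]

print(best_trial(trials)["learning_rate"])                    # accuracy: maximize
print(best_trial(trials, goal="minimize")["learning_rate"])   # e.g. loss: minimize
```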

These hyperparameters address model design questions such as:
  • What degree of polynomial features should I use for my linear model?
  • What should be the maximum depth allowed for my decision tree?
  • What should be the minimum number of samples required at a leaf node in my decision tree?
  • How many trees should I include in my random forest?
  • How many neurons should I have in my neural network layer?
  • How many layers should I have in my neural network?
  • What should I set my learning rate to for gradient descent?
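Each of the questions above corresponds to one hyperparameter, so together they define a search space of candidate configurations. The sketch below is a hypothetical illustration: the hyperparameter names and candidate values are assumptions, not taken from any particular library.

```python
import itertools

# Hypothetical search space: each key answers one design question above,
# each value lists the candidate settings to try.
search_space = {
    "poly_degree":   [1, 2, 3],        # degree of polynomial features
    "max_depth":     [3, 5, None],     # maximum depth of a decision tree
    "n_estimators":  [100, 300],       # number of trees in a random forest
    "learning_rate": [0.01, 0.1],      # step size for gradient descent
}

# Every combination is one candidate configuration to train and evaluate.
configs = [dict(zip(search_space, values))
           for values in itertools.product(*search_space.values())]
print(len(configs))  # 3 * 3 * 2 * 2 = 36 candidate models
```

The combinatorial growth visible here (36 models from just four small lists) is why the choice of tuning strategy matters.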

Tuning Strategies

  • Grid search
  • Random search
  • Bayesian optimization
  • Gradient-based optimization
  • Evolutionary optimization
  • Population-based training
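Random search, one of the simplest strategies above, can be sketched in a few lines: sample configurations independently from the search space and keep the best-scoring one. The objective function below is a hypothetical stand-in for validation accuracy, constructed so that it peaks at learning rate 0.1 and depth 5.

```python
import random

# Toy objective standing in for validation accuracy (hypothetical):
# highest at learning_rate = 0.1, max_depth = 5.
def score(lr, depth):
    return -((lr - 0.1) ** 2) - 0.01 * (depth - 5) ** 2

random.seed(0)  # fixed seed so the run is reproducible

# Random search: sample 50 configurations independently.
# Learning rate is drawn log-uniformly, a common choice for scale parameters.
trials = [(10 ** random.uniform(-3, 0), random.randint(1, 10))
          for _ in range(50)]
best = max(trials, key=lambda t: score(*t))
print(best)
```

Unlike grid search, random search does not waste trials on exhaustive combinations of unimportant hyperparameters, which is why it often finds good configurations with far fewer evaluations.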