Objectives – Vertex AI Custom Model Hyperparameter and Deployment
In the previous chapter, we started working on the custom model building on Google Cloud Platform (GCP) using Vertex AI components. In this chapter, we will see how to create a custom job with hyperparameter tuning, and how to initiate the training job using Python code.
In this chapter, we will cover the following topics:
- Hyperparameters in machine learning
- Working of hyperparameter tuning
- Vertex AI Vizier
- Data for building custom model
- Creation of workbench
- Creation of Dockerfile
- Model building code
- Image creation
- Submitting the custom model training job
- Completion of custom model training job
- Model importing
- Model deployment and predictions
- Submitting training job with Python SDK
- Deletion of resources
By the end of this chapter, you will be able to create custom jobs with hyperparameter tuning using both the Graphical User Interface (GUI) and the Python SDK.
Hyperparameters in machine learning
Hyperparameters are high-level parameters whose values regulate the learning process and, in doing so, shape the model parameters that the learning algorithm ultimately learns.
Before model training can even begin, data scientists must choose and configure the hyperparameter values that the learning algorithm will use. In this sense, hyperparameters exist outside the model: their values cannot be altered during the learning or training process.
The learning algorithm uses hyperparameters while it learns, but they are not included in the model it produces. At the end of the learning process, we have the trained model parameters, which are, in a practical sense, what we mean when we talk about the model. The hyperparameters that were tuned during training are not part of it: given only the model, we can see the parameters it learned, but we cannot determine, for example, the hyperparameter values that were used to train it.
Examples of hyperparameters include the learning rate, the number of hidden layers in a neural network, and `n_estimators`, `max_depth`, and `max_features` for tree-based algorithms.
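The separation described above can be sketched in plain Python. This is a minimal, hypothetical illustration (the `train` function and its returned values are invented for the example, not part of any library): hyperparameters go in before training, and only learned parameters come out, so the hyperparameters cannot be recovered from the model itself.

```python
# Hyperparameters: chosen and fixed by the data scientist *before* training.
hyperparameters = {
    "learning_rate": 0.01,  # step size of each optimization update
    "n_estimators": 100,    # number of trees in a tree ensemble
    "max_depth": 5,         # maximum depth of each tree
}

def train(data, hp):
    """Hypothetical training routine: it consumes the hyperparameters
    while learning, but returns only the learned model parameters."""
    # ...a real learning loop would use hp["learning_rate"], etc. ...
    return {"weights": [0.4, -1.2], "bias": 0.7}  # illustrative values

model = train([(1.0, 2.0)], hyperparameters)

# The trained model holds parameters only; the hyperparameter values
# used to produce it are not stored anywhere in the model.
print("learning_rate" in model)  # False
print("weights" in model)        # True
```

The point of the sketch is the asymmetry: `hyperparameters` configures `train`, but nothing in `model` records it.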
Parameters, on the other hand, are model-specific. They are learned or inferred solely from the training data as the algorithm attempts to map input features to labels or targets.
Model training begins by initializing the parameters, either to random values or to zeros. During training, an optimization algorithm such as gradient descent updates these starting values. As learning progresses, the learning algorithm updates the parameter values, but never the hyperparameters.
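This update loop can be made concrete with a minimal gradient descent sketch (plain Python, written for this example) that fits `y = w * x` to toy data. The learning rate is a hyperparameter, fixed before the loop starts and never touched inside it, while the parameter `w` is initialized and then updated on every step.

```python
# Toy data following the true relationship y = 2 * x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

learning_rate = 0.05  # hyperparameter: set before training, never updated
w = 0.0               # parameter: initialized, then learned from the data

for epoch in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad  # parameter update; learning_rate is untouched

print(round(w, 3))  # converges close to the true value 2.0
```

After training, `w` (a parameter, like a neural network weight) holds what was learned; `learning_rate` is exactly the value we set at the start.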
Examples of parameters are weights and biases of a neural network and the cluster centroids in clustering.