
Online or onsite, instructor-led live MLOps training courses demonstrate through interactive hands-on practice how to use MLOps tools to automate and optimize the deployment and maintenance of ML systems in production.
MLOps training is available as "online live training" or "onsite live training". Online live training (aka "remote live training") is carried out by way of an interactive, remote desktop. Onsite live MLOps training can be carried out locally on customer premises in Finland or in NobleProg corporate training centers in Finland.
NobleProg -- Your Local Training Provider
Testimonials
I enjoyed participating in the Kubeflow training, which was held remotely. This training allowed me to consolidate my knowledge of AWS services, K8s, and all the DevOps tools around Kubeflow, which are the necessary foundations to properly tackle the subject. I wanted to thank Malawski Marcin for his patience and professionalism during the training and for his advice on best practices. Malawski approaches the subject from different angles and with different deployment tools: Ansible, EKS, kubectl, Terraform. Now I am definitely convinced that I am going into the right field of application.
Guillaume Gautier - OLEA MEDICAL | Improved diagnosis for life™
Course: Kubeflow
Adjusting to our needs
Sumitomo Mitsui Finance and Leasing Company, Limited
Course: Kubeflow
A very, very competent trainer who knows how to adapt to his audience and to solve problems. Interactive presentation.
OLEA MEDICAL
Course: MLflow
The ML ecosystem: not only MLflow but also Optuna, Hyperopt, Docker, and docker-compose.
Guillaume GAUTIER - OLEA MEDICAL
Course: MLflow
MLOps Course Outlines in Finland
- Install and configure Kubernetes, Kubeflow and other needed software on AWS.
- Use EKS (Elastic Kubernetes Service) to simplify the work of initializing a Kubernetes cluster on AWS.
- Create and deploy a Kubernetes pipeline for automating and managing ML models in production (see the pipeline sketch after this outline).
- Train and deploy TensorFlow ML models across multiple GPUs and machines running in parallel.
- Leverage other AWS managed services to extend an ML application.
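To make the pipeline objective concrete, below is a minimal sketch using the Kubeflow Pipelines (KFP) v2 Python SDK. The component body, parameter values, and output path are placeholders, not part of the course material.

```python
# Minimal Kubeflow Pipelines (KFP v2 SDK) sketch: one placeholder training
# component compiled into a pipeline package that Kubeflow can run.
from kfp import dsl, compiler

@dsl.component
def train_model(learning_rate: float) -> str:
    # Placeholder for a real training step (e.g., a TensorFlow job).
    print(f"Training with learning_rate={learning_rate}")
    return "s3://example-bucket/models/demo"  # hypothetical artifact location

@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(learning_rate: float = 0.01):
    train_model(learning_rate=learning_rate)

if __name__ == "__main__":
    # Produces a YAML package that can be uploaded to the Kubeflow Pipelines UI.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```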
- Install and configure Kubernetes, Kubeflow and other needed software on Azure.
- Use Azure Kubernetes Service (AKS) to simplify the work of initializing a Kubernetes cluster on Azure.
- Create and deploy a Kubernetes pipeline for automating and managing ML models in production.
- Train and deploy TensorFlow ML models across multiple GPUs and machines running in parallel.
- Leverage other Azure managed services to extend an ML application.
- Install and configure Kubernetes, Kubeflow and other needed software on GCP and GKE.
- Use GKE (Google Kubernetes Engine) to simplify the work of initializing a Kubernetes cluster on GCP.
- Create and deploy a Kubernetes pipeline for automating and managing ML models in production.
- Train and deploy TensorFlow ML models across multiple GPUs and machines running in parallel (see the multi-GPU training sketch after this outline).
- Leverage other GCP services to extend an ML application.
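The multi-GPU training objective can be sketched with tf.distribute.MirroredStrategy, which replicates a Keras model across the GPUs of a single machine; multi-machine training would typically use MultiWorkerMirroredStrategy or a Kubeflow TFJob instead. The model and data here are toy placeholders.

```python
# Single-machine, multi-GPU training sketch with tf.distribute.MirroredStrategy.
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # uses all visible GPUs (or CPU if none)
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Model and optimizer must be created inside the strategy scope.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Toy data purely for illustration.
x = np.random.rand(1024, 20).astype("float32")
y = np.random.rand(1024, 1).astype("float32")
model.fit(x, y, batch_size=64, epochs=2)
```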
- Install and configure Kubernetes, Kubeflow and other needed software on IBM Cloud Kubernetes Service (IKS).
- Use IKS to simplify the work of initializing a Kubernetes cluster on IBM Cloud.
- Create and deploy a Kubernetes pipeline for automating and managing ML models in production.
- Train and deploy TensorFlow ML models across multiple GPUs and machines running in parallel.
- Leverage other IBM Cloud services to extend an ML application.
- Install and configure various MLOps frameworks and tools.
- Assemble the right kind of team with the right skills for constructing and supporting an MLOps system.
- Prepare, validate and version data for use by ML models.
- Understand the components of an ML Pipeline and the tools needed to build one (a minimal sketch of these stages follows this outline).
- Experiment with different machine learning frameworks and servers for deploying to production.
- Operationalize the entire Machine Learning process so that it is reproducible and maintainable.
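The pipeline-components objective can be illustrated with a framework-agnostic sketch of the typical stages: versioning the data, training, validating against a threshold, and packaging the artifact. The dataset, hash scheme, threshold, and file names are illustrative assumptions only.

```python
# Framework-agnostic sketch of common ML pipeline stages: data versioning,
# training, validation, and packaging. All values and paths are illustrative.
import hashlib
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def version_data(X: np.ndarray, y: np.ndarray) -> str:
    # A content hash stands in for a real data-versioning tool.
    return hashlib.sha256(X.tobytes() + y.tobytes()).hexdigest()[:12]

def train(X_train, y_train):
    return LogisticRegression(max_iter=1000).fit(X_train, y_train)

def validate(model, X_test, y_test, threshold: float = 0.7) -> float:
    acc = accuracy_score(y_test, model.predict(X_test))
    if acc < threshold:
        raise ValueError(f"Accuracy {acc:.2f} below threshold {threshold}")
    return acc

# Toy dataset purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

data_version = version_data(X, y)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = train(X_train, y_train)
accuracy = validate(model, X_test, y_test)
joblib.dump(model, f"model-{data_version}.joblib")  # package the artifact
print(f"data={data_version} accuracy={accuracy:.2f}")
```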
- Install and configure Kubeflow on premise and in the cloud using AWS EKS (Elastic Kubernetes Service).
- Build, deploy, and manage ML workflows based on Docker containers and Kubernetes.
- Run entire machine learning pipelines on diverse architectures and cloud environments (see the run-submission sketch after this outline).
- Use Kubeflow to spawn and manage Jupyter notebooks.
- Build ML training, hyperparameter tuning, and serving workloads across multiple platforms.
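Running a pipeline end to end usually comes down to submitting a compiled package to the cluster's Kubeflow Pipelines endpoint. A minimal sketch is below; the host URL, package file name, and parameters are placeholders for whatever the actual deployment exposes.

```python
# Sketch of submitting a compiled pipeline package to a Kubeflow Pipelines endpoint.
import kfp

# e.g., a port-forwarded ml-pipeline-ui service; replace with the real endpoint.
client = kfp.Client(host="http://localhost:8080")

run = client.create_run_from_pipeline_package(
    pipeline_file="training_pipeline.yaml",   # produced by the KFP compiler
    arguments={"learning_rate": 0.01},
    run_name="demo-run",
)
print("Started run:", run.run_id)
```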
By the end of this training, participants will be able to:
- Install and configure Kubernetes and Kubeflow on an OpenShift cluster.
- Use OpenShift to simplify the work of initializing a Kubernetes cluster.
- Create and deploy a Kubernetes pipeline for automating and managing ML models in production.
- Train and deploy TensorFlow ML models across multiple GPUs and machines running in parallel.
- Call public cloud services (e.g., AWS services) from within OpenShift to extend an ML application.
- Install and configure Kubeflow on premise and in the cloud.
- Build, deploy, and manage ML workflows based on Docker containers and Kubernetes.
- Run entire machine learning pipelines on diverse architectures and cloud environments.
- Use Kubeflow to spawn and manage Jupyter notebooks.
- Build ML training, hyperparameter tuning, and serving workloads across multiple platforms.
- Install and configure MLflow and related ML libraries and frameworks.
- Appreciate the importance of trackability, reproducibility, and deployability of an ML model.
- Deploy ML models to different public clouds, platforms, or on-premise servers.
- Scale the ML deployment process to accommodate multiple users collaborating on a project.
- Set up a central registry to experiment with, reproduce, and deploy ML models (see the tracking and registry sketch after this outline).
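As a minimal illustration of tracking and the model registry, the sketch below logs parameters, a metric, and a scikit-learn model to an MLflow tracking server and registers it. The tracking URI, experiment name, and model name are assumptions; registering a model requires a tracking server backed by a database.

```python
# Minimal MLflow sketch: track parameters and metrics, then log and register a model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://localhost:5000")  # assumed MLflow tracking server
mlflow.set_experiment("demo-experiment")

X, y = load_iris(return_X_y=True)

with mlflow.start_run():
    params = {"C": 0.5, "max_iter": 500}
    model = LogisticRegression(**params).fit(X, y)

    mlflow.log_params(params)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model", registered_model_name="demo-classifier")
```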