In this course, you will learn to:
- Describe machine learning operations (MLOps)
- Understand the key differences between DevOps and MLOps
- Describe the machine learning workflow
- Discuss the importance of communications in MLOps
- Explain end-to-end options for automating ML workflows
- List key Amazon SageMaker features for MLOps automation
- Build an automated ML process that builds, trains, tests, and deploys models
- Build an automated ML process that retrains the model when the model code changes
- Identify elements and important steps in the deployment process
- Describe the items that might be included in a model package and how they are used in training or inference
- Recognize Amazon SageMaker options for selecting models for deployment, including support for ML frameworks, built-in algorithms, and bring-your-own models
- Differentiate scaling in machine learning from scaling in other applications
- Determine when to use different approaches to inference
- Discuss deployment strategies, benefits, challenges, and typical use cases
- Describe the challenges when deploying machine learning to edge devices
- Recognize important Amazon SageMaker features that are relevant to deployment and inference
- Describe why monitoring is important
- Detect data drift in the underlying input data
- Demonstrate how to monitor ML models for bias
- Explain how to monitor model resource consumption and latency
- Discuss how to integrate human-in-the-loop reviews of model results in production