Traditional MLOps is a set of practices for productionizing machine learning systems in enterprise applications. Generative AI raises a new set of challenges in managing and productionizing applications at scale, and the field of generative AI operations, or GenAIOps, seeks to address them.
In this one-day course, you'll learn what it takes to manage the experimentation and tuning of your LLMs, and explore how to deploy, test and maintain your LLM-powered applications.
Toward the end of the session, you will discuss best practices for logging and monitoring your LLM-powered applications in production.
Our Generative AI in Production course is available as a private training session that can be delivered virtually or at a location of your choice in the US.
Course overview
Who should attend:
This beginner-level course is suitable for:
- Developers and machine learning engineers who wish to operationalize GenAI-based applications.
What you'll learn:
By the end of this course, you will be able to:
- Describe the challenges of productionizing applications that use generative AI
- Manage experimentation and evaluation for LLM-powered applications
- Productionize LLM-powered applications
- Implement logging and monitoring for LLM-powered applications
Prerequisites
To get the most out of this course, you should have completed our Application Development with LLMs on Google Cloud course, or have equivalent knowledge of the subject matter.
Course agenda
- Traditional MLOps vs. GenAIOps
- Generative AI operations
- Components of an LLM system
- Datasets and prompt engineering
- RAG and ReAct architectures
- LLM evaluation (metrics and frameworks)
- Tracking experiments
- Lab: Evaluating the ROUGE-L text similarity metric (see the first sketch below)
- Deployment, packaging and versioning (GenAIOps)
- Testing LLM systems (unit and integration)
- Maintenance and updates (operations)
- Lab: Unit testing generative AI applications (see the second sketch below)
- Cloud Logging
- Prompt versioning, evaluation and generalization
- Monitoring for evaluation-serving skew
- Continuous validation
- Lab: Using model monitoring for benchmarking, automated evaluation and training-prediction skew
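To give a flavor of the first lab: ROUGE-L scores a candidate text against a reference via their longest common subsequence (LCS). The sketch below is a minimal from-scratch illustration of the metric, not the lab's actual code; real evaluations would typically use a library such as rouge-score.

```python
# Minimal ROUGE-L sketch: LCS-based F1 between a reference and a candidate.
# Illustrative only; not the course's lab code.

def lcs_length(a: list[str], b: list[str]) -> int:
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, tok_a in enumerate(a):
        for j, tok_b in enumerate(b):
            if tok_a == tok_b:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l(reference: str, candidate: str) -> float:
    """ROUGE-L F1: harmonic mean of LCS-based precision and recall."""
    ref, cand = reference.lower().split(), candidate.lower().split()
    lcs = lcs_length(ref, cand)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

print(rouge_l("the cat sat on the mat", "the cat lay on the mat"))  # ~0.83
```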
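Similarly, for the unit-testing lab: because LLM output is nondeterministic, unit tests for LLM-powered applications typically stub out the model call and assert on the surrounding application logic. The summarize helper and FakeModel below are hypothetical names used for illustration.

```python
# Hypothetical pytest-style sketch: test the application logic around an LLM
# call by substituting a fake model, so no live endpoint is needed.

def summarize(text: str, model) -> str:
    """App code under test: builds a prompt and cleans up the model's reply."""
    prompt = f"Summarize in one sentence:\n{text}"
    return model.generate(prompt).strip()

class FakeModel:
    """Test double that records the prompt and returns a canned response."""
    def __init__(self, canned_response: str):
        self.canned_response = canned_response
        self.last_prompt = None

    def generate(self, prompt: str) -> str:
        self.last_prompt = prompt
        return self.canned_response

def test_summarize_builds_prompt_and_trims_output():
    model = FakeModel("  A short summary.  ")
    assert summarize("Long article text...", model) == "A short summary."
    assert "Summarize in one sentence" in model.last_prompt
```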