This course is designed to empower your organisation to fully harness the transformative potential of Google’s Vertex AI and generative AI (Gen AI) technologies, with a strong emphasis on security.
Tailored for AI practitioners and security engineers, it provides targeted knowledge and hands-on skills to navigate and adopt AI safely and effectively.
Participants will gain practical insights and develop a security-conscious approach, ensuring a secure and responsible integration of Gen AI within their organisation.
This Vertex AI and Generative AI Security course is available as a private session that can be delivered via Virtual Classroom, at our training centre in The Shard, London, or at a location of your choice across the UK.
Course overview
Who should attend:
This course is designed for AI practitioners, security professionals, and cloud architects.
What you'll learn:
By the end of this course, you will be able to:
- Establish foundational knowledge of Vertex AI and its security challenges
- Implement identity and access control measures to restrict access to Vertex AI resources
- Configure encryption strategies and protect sensitive information (see the sketch after this list)
- Enable logging, monitoring, and alerting for real-time security oversight of Vertex AI operations
- Identify and mitigate unique security threats associated with generative AI
- Implement best practices for securing data sources and responses within Retrieval-Augmented Generation (RAG) systems
- Establish foundational knowledge of AI safety
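As a taster of the encryption objective above, the snippet below is a minimal sketch (the project, region, bucket, and key-ring names are hypothetical) showing how the Vertex AI Python SDK can be initialised with a customer-managed encryption key (CMEK) so that newly created resources use your own Cloud KMS key rather than Google-managed keys. The course covers when and how to apply this in practice.

```python
from google.cloud import aiplatform

# Hypothetical project, region, and Cloud KMS key names for illustration only.
PROJECT_ID = "my-project"
LOCATION = "europe-west2"
KMS_KEY = (
    f"projects/{PROJECT_ID}/locations/{LOCATION}/"
    "keyRings/vertex-keyring/cryptoKeys/vertex-key"
)

# Initialising the SDK with a CMEK means resources created in this session
# (datasets, models, endpoints) default to encryption with your key.
aiplatform.init(
    project=PROJECT_ID,
    location=LOCATION,
    encryption_spec_key_name=KMS_KEY,
)

# Example: a dataset created now inherits the CMEK setting by default.
dataset = aiplatform.TabularDataset.create(
    display_name="secure-training-data",
    gcs_source=["gs://my-bucket/training.csv"],
)
```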
Prerequisites
To get the most out of this course, participants should have a fundamental knowledge of machine learning, in particular generative AI, and a basic understanding of security on Google Cloud.
Course agenda
- Google Cloud Security
- Vertex AI components
- Vertex AI security concerns
- Control access with Identity and Access Management (IAM)
- Simplify permissions using organisation hierarchies and policies
- Use service accounts for least-privilege access
- Data encryption
- Protecting sensitive data
- VPC Service Controls
- Disaster recovery planning
- Network security
- Securing model endpoints
- Logging
- Monitoring
- Overview of Gen AI security risks
- Overview of AI safety
- Prompt security
- LLM safeguards
- Testing Gen AI model responses
- Evaluating model responses
- Fine-tuning LLMs
- Fundamentals of Retrieval-Augmented Generation
- Security in RAG systems
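To give a flavour of the final agenda item, the sketch below (plain Python, with hypothetical document and group names) shows one common pattern for securing a RAG pipeline: filtering retrieved chunks against the caller's entitlements before they are assembled into the prompt, so the model never sees content the user is not authorised to read. The course explores this and other safeguards on Vertex AI in more depth.

```python
from dataclasses import dataclass, field


@dataclass
class Chunk:
    """A retrieved document chunk with document-level access metadata."""
    text: str
    source: str
    allowed_groups: set[str] = field(default_factory=set)


def filter_by_access(chunks: list[Chunk], user_groups: set[str]) -> list[Chunk]:
    """Keep only the chunks the calling user's groups are entitled to see."""
    return [c for c in chunks if c.allowed_groups & user_groups]


def build_prompt(question: str, chunks: list[Chunk]) -> str:
    """Assemble a grounded prompt from the (already filtered) context."""
    context = "\n\n".join(f"[{c.source}] {c.text}" for c in chunks)
    return (
        "Answer using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )


# Usage: retrieval results are filtered against the caller's groups
# *before* prompt assembly, so unauthorised content never reaches the model.
retrieved = [
    Chunk("Q3 revenue was £4.2m.", "finance/q3-results.pdf", {"finance"}),
    Chunk("The VPN gateway address is vpn.example.com.", "it/network.md", {"it"}),
]
prompt = build_prompt(
    "What was Q3 revenue?",
    filter_by_access(retrieved, user_groups={"it"}),
)
print(prompt)  # Contains only the IT chunk; the finance chunk is withheld.
```

The design choice illustrated here is enforcing access control at retrieval time rather than relying on the model to withhold information, which is one of the practices examined in the RAG security module.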