COURSE OVERVIEW

  • 3-day course
  • Partner of the Year
  • Private
  • Certificate of Attendance

As a Google Cloud Partner, Jellyfish has been selected to deliver this three-day course, which will help you meet day-to-day data processing needs within your business.

Our expert practitioner will start with the foundations, showing you how Apache Beam and Dataflow work together to meet your data processing needs efficiently without the risk of vendor lock-in.

The section on developing pipelines will show you how to convert your business logic into data processing applications that can run on Dataflow. Toward the end of the session, you’ll focus on operations, reviewing the most important lessons for operating a data application on Dataflow, including monitoring, troubleshooting, testing, and reliability.
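
By way of illustration, here is a minimal Beam pipeline in Python. This is a sketch rather than course material; the project, region, and bucket names are placeholders. The same code runs locally on the DirectRunner or on Dataflow by changing the runner option:

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(
        runner='DirectRunner',        # switch to 'DataflowRunner' to run on Dataflow
        # project='my-project',       # required for DataflowRunner (placeholder)
        # region='europe-west2',
        # temp_location='gs://my-bucket/tmp',
    )

    with beam.Pipeline(options=options) as pipeline:
        (pipeline
         | 'Create' >> beam.Create(['alpha', 'beta', 'gamma'])
         | 'Lengths' >> beam.Map(lambda word: (word, len(word)))
         | 'Print' >> beam.Map(print))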

Our Serverless Data Processing with Dataflow course is available as a private training session that can be delivered via Virtual Classroom, at our training centre in The Shard, or at a location of your choice in the UK.

What you’ll learn

By the end of this course, you will be able to:

  • Demonstrate how Apache Beam and Dataflow work together to fulfill your organization's data processing needs
  • Enable Shuffle and Streaming Engine, for batch and streaming pipelines respectively, for maximum performance
  • Select the right combination of IAM permissions for your Dataflow job
  • Select and tune the I/O of your choice for your Dataflow pipeline
  • Develop a Beam pipeline using SQL and DataFrames
  • Summarize the benefits of the Beam Portability Framework and enable it for your Dataflow pipelines
  • Enable Flexible Resource Scheduling for more cost-efficient performance
  • Implement best practices for a secure data processing environment
  • Use schemas to simplify your Beam code and improve the performance of your pipeline
  • Perform monitoring, troubleshooting, testing and CI/CD on Dataflow pipelines

Course agenda

Module 1: Introduction

  • Introduce the course objectives
  • Demonstrate how Apache Beam and Dataflow work together to fulfill your organization's data processing needs

Module 2: Beam Portability

  • Summarize the benefits of the Beam Portability Framework
  • Customize the data processing environment of your pipeline using custom containers
  • Review use cases for cross-language transformations
  • Enable the Beam Portability Framework for your Dataflow pipelines (see the sketch below)
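
A sketch of the pipeline options involved, assuming a recent Beam Python SDK; the container image path is a placeholder:

    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(
        runner='DataflowRunner',
        experiments=['use_runner_v2'],                              # portability framework on Dataflow
        sdk_container_image='gcr.io/my-project/beam-custom:latest', # custom container (placeholder)
    )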

Module 3: Separating Compute & Storage with Dataflow

  • Enable Shuffle and Streaming Engine, for batch and streaming pipelines respectively, for maximum performance
  • Enable Flexible Resource Scheduling for more cost-efficient performance (see the sketch below)
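
A sketch of the relevant options, assuming a recent Beam Python SDK; Streaming Engine applies to streaming jobs and FlexRS to batch jobs:

    from apache_beam.options.pipeline_options import PipelineOptions

    # Streaming pipeline: move shuffle and state handling into the Streaming Engine service.
    streaming_options = PipelineOptions(
        runner='DataflowRunner',
        streaming=True,
        enable_streaming_engine=True,
    )

    # Batch pipeline: Flexible Resource Scheduling trades start-up delay for lower cost.
    batch_options = PipelineOptions(
        runner='DataflowRunner',
        flexrs_goal='COST_OPTIMIZED',
    )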

Module 4: IAM, Quotas & Permissions

  • Select the right combination of IAM permissions for your Dataflow job (see the sketch below)
  • Determine your capacity needs by inspecting the relevant quotas for your Dataflow jobs
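
For instance, a sketch of running workers as a dedicated least-privilege service account; the account name is a placeholder, and the account needs roles/dataflow.worker plus access to the pipeline's sources and sinks:

    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(
        runner='DataflowRunner',
        project='my-project',
        service_account_email='dataflow-worker@my-project.iam.gserviceaccount.com',
    )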

Module 5: Security

  • Select your zonal data processing strategy using Dataflow, depending on your data locality needs
  • Implement best practices for a secure data processing environment (see the sketch below)
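
A sketch of locality- and security-related options with placeholder names; dataflow_kms_key applies a customer-managed encryption key to pipeline state:

    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(
        runner='DataflowRunner',
        region='europe-west2',      # keep processing in a chosen region
        use_public_ips=False,       # keep workers on private IPs only
        dataflow_kms_key=('projects/my-project/locations/europe-west2/'
                          'keyRings/my-ring/cryptoKeys/my-key'),
    )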

Module 6: Beam Concepts Review

  • Review the main Apache Beam concepts (Pipeline, PCollections, PTransforms, Runner, reading/writing, utility PTransforms, side inputs), bundles, and the DoFn lifecycle (see the sketch below)
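
The sketch below packs several of these concepts into one small pipeline: a Pipeline object, PCollections, named PTransforms, and a singleton side input:

    import apache_beam as beam

    with beam.Pipeline() as pipeline:
        words = pipeline | 'Words' >> beam.Create(['beam', 'dataflow', 'runner'])

        # A PCollection reduced to a single value, used below as a side input.
        max_len = words | 'Lengths' >> beam.Map(len) | beam.CombineGlobally(max)

        (words
         | 'FlagLongest' >> beam.Map(
             lambda word, longest: (word, len(word) == longest),
             longest=beam.pvalue.AsSingleton(max_len))
         | 'Print' >> beam.Map(print))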

Module 7: Windows, Watermarks, Triggers

  • Implement logic to handle your late data
  • Review different types of triggers
  • Review core streaming concepts (unbounded PCollections, windows); see the sketch below
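
A sketch of the windowing and trigger configuration these objectives describe: 60-second fixed windows, speculative early firings, one firing per late element, and two minutes of allowed lateness:

    import apache_beam as beam
    from apache_beam.transforms import trigger, window

    with beam.Pipeline() as pipeline:
        (pipeline
         | beam.Create([('user1', 1), ('user2', 1)])
         | 'Stamp' >> beam.Map(lambda kv: window.TimestampedValue(kv, 10))
         | beam.WindowInto(
             window.FixedWindows(60),
             trigger=trigger.AfterWatermark(
                 early=trigger.AfterProcessingTime(30),  # speculative early results
                 late=trigger.AfterCount(1)),            # re-fire per late element
             allowed_lateness=120,
             accumulation_mode=trigger.AccumulationMode.ACCUMULATING)
         | beam.CombinePerKey(sum)
         | 'Print' >> beam.Map(print))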

Module 8: Sources & Sinks

  • Write the I/O of your choice for your Dataflow pipeline
  • Tune your source/sink transformation for maximum performance
  • Create custom sources and sinks using SDF (a basic I/O sketch follows)
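
Custom SDF-based connectors are beyond a short sketch, but the built-in text connectors show the basic source/sink shape; the paths are placeholders:

    import apache_beam as beam

    with beam.Pipeline() as pipeline:
        (pipeline
         | 'Read' >> beam.io.ReadFromText('gs://my-bucket/input/*.txt')
         | 'Upper' >> beam.Map(str.upper)
         | 'Write' >> beam.io.WriteToText('gs://my-bucket/output/result',
                                          file_name_suffix='.txt'))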

Module 9: Schemas

  • Introduce schemas, which give developers a way to express structured data in their Beam pipelines
  • Use schemas to simplify your Beam code and improve the performance of your pipeline (see the sketch below)
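
A sketch of the idea: a NamedTuple gives the PCollection a schema, which lets transforms such as GroupBy refer to fields by name instead of hand-written key extraction:

    import typing

    import apache_beam as beam

    class Purchase(typing.NamedTuple):
        user: str
        amount: float

    with beam.Pipeline() as pipeline:
        (pipeline
         | beam.Create([Purchase('ada', 9.99), Purchase('ada', 5.00),
                        Purchase('grace', 12.50)])
         | 'AsRows' >> beam.Map(lambda p: p).with_output_types(Purchase)
         | beam.GroupBy('user').aggregate_field('amount', sum, 'total_spend')
         | beam.Map(print))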

Module 10: State & Timers

  • Identify use cases for state and timer API implementations
  • Select the right type of state and timers for your pipeline (see the sketch below)
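
A sketch of a stateful DoFn, applied to a keyed PCollection as keyed_pcollection | beam.ParDo(CountPerKey()): it keeps a per-key counter and an event-time timer that flushes the count when the watermark passes the end of the window:

    import apache_beam as beam
    from apache_beam.coders import VarIntCoder
    from apache_beam.transforms.timeutil import TimeDomain
    from apache_beam.transforms.userstate import (
        CombiningValueStateSpec, TimerSpec, on_timer)

    class CountPerKey(beam.DoFn):
        COUNT = CombiningValueStateSpec('count', VarIntCoder(), sum)
        FLUSH = TimerSpec('flush', TimeDomain.WATERMARK)

        def process(self, element,
                    win=beam.DoFn.WindowParam,
                    count=beam.DoFn.StateParam(COUNT),
                    flush=beam.DoFn.TimerParam(FLUSH)):
            count.add(1)
            flush.set(win.end)  # fire when the watermark passes the window end

        @on_timer(FLUSH)
        def flush_count(self, count=beam.DoFn.StateParam(COUNT)):
            yield count.read()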

Module 11: Best Practices

  • Implement best practices for Dataflow pipelines

Module 12: Dataflow SQL & DataFrames

  • Develop a Beam pipeline using SQL and DataFrames (see the sketch below)
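
Beam's SqlTransform relies on the cross-language expansion service, so the pure-Python DataFrame flavour is sketched here; the Sale type and its values are placeholders:

    import typing

    import apache_beam as beam
    from apache_beam.dataframe.convert import to_dataframe, to_pcollection

    class Sale(typing.NamedTuple):
        region: str
        amount: float

    with beam.Pipeline() as pipeline:
        sales = (pipeline
                 | beam.Create([Sale('uk', 10.0), Sale('uk', 5.0), Sale('us', 7.0)])
                 | 'AsRows' >> beam.Map(lambda s: s).with_output_types(Sale))

        df = to_dataframe(sales)                    # deferred, Pandas-like DataFrame
        totals = df.groupby('region').amount.sum()  # aggregate per region

        _ = to_pcollection(totals) | beam.Map(print)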

Module 13: Beam Notebooks

  • Prototype your pipeline in Python using Beam notebooks
  • Launch a job to Dataflow from a notebook (see the sketch below)
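
A sketch of the notebook workflow: the InteractiveRunner lets you build a pipeline cell by cell and materialize intermediate PCollections for inspection:

    import apache_beam as beam
    import apache_beam.runners.interactive.interactive_beam as ib
    from apache_beam.runners.interactive.interactive_runner import InteractiveRunner

    pipeline = beam.Pipeline(InteractiveRunner())
    words = pipeline | beam.Create(['beam', 'notebook', 'prototype'])
    lengths = words | beam.Map(lambda w: (w, len(w)))

    ib.show(lengths)       # render the PCollection in the notebook
    # ib.collect(lengths)  # or materialize it as a Pandas DataFrame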

Module 14: Monitoring

  • Navigate the Dataflow Job Details UI
  • Interpret Job Metrics charts to diagnose pipeline regressions
  • Set alerts on Dataflow jobs using Cloud Monitoring

Module 15: Logging & Error Reporting

  • Use the Dataflow logs and diagnostics widgets to troubleshoot pipeline issues

Module 16: Troubleshooting & Debug

  • Use a structured approach to debug your Dataflow pipelines
  • Examine common causes for pipeline failures

Module 17: Performance

  • Understand performance considerations for pipelines
  • Consider how the shape of your data can affect pipeline performance

Module 18: Testing & CI/CD

  • Review testing approaches for your Dataflow pipeline
  • Review frameworks and features available to streamline your CI/CD workflow for Dataflow pipelines (see the sketch below)
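
A sketch of a transform-level unit test using Beam's testing utilities; the transform under test stands in for your own business logic, and the test runs under any Python test framework:

    import apache_beam as beam
    from apache_beam.testing.test_pipeline import TestPipeline
    from apache_beam.testing.util import assert_that, equal_to

    def test_word_lengths():
        with TestPipeline() as pipeline:
            output = (pipeline
                      | beam.Create(['a', 'bb'])
                      | beam.Map(lambda w: (w, len(w))))
            assert_that(output, equal_to([('a', 1), ('bb', 2)]))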

Module 19: Reliability

  • Implement reliability best practices for your Dataflow pipelines

Module 20: Flex Templates

  • Use Flex Templates to standardize and reuse Dataflow pipeline code

Module 21: Summary

  • Summary of all modules

Who it's for

This course is suitable for data engineers, data analysts and data scientists aspiring to develop data engineering skills.

Prerequisites

To get the most out of this course, you should have an understanding of building batch data pipelines and building resilient streaming analytics systems.

BOOK THIS COURSE

Booking for a team or large group (5+ people)

For private sessions, call our sales team.

We will use the information you submit via this form in line with our Privacy Policy.

Call us: 020 7993 4556
