Building Pipelines with Dynamic Tables
With Dynamic Tables, Snowflake simplifies the creation and maintenance of data pipelines. Gone are the days when users had to manually consolidate and refresh their data. But how exactly do you use Dynamic Tables? What are the key design decisions? And how do you deal with errors and performance regressions?
In this video, you'll learn how to set up a full data pipeline. We'll diagnose common issues using the metrics and monitoring sources that come with Dynamic Tables, and where necessary, adjust our setup on the fly to keep our business requirements met. Along the way, we'll explain the effect of the target lag parameter, the difference between full and incremental refreshes, and the reasons a refresh may be skipped.
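For reference, here is a minimal sketch of the kind of pipeline step covered in the video, written in Snowflake SQL. The table, warehouse, and source names (daily_revenue, pipeline_wh, raw_orders) are placeholders for illustration only; TARGET_LAG and REFRESH_MODE are the parameters discussed above, and the refresh history query is one way to inspect refresh outcomes, including skipped ones.

-- A dynamic table that Snowflake refreshes automatically to stay within the target lag.
CREATE OR REPLACE DYNAMIC TABLE daily_revenue
  TARGET_LAG = '10 minutes'      -- how stale the results are allowed to become
  WAREHOUSE = pipeline_wh        -- warehouse that runs the refreshes
  REFRESH_MODE = AUTO            -- let Snowflake choose incremental or full refresh
  AS
    SELECT order_date, SUM(amount) AS revenue
    FROM raw_orders
    GROUP BY order_date;

-- Inspect recent refreshes (successful, failed, or skipped) for troubleshooting.
SELECT name, state, refresh_action, refresh_trigger, data_timestamp
FROM TABLE(INFORMATION_SCHEMA.DYNAMIC_TABLE_REFRESH_HISTORY(NAME => 'DAILY_REVENUE'));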
We also invite you to watch the webinar "Batch and Streaming Data Ingestion Best Practices with Snowflake" to learn how to leverage COPY, Snowpipe, and Snowpipe Streaming to build batch and streaming data pipelines at scale. Register now: 👉 https://tinyurl.com/2cezbtp9
To learn more, enroll in the Introduction to Modern Data Engineering with Snowflake course on Coursera:
👉 https://www.coursera.org/learn/data-engineering-snowflake/?utm_medium=institutions&utm_source=snowflake&utm_campaign=yt-promo
❄ Join our YouTube community ❄
👉 https://bit.ly/3lzfeeB
Learn how to build your application on Snowflake:
👉 developers.snowflake.com
Join the Snowflake Community:
👉 community.snowflake.com