Data Pipelines: Using CDC to Ingest Data into Kafka

https://cnfl.io/data-pipelines-module-3

Using change data capture (CDC), you can stream data from a relational database into Apache Kafka®. This video explains the two types of CDC: log-based and query-based. Follow along as Tim Berglund (Senior Director of Developer Experience, Confluent) covers all of this in detail. A hedged configuration sketch illustrating both approaches appears at the end of this description.

Use the promo code PIPELINES101 to get $25 of free Confluent Cloud usage: https://cnfl.io/try-cloud-with-data-pipelines-course
Promo code details: https://cnfl.io/promo-code-disclaimer-data-pipelines-course

LEARN MORE
► No More Silos: Integrating Databases and Apache Kafka: https://rmoff.dev/no-more-silos
► Kafka Connect Deep Dive – JDBC Source Connector: https://www.confluent.io/blog/kafka-connect-deep-dive-jdbc-source-connector/?utm_source=youtube&utm_medium=video&utm_campaign=tm.devx_ch.cd-building-data-pipelines-with-apache-kafka-and-confluent_content.pipelines
► Add Key to Data Ingested Through Kafka Connect: https://kafka-tutorials.confluent.io/connect-add-key-to-source/kafka.html?utm_source=youtube&utm_medium=video&utm_campaign=tm.devx_ch.cd-building-data-pipelines-with-apache-kafka-and-confluent_content.pipelines
► JDBC Source and Sink: https://www.confluent.io/hub/confluentinc/kafka-connect-jdbc?utm_source=youtube&utm_medium=video&utm_campaign=tm.devx_ch.cd-building-data-pipelines-with-apache-kafka-and-confluent_content.pipelines
► Oracle CDC Source: https://www.confluent.io/hub/confluentinc/kafka-connect-oracle-cdc?utm_source=youtube&utm_medium=video&utm_campaign=tm.devx_ch.cd-building-data-pipelines-with-apache-kafka-and-confluent_content.pipelines
► Debezium MongoDB CDC Source: https://www.confluent.io/hub/debezium/debezium-connector-mongodb?utm_source=youtube&utm_medium=video&utm_campaign=tm.devx_ch.cd-building-data-pipelines-with-apache-kafka-and-confluent_content.pipelines
► Debezium MySQL CDC Source: https://www.confluent.io/hub/debezium/debezium-connector-mysql?utm_source=youtube&utm_medium=video&utm_campaign=tm.devx_ch.cd-building-data-pipelines-with-apache-kafka-and-confluent_content.pipelines
► Debezium SQL Server CDC Source: https://www.confluent.io/hub/debezium/debezium-connector-sqlserver?utm_source=youtube&utm_medium=video&utm_campaign=tm.devx_ch.cd-building-data-pipelines-with-apache-kafka-and-confluent_content.pipelines
► Debezium PostgreSQL CDC Source: https://www.confluent.io/hub/debezium/debezium-connector-postgresql?utm_source=youtube&utm_medium=video&utm_campaign=tm.devx_ch.cd-building-data-pipelines-with-apache-kafka-and-confluent_content.pipelines

ABOUT CONFLUENT
Confluent is pioneering a fundamentally new category of data infrastructure focused on data in motion. Confluent’s cloud-native offering is the foundational platform for data in motion – designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organization. With Confluent, organizations can meet the new business imperative of delivering rich, digital front-end customer experiences and transitioning to sophisticated, real-time, software-driven backend operations. To learn more, please visit www.confluent.io.

#kafka #datapipeline #ksqldb #confluent #apachekafka
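CONFIGURATION SKETCH
To make the two CDC styles concrete, below is a minimal, hedged sketch of how you might register both kinds of source connector against a Kafka Connect worker's REST API. It assumes a Connect worker at localhost:8083, a MySQL database named "inventory" with a "customers" table, and illustrative connector names, credentials, and column names that you would replace with your own. The property names follow the Confluent JDBC source connector and Debezium 1.x MySQL connector documentation; newer Debezium versions rename some properties (for example, database.server.name became topic.prefix), so check the docs for your version.

import json
import requests

# Assumed endpoint of a local Kafka Connect worker; adjust to your environment.
CONNECT_URL = "http://localhost:8083/connectors"

# Query-based CDC: the JDBC source connector polls the table and uses a
# timestamp and/or incrementing column to detect new and changed rows.
jdbc_source = {
    "name": "jdbc-customers-source",  # hypothetical connector name
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:mysql://mysql:3306/inventory",  # assumed database
        "connection.user": "connect",
        "connection.password": "connect-secret",
        "mode": "timestamp+incrementing",
        "timestamp.column.name": "updated_at",  # assumes the table has this column
        "incrementing.column.name": "id",
        "table.whitelist": "customers",
        "topic.prefix": "jdbc-",
        "poll.interval.ms": "1000",
    },
}

# Log-based CDC: the Debezium MySQL connector reads the database's binlog, so it
# also captures deletes and every intermediate update without polling the table.
debezium_source = {
    "name": "debezium-customers-source",  # hypothetical connector name
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mysql",
        "database.port": "3306",
        "database.user": "debezium",
        "database.password": "dbz-secret",
        "database.server.id": "184054",
        "database.server.name": "dbserver1",
        "table.include.list": "inventory.customers",
        "database.history.kafka.bootstrap.servers": "kafka:9092",
        "database.history.kafka.topic": "schema-changes.inventory",
    },
}

for connector in (jdbc_source, debezium_source):
    # POST /connectors creates the connector; Connect returns 201 on success
    # and 409 if a connector with that name already exists.
    resp = requests.post(CONNECT_URL, json=connector)
    print(connector["name"], resp.status_code, resp.text)

The trade-off the video discusses is visible in the two configs: the query-based JDBC approach needs nothing more than a JDBC connection, but it polls the table, misses deletes, and only sees the latest state of a row between polls; the log-based Debezium approach requires access to the database's transaction log but captures every change event, including deletes, with lower load on the source tables.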