
Abstract: Practically all business data is produced as an infinite stream of events: sensor measurements, website engagements, geo-location data from industrial IoT devices, database modifications, stock trades, and financial transactions, to name a few. Successful data-driven organizations don't just discover valuable insights once; they do so continuously and in real time. But how do you make the leap from analytics on historical data to real-time insights on streams?
This talk will introduce Apache Flink as a general data processor for all of these use cases, on both finite and infinite streams. We demonstrate Flink's SQL engine as a changelog processor, with an ecosystem tailored to processing CDC data and maintaining materialized views. We will use Kafka as an upsert log and Debezium to connect to databases, and enrich streams from various sources using different kinds of joins. Finally, we illustrate how to combine Flink's Table API with the DataStream API for event-driven applications beyond SQL.
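For context, the sketch below shows what a pipeline of the kind the abstract describes can look like in Flink's Java Table API. It is a minimal, hedged example rather than the speaker's actual demo: the table names, Kafka topics, and schemas (orders, customers, localhost:9092) are hypothetical placeholders. It reads a Debezium-formatted Kafka topic as a CDC source, treats a second topic as an upsert log, maintains a joined materialized view, and hands the result to the DataStream API.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class CdcEnrichmentSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Hypothetical database changelog captured by Debezium and written to Kafka;
        // Flink interprets each record as an INSERT, UPDATE, or DELETE row.
        tEnv.executeSql(
            "CREATE TABLE orders (" +
            "  order_id    BIGINT," +
            "  customer_id BIGINT," +
            "  amount      DECIMAL(10, 2)" +
            ") WITH (" +
            "  'connector' = 'kafka'," +
            "  'topic' = 'orders'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'scan.startup.mode' = 'earliest-offset'," +
            "  'format' = 'debezium-json'" +
            ")");

        // Hypothetical dimension table kept in Kafka as an upsert log keyed by customer_id.
        tEnv.executeSql(
            "CREATE TABLE customers (" +
            "  customer_id BIGINT," +
            "  name        STRING," +
            "  PRIMARY KEY (customer_id) NOT ENFORCED" +
            ") WITH (" +
            "  'connector' = 'upsert-kafka'," +
            "  'topic' = 'customers'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'key.format' = 'json'," +
            "  'value.format' = 'json'" +
            ")");

        // A regular join over the two changelogs: Flink keeps the result up to date
        // as either side changes, effectively maintaining a materialized view.
        Table enriched = tEnv.sqlQuery(
            "SELECT o.order_id, c.name, o.amount " +
            "FROM orders AS o " +
            "JOIN customers AS c ON o.customer_id = c.customer_id");

        // Cross from SQL into the DataStream API to attach event-driven logic
        // that is hard to express in SQL alone.
        DataStream<Row> changelog = tEnv.toChangelogStream(enriched);
        changelog.print();

        env.execute("cdc-enrichment-sketch");
    }
}
```

The regular join keeps both inputs in state, which is what lets the query behave like a continuously maintained view; Debezium's change records allow updates and deletes to retract previously emitted results downstream.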
Bio: Bio Coming Soon!

Seth Wiesman
Senior Solutions Architect & Apache Flink Committer | Ververica
