I have written many a post about the importance of keeping your data moving. As Matthew Wilder sang in “Break My Stride,” it’s “got to keep on moving” to enable you to act within the business moment. Real-time data analytic pipelines form a solid foundation for achieving the state of Active Intelligence, and that state begins with working on the freshest data available to you.
However, getting data from a database management system in real time can be challenging. It is even more challenging when you have multiple data sources, including traditional databases such as Oracle, enterprise applications such as SAP, and mainframes. Extracting data in batch for transfer and replication purposes is slow and often incurs significant performance penalties. Running analytical queries on production transactional databases is often even more resource intensive and prohibitively expensive.
Change data capture (CDC) technology is the answer to freeing your data from locked-in sources and keeping your data moving. One company that helps you keep your data in motion is Confluent, whose Confluent Cloud is a fully managed, cloud-native service for Apache Kafka® supporting data streaming for both analytical and operational use cases.
Qlik data streaming CDC technology captures source operations as a sequence of change events. These events can then be written to systems like Kafka, where individual groups across your organization consume and model them for analytics. They can also power microservice applications and help ensure data governance, all without impacting your production data source.
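To make the event model concrete, here is a minimal Python sketch of how a downstream consumer might fold a stream of CDC change events into an in-memory replica of a table. The event shape here (`op`, `table`, `key`, `data`) is a hypothetical illustration, not Qlik's or Kafka's actual wire format:

```python
# Hypothetical CDC change events, in the order they occurred at the source.
events = [
    {"op": "insert", "table": "customers", "key": 1,
     "data": {"name": "Ada", "tier": "gold"}},
    {"op": "update", "table": "customers", "key": 1,
     "data": {"tier": "platinum"}},
    {"op": "delete", "table": "customers", "key": 2, "data": None},
]

def apply_event(state, event):
    """Fold one change event into an in-memory replica of the source table."""
    table = state.setdefault(event["table"], {})
    if event["op"] == "insert":
        table[event["key"]] = dict(event["data"])
    elif event["op"] == "update":
        table[event["key"]].update(event["data"])
    elif event["op"] == "delete":
        table.pop(event["key"], None)  # ignore deletes for unseen keys
    return state

replica = {}
for e in events:
    apply_event(replica, e)

print(replica["customers"])  # {1: {'name': 'Ada', 'tier': 'platinum'}}
```

Because each consumer replays the same ordered event log independently, many teams can build their own views of the data without ever querying the production database.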
Qlik is proud to work with our technical partners, from the major cloud vendors to streaming platforms like Confluent Cloud, to enable organizations like yours to work in the business moment.
I was privileged to join my colleague and Kafka guru, John Neal, to discuss all things CDC with Tim Berglund from Confluent on a recent podcast. We talked about use cases where we see real-time Kafka data ingestion, processing, and analytics moving the needle, including real-time customer predictions, supply-chain optimization, and operational reporting. We also covered how capturing and tracking data changes is critical for enriching data quality in your machine learning models.
Listen to the podcast with Confluent to learn how Qlik and Confluent can help you with data ingestion to enable real-time data-driven insights.
You'll be pleased you did. When it comes to your data, it won’t be long until you're singing: “Ain't nothin' gonna break my stride. Nobody gonna slow me down, oh no, I got to keep [my data] moving.”
Don’t let slow data break your stride: @Qlik's @adammayerwrk talks Change #Data Capture with John Neal & @Confluent to keep data moving
In this article:
Data Integration