Accelerate Cloud Initiatives with Simple, Automated Cloud Data Integration
Qlik Replicate – branded as Qlik CloudBeam in partner cloud marketplaces such as the Amazon Web Services Marketplace – makes cloud data integration simpler and more agile by enabling source-to-target data migration and synchronization without any manual coding. Data administrators and analysts can use the intuitive Qlik console to configure, execute, and monitor cloud data migration jobs. Qlik automatically generates the required code, sharply reducing reliance on development staff and accelerating time-to-value for cloud data initiatives.
Qlik also enables IT agility by serving as a single, unified solution for an organization's diverse cloud data integration needs. It supports integration and data movement between nearly any type of source system – relational databases, mainframe systems, data warehouses, and data lakes – and the leading cloud data platforms, including Amazon Web Services (S3, EC2, RDS, Redshift, EMR), Microsoft Azure, and Google Cloud. As a universal data integration tool, Qlik eliminates the complexity, maintenance cost, and risk of maintaining multiple disparate integration scripts and utilities.
While making it easy and fast to implement bulk data transfers to and among cloud data platforms, Qlik Replicate/CloudBeam also supports real-time incremental cloud data replication. With Qlik's agentless change data capture (CDC) technology, you can easily keep on-premises and cloud data stores in sync, ensuring that your cloud initiatives benefit from the freshest possible data. You can also transfer data securely across Wide Area Networks (WANs) by leveraging NSA-approved AES-256 encrypted multi-pathing. To maximize use of available bandwidth, large tables can be compressed and split into multiple, configurable streams, while small tables or CDC streams can be batched together.
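The multi-stream idea can be pictured with a short, product-agnostic sketch. The Python below is illustrative only and does not use any Qlik Replicate API; the functions shown (compress, send_stream, bulk_transfer, batch_changes) are hypothetical stand-ins that show how a bulk extract might be split into parallel compressed streams while small change events are grouped into batches before crossing the WAN.

```python
"""Illustrative sketch only: splitting a bulk table extract into several
compressed streams and batching small CDC events. All names are
hypothetical and are not part of any Qlik Replicate interface."""
import gzip
import json
from concurrent.futures import ThreadPoolExecutor

STREAM_COUNT = 4   # configurable number of parallel bulk-load streams
BATCH_SIZE = 500   # small CDC events are grouped into batches of this size


def compress(rows):
    """Serialize and gzip a chunk of rows before sending it over the WAN."""
    return gzip.compress(json.dumps(rows).encode("utf-8"))


def send_stream(stream_id, payload):
    """Placeholder for an encrypted WAN transfer (e.g. over TLS/AES-256)."""
    print(f"stream {stream_id}: sending {len(payload)} compressed bytes")


def bulk_transfer(rows, streams=STREAM_COUNT):
    """Split a large table into chunks and send them in parallel streams."""
    chunk = max(1, len(rows) // streams)
    chunks = [rows[i:i + chunk] for i in range(0, len(rows), chunk)]
    with ThreadPoolExecutor(max_workers=streams) as pool:
        for i, part in enumerate(chunks):
            pool.submit(send_stream, i, compress(part))


def batch_changes(change_events, batch_size=BATCH_SIZE):
    """Group small change events so each round trip carries a full batch."""
    for i in range(0, len(change_events), batch_size):
        send_stream("cdc", compress(change_events[i:i + batch_size]))


if __name__ == "__main__":
    table = [{"id": n, "value": n * n} for n in range(10_000)]
    bulk_transfer(table)
    batch_changes([{"op": "update", "id": n} for n in range(1_200)])
```

The design choice the sketch highlights is simply that compression plus a configurable degree of parallelism keeps a long-haul link saturated, while batching prevents many tiny CDC payloads from wasting round trips.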
To address unpredictable events such as network outages, Qlik Replicate recovers seamlessly from transfer interruptions, resuming from the exact point of failure. It does this by staging source data in a temporary target-side directory and validating the data before loading it into the target database.
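The stage-validate-load pattern behind this recovery can be sketched generically. The Python below is a minimal illustration under assumed names (stage_chunk, validate_chunk, load_with_recovery, a local staging directory and checkpoint file); it is not Qlik Replicate's internal implementation, only the general idea of checkpointing so an interrupted transfer resumes from the last chunk that was validated and loaded.

```python
"""Illustrative sketch only: a generic stage-validate-load pattern with a
checkpoint so a transfer can resume after an interruption. Names and
paths are hypothetical, not Qlik Replicate internals."""
import hashlib
import json
from pathlib import Path

STAGING_DIR = Path("staging")            # temporary target-side directory
CHECKPOINT = STAGING_DIR / "checkpoint.json"


def stage_chunk(chunk_id, rows):
    """Write a chunk and return its checksum for later validation."""
    STAGING_DIR.mkdir(exist_ok=True)
    payload = json.dumps(rows).encode("utf-8")
    (STAGING_DIR / f"chunk_{chunk_id}.json").write_bytes(payload)
    return hashlib.sha256(payload).hexdigest()


def validate_chunk(chunk_id, expected_digest):
    """Re-read the staged chunk and confirm its checksum before loading."""
    payload = (STAGING_DIR / f"chunk_{chunk_id}.json").read_bytes()
    return hashlib.sha256(payload).hexdigest() == expected_digest


def load_with_recovery(chunks, load_fn):
    """Resume from the last checkpointed chunk after an interruption."""
    done = json.loads(CHECKPOINT.read_text()) if CHECKPOINT.exists() else -1
    for chunk_id, rows in enumerate(chunks):
        if chunk_id <= done:
            continue                     # already loaded before the outage
        digest = stage_chunk(chunk_id, rows)
        if not validate_chunk(chunk_id, digest):
            raise RuntimeError(f"chunk {chunk_id} failed validation")
        load_fn(rows)                    # e.g. insert into the target table
        CHECKPOINT.write_text(json.dumps(chunk_id))


if __name__ == "__main__":
    data = [[{"id": n} for n in range(i * 100, (i + 1) * 100)] for i in range(5)]
    load_with_recovery(data, load_fn=lambda rows: print(f"loaded {len(rows)} rows"))
```

If the process is killed mid-run, rerunning it skips every chunk already recorded in the checkpoint file and picks up at the point of failure, which is the behavior the staging-and-validation approach is meant to guarantee.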