Attunity Compose for Data Lakes

https://www.qlik.com/us/products/qlik-compose-data-lakes



Automate analytics-ready data pipelines

Qlik Compose for Data Lakes (formerly Attunity Compose) automates your data pipelines to create analytics-ready data sets. By automating data ingestion, schema creation, and continual updates, organizations realize faster time-to-value from their existing data lake investments.



Universal data ingestion
Supporting one of the broadest ranges of data sources, Qlik Compose for Data Lakes ingests data into your data lake whether it’s on-premises, in the cloud, or in a hybrid environment. Sources include:

* RDBMS: DB2, MySQL, Oracle, PostgreSQL, SQL Server, Sybase
* Data warehouses: Exadata, IBM Netezza, Pivotal, Teradata, Vertica
* Legacy systems: DB2 z/OS, IMS/DB, RMS, VSAM
* Cloud: Amazon Web Services, Microsoft Azure, Google Cloud
* Enterprise applications: SAP
* Messaging systems: Apache Kafka
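To make the ingestion idea concrete, here is a minimal, generic sketch of landing an RDBMS table in a data lake. SQLite and JSON-lines files stand in for a real source and lake storage; the function and file names are illustrative assumptions, not Compose's actual implementation or API.

```python
import json
import os
import sqlite3
import tempfile

def ingest_table(conn, table, landing_dir):
    """Full-load a source table into a lake 'landing zone' as JSON lines.

    A real tool would also capture schema and ongoing changes; this only
    illustrates the initial-load step.
    """
    os.makedirs(landing_dir, exist_ok=True)
    cur = conn.execute(f"SELECT * FROM {table}")
    cols = [d[0] for d in cur.description]
    path = os.path.join(landing_dir, f"{table}.jsonl")
    with open(path, "w") as f:
        for row in cur:
            f.write(json.dumps(dict(zip(cols, row))) + "\n")
    return path

# Demo with an in-memory source table standing in for a production RDBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])
landing = tempfile.mkdtemp()
path = ingest_table(conn, "orders", landing)
```

The same loop shape applies whether the source is a database cursor, a warehouse export, or a Kafka consumer, which is what "universal ingestion" amounts to in practice: a common landing format over heterogeneous readers.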

Easy data structuring and transformation

An intuitive and guided user interface helps you build, model and execute data lake pipelines.




Continuous updates

Be confident that your operational data store (ODS) and historical data store (HDS) accurately represent your source systems.
* Use change data capture (CDC) to enable real-time analytics with less administrative and processing overhead.
* Efficiently process initial loading with parallel threading.
* Leverage time-based partitioning with transactional consistency to ensure that only transactions completed within a specified time are processed.
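The last point can be sketched in a few lines. In this hypothetical example (plain Python, not Compose code), change records carry a commit timestamp, and only transactions committed inside a closed partition window are applied, so a partition is never processed with in-flight transactions half-included.

```python
from datetime import datetime

# Hypothetical CDC change records: (commit_time, table, operation, row).
changes = [
    (datetime(2024, 1, 1, 10, 5), "orders", "INSERT", {"id": 1}),
    (datetime(2024, 1, 1, 10, 55), "orders", "UPDATE", {"id": 1}),
    (datetime(2024, 1, 1, 11, 10), "orders", "INSERT", {"id": 2}),
]

def closed_partition(changes, window_start, window_end):
    """Keep only changes whose transactions committed inside the closed
    window [window_start, window_end), giving transactional consistency
    for that time-based partition."""
    return [c for c in changes if window_start <= c[0] < window_end]

# Process the 10:00-11:00 partition; the 11:10 change waits for the
# next window.
batch = closed_partition(changes,
                         datetime(2024, 1, 1, 10, 0),
                         datetime(2024, 1, 1, 11, 0))
```

Filtering on commit time rather than capture time is what keeps each partition consistent: every transaction is either wholly inside a window or wholly outside it.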