Staging / Ingestion Module

Build your own integrated data model by fetching data from various internal and external data sources.

Databases

Connect to any database or other source that provides a JDBC driver: Oracle, Microsoft SQL Server, BigQuery, MySQL, Azure Synapse Analytics, DB2, Amazon Redshift, MariaDB, PostgreSQL, SAP HANA and many more.
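Datavault Builder handles JDBC connectivity itself; purely as an illustration of the staging pattern behind it, here is a minimal sketch using Python's DB-API, with sqlite3 standing in for a JDBC source. Table and function names are hypothetical, not the product's API.

```python
import sqlite3

def stage_full_load(source_conn, staging_conn, table, staging_table):
    """Copy a full snapshot of a source table into a staging table.

    Illustrative only: sqlite3 stands in for a JDBC source here.
    """
    cursor = source_conn.execute(f"SELECT * FROM {table}")
    cols = [d[0] for d in cursor.description]
    rows = cursor.fetchall()
    placeholders = ", ".join("?" for _ in cols)
    # Rebuild the staging table, then bulk-insert the snapshot.
    staging_conn.execute(f"DROP TABLE IF EXISTS {staging_table}")
    staging_conn.execute(f"CREATE TABLE {staging_table} ({', '.join(cols)})")
    staging_conn.executemany(
        f"INSERT INTO {staging_table} VALUES ({placeholders})", rows
    )
    staging_conn.commit()
    return len(rows)
```

In the real product the source would be any JDBC-capable system from the list above; only the snapshot-into-staging idea carries over.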

Files

Connect to all common file formats: CSV, JSON, XML, TSV, fixed-width files, Parquet, Iceberg and, most importantly, Excel files 🙂
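File connectivity is built into the product; as a sketch of the kind of parsing involved, here are two of the simpler formats — delimited and fixed-width — handled with Python's standard library. The column names and field widths are made up for the example.

```python
import csv
import io

def read_delimited(text, delimiter=","):
    """Parse CSV/TSV text into a list of dicts keyed by the header row."""
    return list(csv.DictReader(io.StringIO(text), delimiter=delimiter))

def read_fixed_width(lines, widths, names):
    """Slice fixed-width lines into dicts; widths and names must align."""
    records = []
    for line in lines:
        record, pos = {}, 0
        for name, width in zip(names, widths):
            record[name] = line[pos:pos + width].strip()
            pos += width
        records.append(record)
    return records
```

Formats like Parquet, Iceberg or Excel need dedicated readers, which the product ships with; the examples above only show the flat-file end of the spectrum.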

NoSQL

Connect to NoSQL sources such as Delta Lake, Elasticsearch, Hive, Kafka, Kinesis, MongoDB, Prometheus, Redis, Thrift and others.

Python Classes

Connect to online applications that provide a Python class for connection: Google Analytics, Salesforce, Exact Online and whatever else you find as a Python class.
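The product's actual connector contract is not documented here, so the following is a hypothetical sketch of what "a Python class for connection" could look like, with an in-memory stand-in playing the role of an online application such as Salesforce or Google Analytics. All names are our own assumptions.

```python
class SourceConnector:
    """Hypothetical minimal interface for a Python-class source."""

    def connect(self):
        raise NotImplementedError

    def fetch(self):
        """Yield one dict per source record."""
        raise NotImplementedError

class InMemoryConnector(SourceConnector):
    """Stand-in for e.g. a Google Analytics or Salesforce client."""

    def __init__(self, records):
        self._records = records
        self.connected = False

    def connect(self):
        self.connected = True
        return self

    def fetch(self):
        if not self.connected:
            raise RuntimeError("call connect() first")
        yield from self._records

def stage(connector):
    """Pull all records from a connector into a staging list."""
    return list(connector.connect().fetch())
```

The point of such an interface is that any API wrapped in a class with this shape becomes a source, regardless of the protocol behind it.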

Compatible with other ingestion tools

Combine Datavault Builder with the databases' built-in ingestion tools or third-party ETL tools: use the native ingestion of Snowflake, Exasol or Microsoft Azure Synapse Analytics, or ETL tools like Informatica, Matillion or Talend.

Batch Loading

Connect to any data source out of the box and let Datavault Builder calculate which data has changed since the last load.
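Datavault Builder's exact change-detection algorithm is internal; a common approach to the same problem, sketched here under our own assumptions, is to compare row hashes between the previous and the current snapshot, keyed by a business key:

```python
import hashlib

def row_hash(row):
    """Stable hash of a row's values, used to detect changed rows."""
    payload = "|".join(str(v) for v in row.values())
    return hashlib.sha256(payload.encode()).hexdigest()

def detect_changes(previous, current, key):
    """Compare two snapshots (lists of dicts) keyed by a business key.

    Returns (inserts, updates, deletes): new rows, rows whose hash
    changed, and keys that disappeared since the last load.
    """
    prev = {r[key]: row_hash(r) for r in previous}
    curr = {r[key]: r for r in current}
    inserts = [r for k, r in curr.items() if k not in prev]
    updates = [r for k, r in curr.items() if k in prev and row_hash(r) != prev[k]]
    deletes = [k for k in prev if k not in curr]
    return inserts, updates, deletes
```

Storing only the hashes of the previous load keeps the comparison cheap even when the source cannot report changes itself.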

Delta Loads

Based on a source column, Datavault Builder can filter the data even before staging.
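In other words, a delta column (for example a modification timestamp) acts as a high-water mark that is pushed down into the extraction query, so unchanged rows never leave the source. A minimal sketch of that idea, assuming an integer modified_at column and sqlite3 again standing in for the source:

```python
import sqlite3

def load_delta(conn, table, delta_column, last_value):
    """Fetch only rows past the high-water mark; return them plus the new mark."""
    rows = conn.execute(
        f"SELECT {delta_column}, * FROM {table} "
        f"WHERE {delta_column} > ? ORDER BY {delta_column}",
        (last_value,),
    ).fetchall()
    # Advance the mark to the highest delta value seen in this batch.
    new_mark = max((r[0] for r in rows), default=last_value)
    return [r[1:] for r in rows], new_mark
```

The returned mark is persisted and fed back in on the next run, so each load only transfers the rows added or modified since the previous one.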

CDC Loads

Datavault Builder can consume CDC streams from Microsoft SQL Server, Qlik Replicate, Oracle GoldenGate, Kafka and others, and interprets the ingestion-time order correctly.
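The wire formats of those CDC sources differ, but the core ordering problem is the same: events must be replayed in commit order, not arrival order. A simplified sketch with a made-up event shape (lsn, seq, op, key, row) — the real products use their own log positions and payloads:

```python
from operator import itemgetter

def apply_cdc(snapshot, events):
    """Replay CDC events onto a keyed snapshot in commit order.

    Events may arrive out of order; sorting by (lsn, seq) restores
    the order in which the source committed them.
    """
    state = dict(snapshot)
    for lsn, seq, op, key, row in sorted(events, key=itemgetter(0, 1)):
        if op in ("I", "U"):      # insert or update: last writer wins
            state[key] = row
        elif op == "D":           # delete: drop the key if present
            state.pop(key, None)
    return state
```

Without this ordering step, a late-arriving update could overwrite a newer one, which is exactly the failure mode correct CDC interpretation avoids.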

Near Real Time

Data can also be consumed from enterprise service buses like Kafka.
