Update/Upgrade Installation

License Replacement

When your license expires, you will receive a new license key from Datavault Builder License Management. The license file is usually named datavault_builder_license.lic and is located in the same directory as your docker-compose.yml file.


Recreating the core will terminate any active processes. Make sure that no loading processes are running while you perform the replacement.


  1. Replace the existing license file:

    Navigate to the folder where the license file is located and overwrite the file (default name: datavault_builder_license.lic).

  2. Update Core
    docker compose up -d --force-recreate core
  3. Restart the API
    docker compose restart api


If you had applied any patches, make sure to reapply them after the license is replaced and the system is back up.

Version Change

Before Updating/Upgrading your environment to a new version, make sure to back it up.

  • Create a copy of the database of your environment

  • Create a copy of the configuration files (docker-compose, secrets, …)

We recommend performing any update as a blue-green deployment. The recommended process is therefore as follows; some steps might not be necessary depending on the update/upgrade.


  1. Change the version number

    In the .env-file, adjust the line that sets the version tag and save the change.
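A minimal sketch of this step, assuming the version is controlled by the DVB_TAG variable referenced elsewhere in the compose file (the variable name and version numbers are illustrative; check your own .env for the actual line):

```shell
# Stand-in for your .env; DVB_TAG and the version numbers are assumptions.
printf 'DVB_TAG=6.1.0.0\n' > .env.example

# Bump the image tag to the new target version (keeps a .bak copy).
sed -i.bak 's/^DVB_TAG=.*/DVB_TAG=7.0.0.0/' .env.example
cat .env.example
```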

  2. Start pulling the new images
    docker-compose pull
  3. Stop the application part of the Datavault Builder if it is running.

    This also means that no loading processes should be running during the update procedure.

    docker-compose down
  4. Optional: apply manual update scripts if the release notes mention them

    Open the Update Scripts section in the Portal and download all update scripts for your database type (called DVB 4.*.*.* to 4.*.*.* Databasetype Update).

    All means: every script from your current version number up to the target version to install!

    Apply the update scripts to the database one after the other.

  5. Start up the environment.

    Some additional updates might be applied automatically during startup of the new version (mainly to the data model; if this is the case, it is mentioned in the release notes).

    docker-compose up -d
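For step 4, a small helper can list the downloaded update scripts in version order before you apply them. The file names below are illustrative stand-ins following the portal naming, and the actual apply command depends on your database type, so it is only echoed here:

```shell
# Stand-in scripts named like the portal downloads (names are illustrative).
touch dvb_4.0.0.11_to_4.0.2.0_oracle_update.sql \
      dvb_4.0.2.0_to_4.0.10.0_oracle_update.sql

# List the scripts in version order (sort -V puts 4.0.2.0 before 4.0.10.0)
# and echo the apply step; replace 'echo' with your database client's command.
for f in $(ls dvb_*_update.sql | sort -V); do
  echo "apply $f"
done
```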


  • Update files can overlap: an update script may already be included in another, cumulative update. In this case, you can take just the cumulative update. For example: dvb_4.0.0rc19_to_4.0.1.0_oracle_update is included in dvb_4.0.0.11_to_4.0.2.0_oracle_update.

  • Check the current release notes for new parameters that may be required.

  • If the release notes mention a model update for the version, this process can take up to a couple of hours, depending on the upgrade complexity, the size of the model, and the available resources. You can follow the progress in the log of the core container (docker-compose logs -tf core).

  • Within a major release (e.g. 6), upgrades can be done directly from any lower to any higher version. Intermediate versions can be skipped and do not need to be started.

Major Upgrade

For major upgrades, some additional steps may be necessary.

6.X.X.X to 7.X.X.X

  1. Upgrade to the latest version of Datavault Builder 6.

  2. Follow the regular steps for an Update of a Datavault Builder environment as described above in chapter “Version Change”.

  3. When the environment is stopped, modify the docker-compose.yml:

  1. REMOVE the following configuration for the cicd service, as its functionality has been migrated into the main core engine.

      image: ${DVB_REGISTRY}${DVB_PROJECT}/cicd:${DVB_TAG}
  2. REMOVE the configuration for the scheduler service, as its functionality has been migrated into the main core/connection pool engine.

      image: ${DVB_REGISTRY}${DVB_PROJECT}/scheduler:${DVB_TAG}
  3. REMOVE the secret for scheduler_password, as it is no longer needed.

        file: secrets/scheduler_password.txt
  4. Increase the connection_pool environment parameters if they are currently set lower than 70 - otherwise, parallelization in the deployment module will be limited.

  5. Increase the core environment parameters if they are currently set lower than -Xss8M - otherwise, deployment of larger models may fail with a stack overflow error.

    - 'PLJAVA_VMOPTIONS=-Djava.security.egd=file:///dev/urandom -Xms128M -Xss8M'
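As an illustration, the relevant parts of a docker-compose.yml affected by these steps might look as follows. The service and secret names are taken from the steps above; the surrounding keys and nesting are assumptions about a typical compose file, not a verbatim template:

```yaml
services:
  cicd:                         # REMOVE: migrated into the core engine
    image: ${DVB_REGISTRY}${DVB_PROJECT}/cicd:${DVB_TAG}
  scheduler:                    # REMOVE: migrated into core/connection pool
    image: ${DVB_REGISTRY}${DVB_PROJECT}/scheduler:${DVB_TAG}
  core:
    environment:
      # KEEP and increase: -Xss8M avoids stack overflows on larger models
      - 'PLJAVA_VMOPTIONS=-Djava.security.egd=file:///dev/urandom -Xms128M -Xss8M'

secrets:
  scheduler_password:           # REMOVE: no longer needed
    file: secrets/scheduler_password.txt
```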


Breaking Changes: the previous CICD APIs are deprecated and have been replaced with compatibility APIs:

  • cicd APIs no longer accept username + password for login but require regular authentication.

  • cicd APIs are now reached under the regular URL path /rpc instead of /cicd.

  • limitation: the “exportModel” API is only able to return the new export format 2.0.

  • legacy APIs no longer return a JUnit response.


Usage Hints / Limitations:

  • History: the deployment history only shows the latest 20 deployments and is no longer visible in the UI after a recreation of the core service (docker-compose down & up).

  • Export: the staging table column order may differ between environments and lead to a changed column order in the export.

  • Deployment: direct comparison is only possible between environments on version 7+.

  • The new deployment state and packages cannot be deployed against versions below 7.

  • An unaltered default BR depends on its BO: if the corresponding BO gets deployed, it will also update the unaltered default BR (even if it is not selected for deployment).

  • Make sure files generated over the API are checked into your enterprise versioning system using Linux line breaks.


Scripts calling (cicd) APIs at the path /cicd/ can be adjusted to use the compatibility APIs:

  • instead, call the path /rpc/<api>

  • also remove the credentials from the request payloads; instead:

  • add an /rpc/login API call to generate an authentication token and pass it in for the other calls made against /rpc/ endpoints.

  • the output of these compatibility APIs may be slightly different, as they are based on the new cicd backend.
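The path adjustment for existing scripts can be sketched as a simple rewrite from /cicd/ to /rpc/. The host name and script below are illustrative stand-ins (exportModel is one of the cicd APIs); switching to token-based authentication via /rpc/login still has to be done by hand:

```shell
# Stand-in for a legacy script that still calls the old /cicd/ path.
printf 'curl -s https://dvb.example.com/cicd/exportModel\n' > legacy_call.sh

# Rewrite the URL path from /cicd/ to /rpc/ for the compatibility APIs.
sed -i.bak 's|/cicd/|/rpc/|g' legacy_call.sh
cat legacy_call.sh
```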

5.X.X.X to 6.X.X.X

Hints to consider before you begin
  • MS SQL:
    • Truncate the content of your larger staging tables. This can significantly reduce the upgrade duration.

  1. Upgrade to the latest version of Datavault Builder 5.

  2. Follow the regular steps for an Update of a Datavault Builder environment as described above in chapter “Version Change”.

  3. To make use of the newly offered CICD APIs, you will need to extend your docker-compose.yml with an additional cicd service.

    # make sure to use correct indentation
    cicd:
        image: ${DVB_REGISTRY}${DVB_PROJECT}/cicd:${DVB_TAG}
        networks:
            - dvbnet
        restart: always

    After performing these steps, once the environment is up again, a manually triggered maintenance function is available. It is not called automatically, in order to reduce downtime. It can be executed by running the following command on the host in the directory of your docker-compose.yml.

    docker-compose exec core /opt/datavaultbuilder/pgsql/bin/psql -U dbadmin -d datavaultbuilder_core -c "SELECT dvb_core.f_manual_fix_6_mssql_clustered_indices(force_execution => true);"

    This will update the clustering of indices for the hubs, hub load tracking satellites, links and link load tracking satellites to improve load performance. The function must be run with force_execution => true. It additionally accepts the following parameters, so that the upgrade can also be performed partially:

    • object_type: <text> (‘hub’, ‘hub_load’, ‘link’, ‘link_load’)

    • object_filter: <text> (if a filter is provided, an object type must also be provided)

    Sample for partial execution for one specific hub:

    docker-compose exec core /opt/datavaultbuilder/pgsql/bin/psql -U dbadmin -d datavaultbuilder_core -c "
    SELECT dvb_core.f_manual_fix_6_mssql_clustered_indices(
    force_execution => true,
    object_type => 'hub',
    object_filter => 'hub_id = ''h_some_hub'''
    );"