Upgrading from 5.0

This section describes how to upgrade a Corda cluster from 5.0 to 5.1. It lists the prerequisites and describes the following steps required to perform an upgrade:

  1. Back Up the Corda Database
  2. Test the Migration
  3. Scale Down the Running Corda Worker Instances
  4. Migrate the Corda Cluster Database
  5. Update the Database Connection Configuration Table
  6. Migrate the Virtual Node Databases
  7. Update Kafka Topics
  8. Launch the Corda 5.1 Workers

For information about how to roll back an upgrade, see Rolling Back.

Following a platform upgrade, Network Operators should upgrade their networks. For more information, see Upgrading an Application Network.

Prerequisites

This documentation assumes you have full administrator access to the Corda CLI 5.1 (a command line tool that supports various Corda-related tasks, including Corda Package Installer (CPI) creation and Corda cluster management) and to Kafka. You must ensure that you can create a connection to your Kafka deployment. You can check this by confirming that you can list Kafka topics with a command such as the following:

kafka-topics --bootstrap-server=prereqs-kafka.test-namespace:9092 --list

Back Up the Corda Database

You must create a backup of all schemas in your database (a pg_dump sketch follows the list of schemas):

  • Cluster — name determined at bootstrap. For example, CONFIG.
  • Crypto — name determined at bootstrap. For example, CRYPTO.
  • RBAC — name determined at bootstrap. For example, RBAC.
  • Virtual Nodes schemas:
    • vnode_crypto_<holding_id>
    • vnode_uniq_<holding_id>
    • vnode_vault_<holding_id>
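
The following is a minimal sketch of backing up the cluster schemas and the virtual node schemas with pg_dump. It assumes a PostgreSQL database named cordacluster on localhost and the lowercase schema names config, crypto, and rbac; adjust the schema list to match the names used in your deployment:

# Back up the cluster, crypto, and RBAC schemas to a custom-format dump file
pg_dump -h localhost -p 5432 -U postgres -d cordacluster \
  -n config -n crypto -n rbac \
  -F c -f cordacluster-5.0-backup.dump

# Back up all virtual node schemas; pg_dump accepts a wildcard pattern for --schema
pg_dump -h localhost -p 5432 -U postgres -d cordacluster \
  -n 'vnode_*' \
  -F c -f vnodes-5.0-backup.dump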

Test the Migration

Follow the steps in Migrate the Corda Cluster Database and Update the Database Connection Configuration Table on copies of your database backups to ensure that the database migration stages are successful before you upgrade a production instance of Corda.

This reveals any issues with migrating the data before incurring any downtime. It will also indicate the length of downtime required to perform a real upgrade, allowing you to schedule accordingly.
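
For example, a minimal sketch of restoring the backups into a scratch database to test against, assuming the custom-format dump files produced by the pg_dump sketch above:

# Create a throwaway database and restore the 5.0 backups into it
createdb -h localhost -p 5432 -U postgres cordacluster_upgrade_test
pg_restore -h localhost -p 5432 -U postgres -d cordacluster_upgrade_test cordacluster-5.0-backup.dump
pg_restore -h localhost -p 5432 -U postgres -d cordacluster_upgrade_test vnodes-5.0-backup.dump

# Run the migration steps below against this copy by pointing the Corda CLI at it, for example:
#   --jdbc-url=jdbc:postgresql://localhost:5432/cordacluster_upgrade_test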

For information about rolling back the Corda 5.0 to Corda 5.1 upgrade process, see Rolling Back.

Scale Down the Running Corda Worker Instances

You can scale down the workers using any tool of your choice. For example, run the following commands if using kubectl:

kubectl scale --replicas=0 deployment/corda-crypto-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-db-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-flow-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-membership-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-p2p-gateway-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-p2p-link-manager-worker -n <corda_namespace>
kubectl scale --replicas=0 deployment/corda-rest-worker -n <corda_namespace>

If you are scripting these commands, you can wait for the workers to be scaled down using something similar to the following:

while [ "$(kubectl get pods --field-selector=status.phase=Running -n corda | grep worker | wc -l | tr -d ' ')" != 0 ]
do
  sleep 1
done

Migrate the Corda Cluster Database

To migrate the database schemas, do the following:

  1. Generate the required SQL scripts using the spec sub-command of the Corda CLI database command. For example:

    corda-cli.sh database spec -c -l /sql_updates -s="config,rbac,crypto,statemanager" \
    -g="config:config,rbac:rbac,crypto:crypto,statemanager:state_manager" --jdbc-url=<DATABASE-URL> -u postgres
    
    corda-cli.cmd database spec -c -l /sql_updates -s="config,rbac,crypto,statemanager" `
    -g="config:config,rbac:rbac,crypto:crypto,statemanager:state_manager" --jdbc-url=<DATABASE-URL> -u postgres
    
  2. Verify the generated SQL scripts and apply them to the Postgres database. For example:

    psql -h localhost -p 5432 -f ./sql_updates/config.sql -d cordacluster -U postgres
    psql -h localhost -p 5432 -f ./sql_updates/crypto.sql -d cordacluster -U postgres
    psql -h localhost -p 5432 -f ./sql_updates/rbac.sql -d cordacluster -U postgres
    psql -h localhost -p 5432 -f ./sql_updates/statemanager.sql -d cordacluster -U postgres
    
  3. Grant the necessary permissions on the new database tables to the following database users:

    • Cluster database user — configured by the Helm chart property db.cluster.username.value; corda in the example below.
    • RBAC database user — configured by the Helm chart property db.rbac.username.value; rbac_user in the example below.
    • Crypto database user — configured by the Helm chart property db.crypto.username.value; crypto_user in the example below.
    • State manager database user — configured by the Helm chart property db.cluster.username.value; corda in the example below.

    For example:

    psql -h localhost -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA config TO corda" -p 5432 -d cordacluster -U postgres
    psql -h localhost -c "GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA config TO corda" -p 5432 -d cordacluster -U postgres
    psql -h localhost -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA rbac TO rbac_user" -p 5432 -d cordacluster -U postgres
    psql -h localhost -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA crypto TO crypto_user" -p 5432 -d cordacluster -U postgres
    psql -h localhost -c "GRANT USAGE ON SCHEMA state_manager TO corda" -p 5432 -d cordacluster -U postgres
    psql -h localhost -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA state_manager TO corda" -p 5432 -d cordacluster -U postgres
    psql -h localhost -c "GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA state_manager TO corda" -p 5432 -d cordacluster -U postgres
    

Update the Database Connection Configuration Table

Corda 5.0 used only one concurrent connection for each virtual node. As of 5.1, this is configurable in corda.db. As part of the upgrade process, you must remove the 5.0 setting so that the new Corda 5.1 default of a maximum of 10 virtual node connections takes effect.

To remove the 5.0 setting, issue a SQL statement that removes "pool":{"max_size":1}, from the JSON config in each row of the cluster db_connection table. In the following example, CONFIG in CONFIG.db_connection is the name of the cluster schema, while config is a column name that you must specify in lowercase:

psql -h localhost -c "UPDATE CONFIG.db_connection SET config = REPLACE(config,'\"pool\":{\"max_size\":1},','')" -p 5432 -d cordacluster -U postgres

Migrate the Virtual Node Databases

Migrating the virtual node databases requires the short hash holding ID of each virtual node. For more information, see Retrieving Virtual Nodes.

To migrate the virtual node databases, do the following:

  1. Create a file containing the short hash holding IDs of the virtual nodes to migrate, with one ID per line, as in the example below.
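
    For example, a /sql_updates/holdingIds file might look like the following; the hashes shown are placeholders, so use the short hash holding IDs of your own virtual nodes:

    3B9A266F96E2
    58C1A2F4B7D9
    1FA0CE4D2E66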

  2. Generate the required SQL scripts using the platform-migration sub-command of the Corda CLI vnode command. For example, if you save the holding IDs in /sql_updates/holdingIds:

    corda-cli.sh vnode platform-migration --jdbc-url=jdbc:postgresql://host.docker.internal:5432/cordacluster -u postgres -i /sql_updates/holdingIds -o /sql_updates/vnodes.sql
    
    corda-cli.cmd vnode platform-migration --jdbc-url=jdbc:postgresql://host.docker.internal:5432/cordacluster -u postgres -i /sql_updates/holdingIds -o /sql_updates/vnodes.sql
    
  3. Review the generated SQL and apply it as follows:

    psql -h localhost -p 5432 -f ./sql_updates/vnodes.sql -d cordacluster -U postgres
    
  4. Grant the required permissions to each virtual node's users for its three database schemas. Corda creates these users when it creates the schemas, so you must extract their names from the database using the previously created file of holding IDs. R3 recommends that you script this stage, as follows:

    while read HOLDING_ID; do
       # In Corda 5.0 all virtual node schemas and users are created by Corda, so we need to extract their names from the db
    
       # Grab the schema names for this holding Id
       VAULT_SCHEMA=$(psql -h localhost -c "SELECT schema_name FROM information_schema.schemata WHERE schema_name LIKE 'vnode_vault%'" -p 5432 -d cordacluster -U postgres | tr -d ' ' | grep -i $HOLDING_ID | grep vault )
       CRYPTO_SCHEMA=$(psql -h localhost -c "SELECT schema_name FROM information_schema.schemata WHERE schema_name LIKE 'vnode_crypto%'" -p 5432 -d cordacluster -U postgres | tr -d ' ' | grep -i $HOLDING_ID | grep crypto )
       UNIQ_SCHEMA=$(psql -h localhost -c "SELECT schema_name FROM information_schema.schemata WHERE schema_name LIKE 'vnode_uniq%'" -p 5432 -d cordacluster -U postgres | tr -d ' ' | grep -i $HOLDING_ID | grep uniq)
    
       # Get the vault users associated with this holding id
       VAULT_DDL_USER=$(psql -h localhost -c "select usename from pg_catalog.pg_user" -p 5432 -d cordacluster -U postgres | grep -i $HOLDING_ID | tr -d ' ' | grep vault | grep ddl)
       VAULT_DML_USER=$(psql -h localhost -c "select usename from pg_catalog.pg_user" -p 5432 -d cordacluster -U postgres | grep -i $HOLDING_ID | tr -d ' ' | grep vault | grep dml)
    
       # Get the crypto users associated with this holding id
       CRYPTO_DDL_USER=$(psql -h localhost -c "select usename from pg_catalog.pg_user" -p 5432 -d cordacluster -U postgres | grep -i $HOLDING_ID | tr -d ' ' | grep crypto | grep ddl)
       CRYPTO_DML_USER=$(psql -h localhost -c "select usename from pg_catalog.pg_user" -p 5432 -d cordacluster -U postgres | grep -i $HOLDING_ID | tr -d ' ' | grep crypto | grep dml)
    
       # Get the uniqueness users associated with this holding id
       UNIQ_DDL_USER=$(psql -h localhost -c "select usename from pg_catalog.pg_user" -p 5432 -d cordacluster -U postgres | grep -i $HOLDING_ID | tr -d ' ' | grep uniq | grep ddl)
       UNIQ_DML_USER=$(psql -h localhost -c "select usename from pg_catalog.pg_user" -p 5432 -d cordacluster -U postgres | grep -i $HOLDING_ID | tr -d ' ' | grep uniq | grep dml)
    
       # Update privileges for any new tables in the crypto schema with the crypto users
       psql -h localhost -c "GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA $CRYPTO_SCHEMA TO $CRYPTO_DDL_USER" -p 5432 -d cordacluster -U postgres
       psql -h localhost -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA $CRYPTO_SCHEMA TO $CRYPTO_DML_USER" -p 5432 -d cordacluster -U postgres
    
       # Update privileges for any new tables in the vault schema with the vault users
       psql -h localhost -c "GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA $VAULT_SCHEMA TO $VAULT_DDL_USER" -p 5432 -d cordacluster -U postgres
       psql -h localhost -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA $VAULT_SCHEMA TO $VAULT_DML_USER" -p 5432 -d cordacluster -U postgres
    
       # Update privileges for any new tables in the uniqueness schema with the uniqueness users
       psql -h localhost -c "GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA $UNIQ_SCHEMA TO $UNIQ_DDL_USER" -p 5432 -d cordacluster -U postgres
       psql -h localhost -c "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA $UNIQ_SCHEMA TO $UNIQ_DML_USER" -p 5432 -d cordacluster -U postgres
    done <./sql_updates/holdingIds
    

Update Kafka Topics

Corda 5.1 contains new Kafka topics and revised Kafka ACLs. You can apply these changes in one of the following ways:

Use the connect and create sub-commands of the Corda CLI topic command to connect to the Kafka broker and create any required topics. For example:

corda-cli.sh topic -b=prereqs-kafka:9092 -k=/kafka_config/props.txt create connect
corda-cli.cmd topic -b=prereqs-kafka:9092 -k=/kafka_config/props.txt create connect
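
The file passed with -k contains standard Kafka client connection properties. As an illustrative sketch, assuming a SASL-secured broker, props.txt might contain something like the following (the values are placeholders; use the settings for your own Kafka deployment):

security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<kafka_admin_user>" password="<kafka_admin_password>";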

Alternatively, the preview and create sub-commands of the Corda CLI topic command can generate a preview of the required Kafka configuration in YAML. You can save, and if required modify, this content before using the Corda CLI to execute it, as follows:

  1. Use the preview sub-command of the Corda CLI topic create command to generate a preview of the configuration. For example:

    corda-cli.sh topic create -u crypto=CRYPTO_USER -u db=DB_USER -u flow=FLOW_USER -u membership=MEMBERSHIP_USER \
    -u p2pGateway=P2P_GATEWAY_USER -u p2pLinkManager=P2P_LINK_MANAGER_USER -u rest=REST_USER \
    -u uniqueness=UNIQUENESS_WORKER -u flowMapper=FLOW_MAPPER_USER -u persistence=PERSISTENCE_USER \
    -u verification=VERIFICATION_WORKER preview
    
    corda-cli.cmd topic create -u crypto=CRYPTO_USER -u db=DB_USER -u flow=FLOW_USER -u membership=MEMBERSHIP_USER `
    -u p2pGateway=P2P_GATEWAY_USER -u p2pLinkManager=P2P_LINK_MANAGER_USER -u rest=REST_USER `
    -u uniqueness=UNIQUENESS_WORKER -u flowMapper=FLOW_MAPPER_USER -u persistence=PERSISTENCE_USER `
    -u verification=VERIFICATION_WORKER preview
    
  2. Review the output and make any necessary changes.

    The YAML generated by the Corda CLI represents the required state of Kafka topics for Corda 5.1. The Corda CLI does not connect to any running Kafka instance and so the Kafka instance administrator must use the preview to decide the required changes for your cluster.

Launch the Corda 5.1 Workers

To complete the upgrade to 5.1 and launch the Corda 5.1 workers, upgrade the Helm chart:

helm upgrade corda -n corda oci://corda-os-docker.software.r3.com/helm-charts/release/os/5.1/corda --version 5.1.0 -f values.yaml

For more information about the values in the deployment YAML file, see Configure the Deployment.
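
After the upgrade completes, you can confirm that the workers have restarted; a minimal sketch using kubectl, assuming the same namespace as earlier:

# List the Corda worker pods and confirm that they reach the Running state
kubectl get pods -n <corda_namespace> | grep worker

# Optionally, wait for a specific worker deployment to finish rolling out, for example the REST worker
kubectl rollout status deployment/corda-rest-worker -n <corda_namespace> --timeout=5m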
