Guides → Upgrade from 4.9.x to 5.0
Prior to upgrade, please review the Release Model. The Release Model describes the various release statuses: Preview, Early Review, and Generally Available. Refer to the Release Model to identify the release version and status that is most suitable for upgrading your Sandbox, Developer, User Acceptance Testing (UAT), or Production environment.
Upgrade from Incorta 4.9.x to 5.0
This guide details how to upgrade a standalone Incorta cluster. Upgrading your Incorta cluster to Release 5.0 requires the following team resources:
- a System Administrator with root access to the host or hosts running Incorta Nodes, the host running the Cluster Management Console (CMC), and the host or hosts running Apache Spark
- a CMC Administrator
- a Database Administrator
- a SuperUser that can access each tenant in the Incorta environment
- an Incorta Developer to resolve identified issues with formula expressions, schema alias, joins between tables, and dependencies between objects such as dashboards and business schemas
It also requires time as these general timelines for various procedures and processes indicate:
Stage | Estimated Time |
---|---|
Prepare for Upgrade Readiness | 15 minutes to 3 hours |
Achieve Upgrade Readiness | 2 hours to 3 days |
Stop the Incorta cluster | 5 minutes to 30 minutes |
Create backups | 15 minutes to 3 hours |
Upgrade the Incorta cluster | 15 minutes to 3 hours |
Start the Incorta cluster | 5 minutes to 5 hours |
Upgrade the Incorta Metadata database | 5 minutes to 3 hours |
Run Critical Scripts | 15 minutes to 8 hours |
Verify the successful upgrade | 15 minutes to 1 day |
Prepare for Upgrade Readiness
Preparing for Upgrade Readiness requires:
- a System Administrator with root access to the host or hosts running Incorta Nodes as well as the host running the Cluster Management Console (CMC)
- a CMC Administrator
- a Database Administrator
The estimated time to complete the following is from 15 minutes to 3 hours:
- Pause all scheduled jobs
- Export all tenants
- Add a Create View database grant
Pause all scheduled jobs in the CMC
Enable this setting to pause active scheduled schema loads, dashboards, and data alerts. This is helpful when importing or exporting an existing tenant. Here are the steps to enable this option as default tenant configuration:
- In the Navigation bar, select Clusters.
- In the cluster list, select a Cluster name.
- In the canvas tabs, select Cluster Configurations.
- In the panel tabs, select Default Tenant Configurations.
- In the left pane, select Data Loading.
- Enable the Pause Scheduled Jobs setting.
- Select Save.
Export of all tenants with the Tenant Management Tool
A System Administrator with root access to the host running the Cluster Management Console (CMC) is able to run the Tenant Management Tool (TMT). Here are the steps:
- Secure shell in to the CMC host.
- As the incorta user, navigate to the installation path of the TMT. The default installation path for the TMT is:
<CMC_INSTALLATION_PATH>/IncortaAnalytics/cmc/tmt
- Export all tenants:
./exportAlltenants.sh -c <CLUSTER_NAME> -f False /tmp/<TENANT_EXPORT>.zip
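The export command above can be wrapped in a small script that date-stamps the export file. This is a hypothetical sketch: the cluster name and the /tmp path are example values, not product defaults.

```shell
# Hypothetical wrapper around the TMT export: adds a date-stamped file name.
# CLUSTER_NAME and the /tmp path are example values -- substitute your own.
CLUSTER_NAME=myCluster
EXPORT_FILE="/tmp/tenants_$(date +%Y%m%d).zip"
echo "Exporting all tenants of ${CLUSTER_NAME} to ${EXPORT_FILE}"
# Real call (run from the TMT directory):
# ./exportAlltenants.sh -c "${CLUSTER_NAME}" -f False "${EXPORT_FILE}"
```

A date-stamped name keeps successive exports from overwriting each other during the iterative readiness process.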
Add a Create View database grant
A Database Administrator with root access to the MySQL or Oracle database server that runs the Incorta Metadata database is able to add the Create View database grant. The estimated time to complete the following is about 5 minutes.
MySQL
Here are the steps for MySQL:
- Sign in to the MySQL Incorta metadata database as the root user.
mysql -h0 -uroot -proot_password incorta_metadata
-h = host, where 0 is a shorthand reference for localhost
-u = user, where root is the user
-p = password, where the password is root_password
incorta_metadata is the database
- Verify the incorta database user for the incorta_metadata database.
SELECT User, Host FROM mysql.user WHERE user = 'incorta';
- Verify the current grants for all users.
SHOW GRANTS for 'incorta'@'localhost';
SHOW GRANTS for 'incorta'@'127.0.0.1';
SHOW GRANTS for 'incorta'@'192.168.128.101';
- If needed, add the CREATE VIEW grant to all incorta users.
GRANT CREATE VIEW ON `incorta_metadata`.* TO 'incorta'@'localhost';
GRANT CREATE VIEW ON `incorta_metadata`.* TO 'incorta'@'127.0.0.1';
GRANT CREATE VIEW ON `incorta_metadata`.* TO 'incorta'@'192.168.128.101';
Oracle
To add grants for a user in an Oracle database, please refer to the Oracle Database SQL Language Reference.
Achieve Upgrade Readiness
Please review Concepts → Upgrade Readiness. Achieving Upgrade Readiness requires:
- a System Administrator with root access to the host or hosts running Incorta Nodes as well as the host running the Cluster Management Console (CMC)
- a CMC Administrator
- a SuperUser that can access each tenant in the Incorta environment
- an Incorta Developer to resolve identified issues with formula expressions
The estimated time to complete the following is from 2 hours to 3 days:
- Resolve alias issues with the Alias Sync Tool
- Resolve issues with formula expressions that the Formula Validation Tool identifies
- Resolve Severity-1 issues that the Inspector Tool identifies
Resolve alias issues with the Alias Sync Tool
Here are the resources required to run the Alias Sync Tool:
- A System Administrator with root access to the host running an Incorta Node is able to run the Alias Sync Tool.
To resolve issues with Alias tables, you must download the alias_sync.py file, secure copy the file to the IncortaNode/bin directory, and then run the script for each tenant in your cluster.
To learn more, please review Tools → Alias Sync Tool.
Resolve issues with formula expressions that the Formula Validation Tool identifies
Here are the resources required to run the Formula Validation Tool and identify outstanding issues with formula expressions:
- A CMC Administrator to export tenants in a cluster or a System Administrator who can export tenants using the Tenant Management Tool (TMT).
- A System Administrator with root access to the host running an Incorta Node will need to run the Formula Validation Tool.
- A SuperUser that can access each tenant in the Incorta environment.
- An Incorta Developer to resolve the identified issues with formula expressions.
For a given tenant export, the Formula Validation Tool checks the formula syntax in dashboards, business schemas, and schemas. For example, the tool identifies issues with formula columns and runtime security filters in a schema table. One such issue is aggregation formula expressions that are missing commas between input values of type integer, long, and double. In Release 5.0, commas must separate input parameter values for built-in functions. In previous releases, the Formula Builder accepted spaces between input values. The Formula Validation Tool identifies this and other issues with formula syntax.
Resolving issues with formula expressions is an iterative process. The Formula Validation Tool requires a tenant export file. You must resolve all issues in the failedFormulas.tsv file. In many cases, resolving one instance of an issue resolves issues in dependent objects. After resolving issues with a formula expression, you must repeat the process: export the tenant again, run the Formula Validation Tool on the new tenant export, and resolve any outstanding issues. This iterative approach may take several hours or days.
To learn more, please review Tools → Formula Validation Tool.
Resolve Severity-1 issues that the Inspector Tool identifies
Here are the resources required to run the Inspector Tool:
- For a given tenant, a CMC Administrator enables the Inspector Tool Scheduler and schedules an Inspector Tools job. A CMC Administrator also downloads the Inspector Tool related schema, business schema, and dashboards files for all tenants.
- A SuperUser that can access each tenant in the Incorta cluster.
- An Incorta Developer to resolve the identified issues in the 1- Validation UseCases dashboard.
For a given tenant, the Inspector Tool checks the lineage references of Incorta metadata objects including tables, schemas, business schemas, business schema views, dashboards, and session variables. It also checks for inconsistencies and validation errors in joins, tables, views, formulas, and dashboards.
Prior to upgrading Incorta, you must enable and configure the Inspector Tool for all tenants. In addition, you must resolve all Severity-1 issues.
To learn more, please review Tools → Inspector Tool.
Stop the Incorta cluster
Here are the resources required to stop all the services in the Incorta cluster:
- a System Administrator with root access to the host or hosts running Incorta Nodes, the host running the Cluster Management Console (CMC), and the host or hosts running Apache Spark
The estimated time to stop the Incorta cluster and all related services is from 5 minutes to 30 minutes. Here are the steps involved in stopping the Incorta cluster:
- Stop the Notebook Add-On Service
- Stop the Analytics Service
- Stop the Loader Service
- Stop Apache Spark
- Stop the CMC
- Stop the Node Agent
- Stop the Export Server
- Stop Apache Zookeeper
Stop the Notebook Add-on Service
Your Incorta cluster may not have enabled and configured the Notebook Add-on Service for a given tenant. You enable the Notebook Add-on as an Incorta Labs feature.
In order to stop the Notebook Add-on Service, you need to know the name of the service. You can read the services.index file to find out the name of the Notebook Add-on Service running on an Incorta Node that also runs the Analytics Service.
cat <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/notebooks/services/services.index
Once you know the name of the Notebook Add-on Service, then execute the following:
NOTEBOOK_ADD_ON=<SERVICE_NAME>
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/stopNotebook.sh ${NOTEBOOK_ADD_ON}
Stop the Analytics Service
In order to stop the Analytics Service, you need to know the name of the service. You can read the services.index
file to find out the name of the services running on an Incorta Node.
cat <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/services/services.index
Once you know the name of the Analytics Service, you can then execute the following:
ANALYTICS_SERVICE=<SERVICE_NAME>
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/stopService.sh ${ANALYTICS_SERVICE}
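Reading services.index by hand can be scripted. This sketch assumes a properties-style "uuid=serviceName" layout and uses a stand-in file so it runs as-is; inspect your own services.index before relying on the parsing.

```shell
# Sketch: extract service names from a services.index file. Assumes a
# properties-style "uuid=serviceName" layout -- verify against your own file.
SERVICES_INDEX=$(mktemp)                       # stand-in for the real file
printf '%s\n' \
  'a1b2c3d4=analyticsService' \
  'e5f6a7b8=loaderService' > "${SERVICES_INDEX}"
# List only the service names (the part after '=').
SERVICE_NAMES=$(cut -d'=' -f2 "${SERVICES_INDEX}")
echo "${SERVICE_NAMES}"
```

Each extracted name can then be passed to stopService.sh or startService.sh as shown in the surrounding steps.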
Stop the Loader Service
In order to stop the Loader Service, you need to know the name of the service. You can read the services.index
file to find out the name of the services running on an Incorta Node.
cat <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/services/services.index
Once you know the name of the Loader Service, you can then execute the following:
LOADER_SERVICE=<SERVICE_NAME>
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/stopService.sh ${LOADER_SERVICE}
Stop Apache Spark
You can stop Apache Spark using the stopSpark.sh
shell script:
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/stopSpark.sh
Stop the CMC
The default directory for the CMC is ~/IncortaAnalytics/cmc
. Stop the CMC with the stop-cmc.sh
shell script:
<CMC_INSTALLATION_PATH>/cmc/stop-cmc.sh
Stop the Node Agent
For each Incorta Node, run the following:
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/nodeAgent/agent.sh stop
Stop the Export Server
To stop the Export Server, run the following:
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/stop-exportserver.sh
Stop Apache Zookeeper
To stop Apache Zookeeper, run the following:
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/stop-zookeeper.sh
Create backups
Here are the resources required to create the various backups:
- A Database Administrator with root access to the MySQL or Oracle database server that runs the Incorta Metadata database.
- A System Administrator with root access to the host or hosts running Incorta Nodes, the Cluster Management Console (CMC), and Apache Spark.
The estimated time to complete the following is from 15 minutes to 3 hours:
- Create a backup of the Incorta Metadata database
- Create a backup of the Incorta installation directory
- Create a backup of the Apache Spark configuration files
Create a backup of the Incorta Metadata database
Here are the resources required to create a backup of the Incorta Metadata database:
- A Database Administrator with root access to the MySQL or Oracle database server that runs the Incorta Metadata database.
MySQL
To create a backup of the Incorta Metadata database, use the mysqldump command-line utility:
mysqldump -u [user] -p [database_name] > [filename].sql
Example
Here is an example with the MySQL user root and the password incorta_root:
mysqldump -uroot -pincorta_root incorta_metadata > /tmp/incorta_metadata.sql
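Before proceeding, it is worth confirming that the dump actually captured data. This sketch uses a stand-in file so the check is runnable as a demo; in practice, point DUMP_FILE at the real mysqldump output.

```shell
# Sanity-check sketch: confirm the dump file exists and is non-empty.
# The stand-in file makes this runnable as a demo; point DUMP_FILE at the
# real mysqldump output (e.g. /tmp/incorta_metadata.sql) in practice.
DUMP_FILE=$(mktemp)
echo '-- MySQL dump (stand-in content)' > "${DUMP_FILE}"
if [ -s "${DUMP_FILE}" ]; then
  BACKUP_STATUS="backup looks non-empty"
else
  BACKUP_STATUS="backup is empty -- investigate before upgrading"
fi
echo "${BACKUP_STATUS}"
```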
Oracle
To create a backup of the incorta metadata database, please refer to Oracle documentation.
Create a backup of the Incorta installation directory
To create a backup of the Incorta installation directory, use the following command:
zip -r IncortaAnalytics_Backup.zip <INCORTA_NODE_INSTALLATION_PATH>
Create a backup of the Apache Spark configuration files
Create a backup of the following spark configuration files present in the $SPARK_HOME/conf
directory:
spark-defaults.conf
spark-env.sh
SPARK_HOME=<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/spark
cd $SPARK_HOME/conf
zip -r Spark_Conf_Backup.zip spark-defaults.conf spark-env.sh
Upgrade the Incorta cluster
Here are the resources required to upgrade the Incorta cluster:
- a System Administrator with root access to the host or hosts running Incorta Nodes, the host running the Cluster Management Console (CMC), and the host or hosts running Apache Spark
To begin, run the incorta-installer.jar
file from the shell:
java -jar incorta-installer.jar -i console
In the Incorta Installer console, enter these values for a standalone (Typical) upgrade:
Welcome : Enter
License Agreement/Copyright : Enter
License Agreement/Copyright : Y
Installation Type : 2- Upgrade
Installation Set : 1- Typical
Choose Installation Folder : Enter- Default
Installation Status : Enter
Start CMC : 3- Finish without starting CMC
Kill unwanted processes
After upgrading, you will want to kill any processes related to Incorta as you will start Incorta manually. To kill any unwanted processes, run the following commands:
sudo kill -9 $(ps -aux | grep '[n]odeAgent.jar' | awk '{print $2}')
sudo kill -9 $(ps -aux | grep '[d]erby' | awk '{print $2}')
sudo kill -9 $(ps -aux | grep '[e]xportServer' | awk '{print $2}')
sudo kill -9 $(ps -aux | grep '[z]ookeeper' | awk '{print $2}')
sudo kill -9 $(ps -aux | grep '[c]mc' | awk '{print $2}')
sudo kill -9 $(ps -aux | grep '[s]park' | awk '{print $2}')
sudo kill -9 $(ps -aux | grep '[h]adoop' | awk '{print $2}')
sudo kill -9 $(ps -aux | grep '[p]ostgres' | awk '{print $2}')
sudo kill -9 $(ps -aux | grep '[I]ncortaNode' | awk '{print $2}')
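After running the kill commands, you can verify that nothing Incorta-related survived. This sketch mirrors the grep patterns above using pgrep -f, which matches full command lines; on a shared host some patterns (such as postgres or spark) may match unrelated processes, so review any output before acting on it.

```shell
# Sketch: confirm no Incorta-related processes survived the kill commands.
# pgrep -f matches full command lines; the pattern list mirrors the greps above.
LEFTOVER=0
for p in nodeAgent.jar derby exportServer zookeeper cmc spark hadoop postgres IncortaNode; do
  if pgrep -f "$p" > /dev/null 2>&1; then
    echo "still running: $p"
    LEFTOVER=1
  fi
done
[ "${LEFTOVER}" -eq 0 ] && echo "no leftover Incorta processes" || true
```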
Upgrade an external Apache Spark environment
If the Incorta cluster is using an external Apache Spark environment, you must also upgrade the Apache Spark environment by following these steps:
- Zip the bundled spark directory under IncortaNode:
zip -r Incorta-Bundled-Spark.zip <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/spark
- Zip the bundled hadoop directory under IncortaNode:
zip -r Incorta-Bundled-Hadoop.zip <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/hadoop
- Copy Incorta-Bundled-Spark.zip and Incorta-Bundled-Hadoop.zip to the external Apache Spark environment.
- In the external Apache Spark environment, remove the existing spark directory.
- Unzip Incorta-Bundled-Spark.zip to recreate the Spark environment.
- Unzip Incorta-Bundled-Hadoop.zip to recreate the Hadoop environment.
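The copy-over steps can be scripted. This is a dry-run sketch: the external host name and target directory are assumptions, and the commands are echoed rather than executed so you can review them first.

```shell
# Dry-run sketch of the copy-over sequence for an external Spark host.
# EXTERNAL_HOST and SPARK_PARENT are assumptions; the commands are echoed
# rather than executed so you can review them first.
EXTERNAL_HOST=spark-host.example.com
SPARK_PARENT=/opt/spark-env        # directory containing the external 'spark' dir
echo "scp Incorta-Bundled-Spark.zip Incorta-Bundled-Hadoop.zip ${EXTERNAL_HOST}:${SPARK_PARENT}/"
echo "ssh ${EXTERNAL_HOST} 'cd ${SPARK_PARENT} && rm -rf spark && unzip Incorta-Bundled-Spark.zip && unzip Incorta-Bundled-Hadoop.zip'"
```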
Review the Upgrade logs
Check to see if there are any critical errors with the upgrade in the following log files and directories:
- Installer log
cat /tmp/DebuggingLog.log
- Incorta Node upgrade logs
cd <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/logs/
- CMC logs
ls -l <CMC_INSTALLATION_PATH>/cmc/logs/
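A quick scan for failure keywords across the log files narrows the review. This sketch creates a tiny stand-in log so it is runnable as-is; in practice, point LOG_DIR at the real IncortaNode/logs directory.

```shell
# Sketch: scan upgrade logs for obvious failures. The stand-in log makes
# this runnable as a demo; point LOG_DIR at the real IncortaNode/logs dir.
LOG_DIR=$(mktemp -d)
printf '%s\n' \
  '2021-01-01 INFO upgrade step ok' \
  '2021-01-01 ERROR failed to migrate index' > "${LOG_DIR}/upgrade.log"
MATCHES=$(grep -riE 'error|exception|fatal' "${LOG_DIR}" || true)
if [ -n "${MATCHES}" ]; then echo "${MATCHES}"; else echo "no critical errors found"; fi
```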
Remove unused SSO jars
If you are using Single-Sign On (SSO) authentication for your Incorta cluster, a System Administrator with root access to the host or hosts running the Incorta Nodes must remove one of the two related SSO JAR files. The two files are:
incorta.onelogin.valv-1.0.jar
incorta-sso.jar
If your Single-Sign On provider is Okta, you need to delete the incorta.onelogin.valv-1.0.jar
file. For all other SAML-compliant providers, such as OneLogin or Azure Active Directory (Azure AD), you need to delete the incorta-sso.jar
file.
Here are the steps to remove the unused SSO JAR:
- Determine the universally unique identifier (UUID) of the Analytics Service.
INCORTA_NODE_INSTALLATION_PATH=<INCORTA_NODE_INSTALLATION_PATH>
cat ${INCORTA_NODE_INSTALLATION_PATH}/IncortaNode/services/services.index
- Once you know the UUID of the Analytics Service, grep the server.xml file for Okta.
ANALYTICS_SERVICE_UUID=<UUID>
cat ${INCORTA_NODE_INSTALLATION_PATH}/IncortaNode/services/${ANALYTICS_SERVICE_UUID}/conf/server.xml | grep 'Okta'
- If grep finds a match for Okta in the server.xml file, follow these steps:
  - Confirm that the <Valve /> element is not commented out, as in <!-- <Valve /> -->, in the server.xml file.
  - Delete the incorta.onelogin.valv-1.0.jar file:
sudo rm -f ${INCORTA_NODE_INSTALLATION_PATH}/IncortaNode/runtime/lib/incorta.onelogin.valv-1.0.jar
- If grep does not find a match for Okta, follow these steps:
  - Confirm that the <Valve /> element is not commented out, as in <!-- <Valve /> -->, in the server.xml file.
  - Delete the incorta-sso.jar file:
sudo rm -f ${INCORTA_NODE_INSTALLATION_PATH}/IncortaNode/runtime/lib/incorta-sso.jar
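The grep-then-delete decision above can be captured in a short script. This is a sketch: the stand-in server.xml content (including its Valve className) is invented for the demo, and the actual deletion is left commented out; use your real conf/server.xml in practice.

```shell
# Sketch: pick the SSO jar to delete based on whether server.xml mentions
# Okta, mirroring the manual steps above. The stand-in server.xml content
# (and its Valve className) is invented for this demo.
SERVER_XML=$(mktemp)
echo '<Valve className="example.OktaValve"/>' > "${SERVER_XML}"
if grep -q 'Okta' "${SERVER_XML}"; then
  JAR_TO_DELETE=incorta.onelogin.valv-1.0.jar      # Okta keeps incorta-sso.jar
else
  JAR_TO_DELETE=incorta-sso.jar                    # other SAML providers keep the valve jar
fi
echo "delete ${JAR_TO_DELETE}"
# sudo rm -f ${INCORTA_NODE_INSTALLATION_PATH}/IncortaNode/runtime/lib/${JAR_TO_DELETE}
```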
Start the Incorta cluster
Here are the resources required to start all the services in the Incorta cluster:
- a System Administrator with root access to the host or hosts running Incorta Nodes, the host running the Cluster Management Console (CMC), and the host or hosts running Apache Spark
The estimated time to start the Incorta cluster and all related services is from 5 minutes to 5 hours. Depending on schema data size and various tenant configurations, it may take the Incorta Analytics Service several hours to load schemas into memory.
After performing an upgrade, if you use a Single Sign-On (SSO) provider to secure access to the Incorta Direct Data Platform™, you must manually remove the unused SSO JAR file from the <INCORTA_NODE_INSTALLATION_PATH>/runtime/lib directory before starting the Analytics Service, depending upon your SSO provider.
Here are the steps to start the Incorta cluster:
- Start Apache Zookeeper
- Start Apache Spark
- Start the Export Server
- Start the Node Agent
- Start the Loader Service
- Start the Analytics Service
- Start the Notebook Add-On Service
- Start the CMC
Start Apache Zookeeper
To start Apache Zookeeper, run the following:
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/start-zookeeper.sh
Start Apache Spark
You can start Apache Spark using the startSpark.sh
shell script:
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/startSpark.sh
Start the Export Server
To start the Export Server, run the following:
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/start-exportserver.sh
Start the Node Agent
For each Incorta Node, run the following to start the node agent:
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/nodeAgent/agent.sh start
Start the Loader Service
In order to start the Loader Service, you need to know the name of the service. You can read the services.index
file to find out the name of the services running on an Incorta Node.
cat <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/services/services.index
Once you know the name of the Loader Service, you can then execute the following:
LOADER_SERVICE=<SERVICE_NAME>
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/startService.sh ${LOADER_SERVICE}
Start the Analytics Service
In order to start the Analytics Service, you need to know the name of the service. You can read the services.index
file to find out the name of the services running on an Incorta Node.
cat <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/services/services.index
Once you know the name of the Analytics Service, you can then execute the following:
ANALYTICS_SERVICE=<SERVICE_NAME>
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/startService.sh ${ANALYTICS_SERVICE}
Start the Notebook Add-on Service
Your Incorta cluster may not have enabled and configured the Notebook Add-on Service for a given tenant. You enable the Notebook Add-on as an Incorta Labs feature.
In order to start the Notebook Add-on Service, you need to know the name of the service. You can read the services.index file to find out the name of the Notebook Add-on Service running on an Incorta Node that also runs the Analytics Service.
cat <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/notebooks/services/services.index
Once you know the name of the Notebook Add-on Service, then execute the following:
NOTEBOOK_ADD_ON=<SERVICE_NAME>
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/startNotebook.sh ${NOTEBOOK_ADD_ON}
Start the CMC
The default directory for the CMC is ~/IncortaAnalytics/cmc
. Start the CMC with the start-cmc.sh
shell script:
<CMC_INSTALLATION_PATH>/cmc/start-cmc.sh
Upgrade the Incorta metadata database
A CMC Administrator is able to upgrade the Incorta metadata database. Depending on the number of tenants and schemas in your Incorta cluster, the process can take between 5 minutes and 3 hours.
To sign in to the Cluster Management Console (CMC), visit your CMC host at one of the following:
http://<Public_IP>:6060/cmc
http://<Public_DNS>:6060/cmc
http://<Private_IP>:6060/cmc
http://<Private_DNS>:6060/cmc
The default port for the CMC is 6060. Sign in to the CMC using your CMC administrator username and password.
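Before signing in, you can confirm that the CMC port answers. This sketch uses a placeholder host and echoes the curl call as a dry run rather than executing it.

```shell
# Sketch: verify the CMC port answers before signing in. The host is a
# placeholder; the curl call is echoed as a dry run.
CMC_HOST=cmc.example.com
CMC_URL="http://${CMC_HOST}:6060/cmc"
echo "curl -sS -o /dev/null -w '%{http_code}\n' ${CMC_URL}"
```

An HTTP 200 (or a redirect to the sign-in page) indicates the CMC started successfully.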
To upgrade the Cluster Metadata database, follow the steps:
- In the Navigation bar, select Clusters.
- For each cluster name in the Cluster list, in the Actions column, select Upgrade Cluster Metadata.
A dialog prompts you to restart the Incorta services. In the dialog, select OK.
Run Critical Scripts
There are several scripts that need to be run when upgrading from Release 4.9.0 to Release 5.0.x. These scripts are:
- PK Migration Tool
- Hash Migration Script
- Only for WAF: Run the Alias Sync for WAF Script
A System Administrator with root access to the host or hosts running Incorta Nodes as well as the host running the Cluster Management Console (CMC) can run these scripts.
Run the PK Migration Tool
The PK Migration Tool is a shell script that recreates the primary key index files for schema tables. You can recreate the index files for all schema tables in all tenants or you can specify which tenants and schema tables that you want to recreate the index file for.
It is only necessary to recreate the primary key index files for tables that contain a key. The PK Migration Tool will skip processing tables without a defined primary key. The PK Migration Tool provides a log report that details the successful migration of tenant schema tables, a list of skipped tenant schema tables, and a list of tenant schema tables where the migration failed. The log file is available in the migration
folder.
The tool requires the following details:
- Incorta Metadata Database JDBC connection string
- Incorta Metadata Database username and password
- Maximum number of worker threads
- Maximum size of Off-Heap memory in Gigabytes
- All tenants, a list of tenants, or a specific tenant
- All Tables or a regular expression that describes a list of schema tables
The PK Migration Tool is available in the installation path of an Incorta Node:
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/migratePKIndex.sh
Here are the steps to execute migratePKIndex.sh:
- Switch to the user that runs the Incorta process
sudo su incorta
- Change the directory to that of the Incorta Node installation directory
cd <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/
- Execute the migratePKIndex.sh script:
./migratePKIndex.sh
- Enter the JDBC string for the Incorta Metadata Database.
- Enter the Username for the Incorta Metadata Database.
- Enter the Password for the Incorta Metadata Database.
- Enter the maximum number of worker threads.
- Enter the maximum Off-Heap memory in Gigabytes.
Specify the Tenant(s)
The tool outputs a list of all tenants in the cluster. You can migrate a single tenant, a list of tenants, or all tenants and all physical schema tables.
- To migrate ALL tenants and ALL schema tables, enter */*.
- To migrate one or more specific tenants, enter the name of one or more tenants. To specify a list of tenants, use a space-separated or comma-separated string. Here are some examples:
Accounting
Accounting Finance CRM
Accounting, Finance, CRM
Specify the Schema(s)
For each tenant, the tool outputs all the schemas. You can migrate all schemas, one specific table, or a regular expression that describes a group of similarly named tables.
- To migrate ALL schemas, enter */*.
- To migrate one or more specific schemas, enter the name of one or more schemas. To specify a list of schemas, use a space-separated or comma-separated string. Here are some examples:
sch_DATE
sch_DATE sch_CUSTOMERS sch_SALES
sch_DATE, sch_CUSTOMERS, sch_SALES
Specify the Table(s)
The tool does not output a list of tables; you must specify a table name or a regular expression that matches one or more table names.
- To migrate ALL schema tables, just press enter.
- To migrate one or more specific tables, enter a regular expression such as .*\.EMP.*
Log Folder
The PK Migration Tool generates a log file. The directory for the log file is ~/IncortaAnalytics/IncortaNode/migration. Log files are in the format incorta-migration.YYYY-MM-DD.log.
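Because the log files are date-stamped, the most recent one sorts last. This sketch follows the naming convention described above, with stand-in files so it is runnable as a demo.

```shell
# Sketch: find the most recent PK Migration log. Directory and file names
# follow the convention above; stand-in files make this runnable as a demo.
MIGRATION_DIR=$(mktemp -d)   # stand-in for ~/IncortaAnalytics/IncortaNode/migration
touch "${MIGRATION_DIR}/incorta-migration.2021-03-01.log"
touch "${MIGRATION_DIR}/incorta-migration.2021-03-02.log"
# ISO dates sort lexicographically, so the last entry is the newest.
LATEST_LOG=$(ls "${MIGRATION_DIR}"/incorta-migration.*.log | sort | tail -1)
echo "latest log: ${LATEST_LOG}"
```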
Run the Hash Migration Script
To address issues with multi-source tables with differing incremental time logic, you must run the time_log_update.py script after upgrading from 4.9.0 to 5.0.x.
For a given tenant, the script identifies multi-source tables with incremental logic, and recalculates a new hash.
time_log_update.py requires the following:
- Analytics Host
- Tenant Name
- Login Username
- Password
- Tenant Absolute Path
Optionally, you can specify a backup path for old hashes:
./time_log_update_backup
In order to successfully run time_log_update.py, the Analytics Service must be in a Running state.
Here are the required steps:
- Download time_log_update.py from Google Drive.
- Using Secure Copy (SCP) or similar, upload time_log_update.py to the Incorta Node running the Analytics Service.
- Using Secure Shell (SSH), connect to the Incorta Node running the Analytics Service.
- Change the ownership of the file to the incorta Linux user, or to whichever user runs the Incorta Analytics Service:
sudo chown incorta:incorta time_log_update.py
- Move the file to the ~/IncortaAnalytics/IncortaNode/bin directory:
mv time_log_update.py ~/IncortaAnalytics/IncortaNode/bin/time_log_update.py
- Change the directory to ~/IncortaAnalytics/IncortaNode/bin:
cd ~/IncortaAnalytics/IncortaNode/bin/
- Execute the script replacing the shell variables with the required values as follows:
INCORTA_ANALYTICS_HOST_URL=http://127.0.0.1:8080/incorta/
TENANT_NAME=demo
USERNAME=admin
PASSWORD=password123
TENANT_DIRECTORY=<INCORTA_NODE_INSTALLATION_PATH>/Tenants/demo/
python time_log_update.py ${INCORTA_ANALYTICS_HOST_URL} ${TENANT_NAME} ${USERNAME} ${PASSWORD} ${TENANT_DIRECTORY}
- Review the console output:
Updating time log file for schema: SALES
backing up the time.log file
----------------
For all tenants in your Incorta cluster, run the time_log_update.py script.
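Running the script for every tenant can be expressed as a loop. This is a sketch: the tenant list, credentials, and tenant directory layout are placeholders, and the real invocation is echoed rather than executed.

```shell
# Sketch: run time_log_update.py once per tenant. The tenant list,
# credentials, and tenant directory layout are placeholders; the real
# call is echoed as a dry run.
INCORTA_ANALYTICS_HOST_URL=http://127.0.0.1:8080/incorta/
USERNAME=admin
PASSWORD=password123
TENANT_COUNT=0
for TENANT_NAME in demo finance hr; do
  TENANT_DIRECTORY="/home/incorta/IncortaAnalytics/Tenants/${TENANT_NAME}/"
  echo "python time_log_update.py ${INCORTA_ANALYTICS_HOST_URL} ${TENANT_NAME} ${USERNAME} ${PASSWORD} ${TENANT_DIRECTORY}"
  TENANT_COUNT=$((TENANT_COUNT + 1))
done
```

The same loop shape applies to any per-tenant script in this guide.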
WAF Only: Run the Alias Sync for WAF Script
For an Incorta cluster that requires the use of Web Application Firewall (WAF) endpoints, you must run the Alias Sync Tool for WAF for all tenants in your Incorta cluster.
The name of the file is alias_sync_WAF.py. In order to successfully run alias_sync_WAF.py, the Analytics Service must be in a Running state.
Here are the required steps:
- Download alias_sync_WAF.py from Google Drive.
- Using Secure Copy (SCP) or similar, upload alias_sync_WAF.py to the Incorta Node running the Analytics Service.
- Using Secure Shell (SSH), connect to the Incorta Node running the Analytics Service.
- Change the ownership of the file to the incorta Linux user, or to whichever user runs the Incorta Analytics Service:
sudo chown incorta:incorta alias_sync_WAF.py
- Move the file to the <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/bin directory:
mv alias_sync_WAF.py <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/bin/alias_sync_WAF.py
- Change the directory to <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/bin:
cd <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/bin/
- Execute the script replacing the shell variables with the required values as follows:
INCORTA_ANALYTICS_HOST_URL=http://127.0.0.1:8080/incorta/
TENANT_NAME=demo
USERNAME=admin
PASSWORD=password123
TENANT_DIRECTORY=<INCORTA_NODE_INSTALLATION_PATH>/Tenants/demo/
python alias_sync_WAF.py ${INCORTA_ANALYTICS_HOST_URL} ${TENANT_NAME} ${USERNAME} ${PASSWORD}
- Review the console output:
syncing schema: SALES
Schema Already in Sync
=========================
For all tenants in your Incorta cluster, run the alias_sync_WAF.py script.
Verify the successful upgrade
Next, verify the successful upgrade. Here are the resources required:
- a System Administrator with root access to the host or hosts running Incorta Nodes, the host running the Cluster Management Console (CMC), and the host or hosts running Apache Spark
- a CMC Administrator
- a SuperUser that can access each tenant in the Incorta environment
- an Incorta Developer to resolve identified issues with formula expressions, schema alias, joins between tables, and dependencies between objects such as dashboards and business schemas
Unpause all scheduled jobs
A CMC Administrator is able to unpause scheduled jobs. Here are the steps to disable this option as a default tenant configuration:
- In the Navigation bar, select Clusters.
- In the cluster list, select a Cluster name.
- In the canvas tabs, select Cluster Configurations.
- In the panel tabs, select Default Tenant Configurations.
- In the left pane, select Data Loading.
- Disable the Pause Scheduled Jobs setting.
- Select Save.
Review and Monitor scheduled jobs
As the tenant SuperUser, sign in to each tenant and review the scheduled jobs.