Connectors → Custom SQL

About the Custom SQL Connector

The Custom SQL Connector enables Incorta to access data stored in any SQL database. Use this connector when a dedicated Incorta connector for your database does not already exist. You access the data you want with a standard SQL query. The Custom SQL connector supports the following Incorta-specific functionality:

Feature Supported
Chunking
Data Agent
Encryption at Ingest
Incremental Load
Multi-Source
OAuth
Performance Optimized
Remote
Single-Source
Spark Extraction
Webhook Callbacks

Deploy the JAR file

The Custom SQL connector requires the JDBC driver JAR file for your database vendor. The following is a generic example:

  • company.jdbc.jar

To use the Custom SQL connector, a System Administrator with root access must copy the JDBC driver JAR file for the database to each Incorta Node in the Incorta cluster. A CMC Administrator must then restart the Analytics and Loader Services in the cluster.

Here are the steps to copy the JAR file to a standalone Incorta cluster:

  • Download the JDBC driver JAR file from the database vendor.
  • Secure copy the company.jdbc.jar file to the host. Here is an example using scp:
INCORTA_NODE_HOST=100.101.102.103
INCORTA_NODE_HOST_PEM_FILE="host_key.pem"
INCORTA_NODE_HOST_USER="incorta"
CUSTOM_JDBC_JAR_FILE="company.jdbc.jar"

cd ~/Downloads
scp -i ~/.ssh/${INCORTA_NODE_HOST_PEM_FILE}  ${CUSTOM_JDBC_JAR_FILE} ${INCORTA_NODE_HOST_USER}@${INCORTA_NODE_HOST}:/tmp/
  • Secure shell into the host:
ssh -i ~/.ssh/${INCORTA_NODE_HOST_PEM_FILE} ${INCORTA_NODE_HOST_USER}@${INCORTA_NODE_HOST}
  • In a bash shell, copy the company.jdbc.jar file to the IncortaNode/runtime/lib/ directory:
INCORTA_INSTALLATION_PATH=/home/incorta/IncortaAnalytics
CUSTOM_JDBC_JAR_FILE="company.jdbc.jar"
cp /tmp/${CUSTOM_JDBC_JAR_FILE} $INCORTA_INSTALLATION_PATH/IncortaNode/runtime/lib/${CUSTOM_JDBC_JAR_FILE}
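  • Optionally, verify that the JAR file is now in the runtime library directory before restarting the services. This is a quick sanity check that reuses the variables defined above:
ls -l $INCORTA_INSTALLATION_PATH/IncortaNode/runtime/lib/${CUSTOM_JDBC_JAR_FILE}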

Restart the Analytics and Loader Services

Here are the steps to restart the Analytics and Loader Services in an Incorta Cluster from the Cluster Management Console (CMC).

  • As the CMC Administrator, sign in to the CMC.
  • In the Navigation bar, select Clusters.
  • In the cluster list, select a Cluster name.
  • Select the Details tab, if not already selected.
  • In the footer, select Restart.

Steps to connect a Custom SQL Database and Incorta

To connect a Custom SQL database and Incorta, here are the high-level steps, tools, and procedures:

Create an external data source

Here are the steps to create an external data source with the Custom SQL connector:

  • Sign in to the Incorta Direct Data Platform™.
  • In the Navigation bar, select Data.
  • In the Action bar, select + New → Add Data Source.
  • In the Choose a Data Source dialog, in Custom, select Custom SQL.
  • In the New Data Source dialog, specify the applicable connector properties.
  • To test, select Test Connection.
  • Select Ok to save your changes.

Custom SQL connector properties

Here are the properties for the Custom SQL connector:

Property Control Description
Data Source Name text box Enter the name of the data source
Username text box Enter the database username
Password text box Enter the database password
Connection Pool text box Enter the connection pool size. The default is 30.
Driver Class text box Enter the driver class for the database. For example, the Informix driver class is com.informix.jdbc.IfxDriver.
Connection String text box Enter the database connection string. For example, the Informix database connection string is: jdbc:informix-sqli://<host>:<port>/<database>:informixserver=<dbservername>
Connection Properties text box Optionally enter connector properties for a custom connection to the database in the format propertyName=propertyValue, with each connector property on a new line. The available connector properties are specified by the database JDBC driver. See the example following this table.
Validation Query text box Optional. Enter the database-specific validation query. The Validation query is a SQL query that can be used by the pool to validate connections before they are returned to the application. This query must be a SQL SELECT statement that returns at least one row.
Examples of validation queries:
  • IBM Informix: SELECT COUNT(*) FROM SYSTABLES
  • Apache Derby: VALUES 1 or SELECT 1 FROM SYSIBM.SYSDUMMY1
  • HSQLDB: SELECT 1 FROM INFORMATION_SCHEMA.SYSTEM_USERS
  • Firebird: SELECT 1 FROM rdb$database
Current Time Query text box Optional. Enter the database-specific Current Time query. The Current Time query is a SQL query that Incorta can use to get the current time of the database server.
Examples of Current Time queries:
  • IBM Informix: SELECT CURRENT or SELECT SYSDATE
  • Apache Derby: VALUES CURRENT TIMESTAMP
  • HSQLDB: VALUES (current_timestamp) or CALL current_timestamp
  • Firebird: SELECT timestamp 'NOW' FROM rdb$database or SELECT CURRENT_TIMESTAMP FROM rdb$database
Use Data Agent toggle Enable using a data agent to securely ingest data from an external data source that is behind a firewall. For more information, review Tools → Data Agent and Tools → Data Manager.
Data Agent drop down list Enable Use Data Agent to configure this property. Select from the data agents created in the tenant, if any.

Important: Data Agent

A data agent is a service that runs on a remote host. It is also a data agent object in the Data Manager for a given tenant. An authentication file shared between the data agent object and the data agent service enables an authorized connection without using a VPN or SSH tunnel. With a data agent, you can securely extract data from one or more databases behind a firewall to an Incorta cluster. Your Incorta cluster can reside on-premises or in the cloud. A CMC Administrator must enable and configure an Incorta cluster to support the use of Data Agents. Only a Tenant Administrator (SuperUser) or user that belongs to a group with the SuperRole role for a given tenant can create a data agent that connects to a data agent service. To learn more, see Concepts → Data Agent and Tools → Data Agent.
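As an illustration only, here is how the connector properties above might be filled in for an IBM Informix database, using the driver class, connection string pattern, and queries already shown in the table. The host, port, database, server, user, and data source names are placeholders, and the connection property name is hypothetical; valid property names come from your JDBC driver documentation.

Data Source Name: informix_sales
Username: incorta_reader
Password: ********
Driver Class: com.informix.jdbc.IfxDriver
Connection String: jdbc:informix-sqli://informix-host.example.com:9088/sales_db:informixserver=ol_informix
Connection Properties:
examplePropertyName=exampleValue
Validation Query: SELECT COUNT(*) FROM SYSTABLES
Current Time Query: SELECT CURRENT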

Create a physical schema with the Schema Wizard

Here are the steps to create a Custom SQL physical schema with the Schema Wizard:

  • Sign in to the Incorta Direct Data Platform™.
  • In the Navigation bar, select Schema.
  • In the Action bar, select + New → Schema Wizard.
  • In (1) Choose a Source, specify the following:
    • For Enter a name, enter the physical schema name.
    • For Select a Datasource, select the Custom SQL external data source.
    • Optionally enter a description.
  • In the Schema Wizard footer, select Next.
  • In (2) Manage Tables, in the Data Panel, navigate the directory tree as necessary to select the Custom SQL tables. You can either check the Select All checkbox or select individual tables.
  • In the Schema Wizard footer, select Next.
  • In (3) Finalize, in the Schema Wizard footer, select Create Schema.

Create a physical schema with the Schema Designer

Here are the steps to create a Custom SQL physical schema using the Schema Designer:

  • Sign in to the Incorta Direct Data Platform™.
  • In the Navigation bar, select Schema.
  • In the Action bar, select + New → Create Schema.
  • In Name, specify the physical schema name, and select Save.
  • In Start adding tables to your schema, select SQL Database.
  • In the Data Source dialog, specify the Custom SQL table data source properties.
  • Select Add.
  • In the Table Editor, in the Table Summary section, enter the table name.
  • To save your changes, select Done in the Action bar.

Custom SQL table data source properties

For a physical schema table in Incorta, you can define the following Custom SQL-specific data source properties:

Property Control Description
Type drop down list Default is SQL Database
Data Source drop down list Select the Custom SQL external data source
Incremental toggle Enable the incremental load configuration for the physical schema table
Fetch Size text box Used for performance improvement, the fetch size defines the number of records retrieved from the database in each batch until all records are retrieved. The default is 5000.
Query text box Enter the SQL query to retrieve data from the Custom SQL database table
Update Query text box Enable Incremental to configure this property. Enter the SQL query to retrieve data updates from the Custom SQL database table. See the example following this table.
Incremental Field Type drop down list Enable Incremental to configure this property. Select the format of the table date column:
  • Timestamp
  • Unix Epoch (seconds)
  • Unix Epoch (milliseconds)
Chunking Method drop down list Chunking methods allow for parallel extraction of large tables. The default is No Chunking. There are two chunking methods:
  • By Size of Chunking (Single Table)
  • By Date/Timestamp
Chunk Size text box Select By Size of Chunking for the Chunking Method to set this property. Enter the number of records to extract in each chunk in relation to the Fetch Size. The default is 3 times the Fetch Size.
Order Column drop down list Select By Size of Chunking for the Chunking Method to set this property. Select a column in the source table you want to order by before chunking. It is typically an ID column and must be numeric.
Upper Bound for Order Column text box Optional. Enter the maximum value for the order column.
Lower Bound for Order Column text box Optional. Enter the minimum value for the order column.
Order Column [Date/Timestamp] drop down list Select By Date/Timestamp for the Chunking Method to set this property. Select a column in the source table you want to order by before chunking. It should be a Date/Timestamp column.
Chunk Period drop down list Select the chunk period that will be used in dividing chunks:
  • Daily
  • Weekly (default)
  • Monthly
  • Yearly
  • Custom
Number of days text box Select Custom for the Chunk Period to set this property. Enter the chunking period in days.
Enable Spark Based Extraction toggle Enable a Spark job to parallelize the data ingest
Max Number of Parallel Queries text box Enable Spark Based Extraction to configure this property. Enter the maximum number of parallel queries to run at a time.
Column to Parallelize Queries on drop down list Enable Spark Based Extraction to configure this property. Select a numerical column in the source table that you want Spark to parallelize the extraction queries on.
Memory per Extractor text box Enable Spark Based Extraction to configure this property. Enter the numerical amount of memory to use per extractor in gigabytes (GB).
Callback toggle Enable this option to call back on the source data set
Callback URL text box Enable Callback to configure this property. Specify the URL.
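For illustration, here is a sketch of the Query and Update Query properties for a hypothetical orders table with a last_updated timestamp column. The table and column names are placeholders, and the use of ? as a placeholder that Incorta substitutes with the last successful extraction time is an assumption to verify against your Incorta version and database:

Query:
SELECT order_id, customer_id, order_total, last_updated
FROM orders

Update Query (with Incremental enabled and Incremental Field Type set to Timestamp):
SELECT order_id, customer_id, order_total, last_updated
FROM orders
WHERE last_updated > ?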

View the physical schema diagram with the Schema Diagram Viewer

Here are the steps to view the physical schema diagram using the Schema Diagram Viewer:

  • Sign in to the Incorta Direct Data Platform™.
  • In the Navigation bar, select Schema.
  • In the list of schemas, select the Custom SQL schema.
  • In the Schema Designer, in the Action bar, select Diagram.

Load the physical schema

Here are the steps to perform a Full Load of the Custom SQL physical schema using the Schema Designer:

  • Sign in to the Incorta Direct Data Platform™.
  • In the Navigation bar, select Schema.
  • In the list of schemas, select the Custom SQL schema.
  • In the Schema Designer, in the Action bar, select Load → Load Now → Full.
  • To review the load status, in Last Load Status, select the date.

Explore the physical schema

With the full load of the Custom SQL physical schema complete, you can use the Analyzer to explore the schema, create your first insight, and save the insight to a new dashboard.

To open the Analyzer from the schema, follow these steps:

  • In the Navigation bar, select Schema.
  • In the Schema Manager, in the List view, select the Custom SQL schema.
  • In the Schema Designer, in the Action bar, select Explore Data.
