
External Data Package

Overview

Provides tools for loading and managing external data in Snowflake, including schema inference, data copying and unloading, API integration, and parsing of Excel and JSON files.

Installation

  • Copy the Package ID: @coalesce/external-data-package
  • In Coalesce, open the Workspace where you wish to install the package.
  • Go to the Workspace's Build Settings, select the Packages tab, and click the Install button at the top right of the page.
  • Paste the Package ID and proceed with the installation process.

Description

External Data Package

The Coalesce External Data Package includes:

CopyInto

CopyInto Node Configuration

The CopyInto node type has the following configurations available:

CopyInto - Node Properties

There are four configs within the Node Properties group.

| Property | Description |
|----------|-------------|
| Storage Location | Storage Location where the node's target object will be created |
| Node Type | Name of template used to create node objects |
| Description | A description of the node's purpose |
| Deploy Enabled | If TRUE, the node will be deployed or redeployed when changes are detected. If FALSE, the node will not be deployed, or will be dropped during redeployment. |

CopyInto - General Options

(Screenshots: CopyInto General Options with InferSchema set to true and to false.)

| Option | Description |
|--------|-------------|
| Create As | Select whether to create the target as a Table or a Transient Table |
| Truncate Before (disabled when InferSchema is true) | True/False toggle that determines whether the table is truncated before reloading. True: the table is truncated and the COPY INTO statement is executed to reload the data into the target table. False: data is loaded directly into the target table and no truncate action takes place. |
| InferSchema | True/False toggle that determines whether to infer the columns of the file before loading. True: the node is created with the inferred columns. False: no infer table step is executed. |

CopyInto - Source Data

(Screenshots: CopyInto Source Data options with InferSchema set to true and to false.)

Internal or External Stage

| Setting | Description |
|---------|-------------|
| Coalesce Storage Location of Stage | A storage location in Coalesce where the stage is located |
| Stage Name (Required) | Internal or external stage where the files containing the data to be loaded are staged |
| File Name(s) (e.g. 'a.csv','b.csv') | Specifies a list of one or more file names (separated by commas) to be loaded |
| File Pattern (e.g. '.*hea.*[.]csv') | A regular expression pattern string, enclosed in single quotes, specifying the file names or paths to match |

External Location

| Setting | Description |
|---------|-------------|
| External URI | Enter the URI of the external location |
| Storage Integration | Specifies the name of the storage integration used to delegate authentication responsibility for external cloud storage to a Snowflake identity and access management (IAM) entity |

CopyInto - File Format

(Screenshots: CopyInto File Format options with InferSchema set to true and to false.)

File Format Definition - File Format Name

| Setting | Description |
|---------|-------------|
| File Format Name | Specifies an existing named file format to use for loading data into the table |
| File Type | CSV, JSON, ORC, AVRO, PARQUET, or XML |

File Format Definition - File Format Values

| Setting | Description |
|---------|-------------|
| File Format Values | Provides file format options for the File Type chosen |
| File Type | Each file type has different configurations available, listed below |

CSV
  • Compression - String (constant) that specifies the current compression algorithm for the data files to be loaded
  • Record delimiter - Characters that separate records in an input file
  • Field delimiter - One or more singlebyte or multibyte characters that separate fields in an input file
  • Field optionally enclosed by - Character used to enclose strings
  • Number of header lines to skip - Number of lines at the start of the file to skip
  • Skip blank lines - Boolean that specifies to skip any blank lines encountered in the data files
  • Trim Space - Boolean that specifies whether to remove white space from fields
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character
  • Date format - String that defines the format of date values in the data files to be loaded
  • Time format - String that defines the format of time values in the data files to be loaded
  • Timestamp format - String that defines the format of timestamp values in the data files to be loaded

JSON
  • Compression - String (constant) that specifies the current compression algorithm for the data files to be loaded
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character
  • Trim Space - Boolean that specifies whether to remove white space from fields
  • Strip Outer Array - Boolean that instructs the JSON parser to remove outer brackets [ ]
  • Date format - String that defines the format of date values in the data files to be loaded
  • Time format - String that defines the format of time values in the data files to be loaded
  • Timestamp format - String that defines the format of timestamp values in the data files to be loaded

ORC
  • Trim Space - Boolean that specifies whether to remove white space from fields
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character

AVRO
  • Trim Space - Boolean that specifies whether to remove white space from fields
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character

PARQUET
  • Trim Space - Boolean that specifies whether to remove white space from fields
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character

XML
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character

CopyInto - Copy Options

| Setting | Description |
|---------|-------------|
| On Error Behavior | String (constant) that specifies the error handling for the load operation: CONTINUE, SKIP_FILE, SKIP_FILE_num, SKIP_FILE_num%, or ABORT_STATEMENT |
| Specify the number of errors that can be skipped | Required when On Error Behavior is either SKIP_FILE_num or SKIP_FILE_num%. Specify the number of errors that can be skipped. |
| Size Limit | Number (> 0) that specifies the maximum size (in bytes) of data to be loaded for a given COPY statement |
| Purge Behavior | Boolean that specifies whether to remove the data files from the stage automatically after the data is loaded successfully |
| Return Failed Only | Boolean that specifies whether to return only files that have failed to load in the statement result |
| Force | Boolean that specifies to load all files, regardless of whether they've been loaded previously and have not changed since they were loaded |
| Load Uncertain Files | Boolean that specifies to load files for which the load status is unknown. The COPY command skips these files by default. |
| Enforce Length | Boolean that specifies whether to produce an error if a loaded string exceeds the target column length |
| Truncate Columns | Boolean that specifies whether to truncate text strings that exceed the target column length |

CopyInto - System Columns

The set of columns that hold the source data and file metadata information.

  • SRC - The data from the file is loaded into this variant column.
  • LOAD_TIMESTAMP - Current timestamp when the file gets loaded.
  • FILENAME - Name of the staged data file the current row belongs to. Includes the full path to the data file.
  • FILE_ROW_NUMBER - Row number for each record in the staged data file.
  • FILE_LAST_MODIFIED - Last modified timestamp of the staged data file the current row belongs to.
  • SCAN_TIME - Start timestamp of operation for each record in the staged data file. Returned as TIMESTAMP_LTZ.

Key points for using the CopyInto node

  • A CopyInto node can be created by simply clicking Create Node from the browser if you want the data from the file to be loaded into a single variant column in the target table (see the sketch after this list).
  • A CopyInto node can be added on top of an inferred table (a table created by running an InferSchema node) if you want to load data into specific columns as defined in the files. Refer to InferSchema to learn more about how to use that node and add a CopyInto node on top of it.
  • The data can be reloaded into the table by truncating the table before the load, using either the TruncateBefore option in the node config or the reload parameter.
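
As a rough illustration, the pattern the node executes looks like the following. This is a minimal hedged sketch, assuming a hypothetical ANALYTICS.RAW schema, a stage named MY_STAGE, and CSV files; the actual DDL and COPY INTO statement are generated by the node from your configuration.

-- Minimal sketch of the CopyInto pattern; all object names are hypothetical.
CREATE TABLE IF NOT EXISTS ANALYTICS.RAW.ORDERS_RAW (
    SRC                VARIANT,
    LOAD_TIMESTAMP     TIMESTAMP,
    FILENAME           VARCHAR,
    FILE_ROW_NUMBER    NUMBER,
    FILE_LAST_MODIFIED TIMESTAMP,
    SCAN_TIME          TIMESTAMP_LTZ
);

-- With TruncateBefore = true, a TRUNCATE TABLE runs before the load.
COPY INTO ANALYTICS.RAW.ORDERS_RAW
FROM (
    SELECT $1,
           CURRENT_TIMESTAMP,
           METADATA$FILENAME,
           METADATA$FILE_ROW_NUMBER,
           METADATA$FILE_LAST_MODIFIED,
           METADATA$START_SCAN_TIME
    FROM @ANALYTICS.RAW.MY_STAGE
)
FILES = ('a.csv', 'b.csv')
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
ON_ERROR = ABORT_STATEMENT;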

Use CopyInto node with InferSchema option

  • Set the Infer Schema toggle to true.
  • Hit the Create button to infer the schema.
  • Click the Re-Sync Columns button.
  • If all looks good, set the Infer Schema toggle back to false.
  • Hit the Create button to execute a create table based on the inferred schema. This is mainly a test to make sure the create will work.
  • Hit the Run button to execute the DML.


If the above works, it should be deployable as is. Deploy will simply take the columns and execute a create table.

CopyInto Deployment

CopyInto Deployment Parameters

The CopyInto node type includes an environment parameter that allows you to specify whether to perform a full load or a reload when performing a COPY INTO operation.

The parameter name is loadType and the default value is an empty string:

{
"loadType": ""
}

When the parameter value is set to Reload, the data is reloaded into the table regardless of whether the files have been loaded previously and have not changed since they were loaded. As an alternative, you can use the TruncateBefore option in the node config.
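
For example, to force a reload during deployment, set:

{
"loadType": "Reload"
}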

CopyInto Initial Deployment

When deployed for the first time into an environment, the CopyInto node of materialization type table will execute the following stages:

| Deployment Behavior | Load Type | Stages Executed |
|---------------------|-----------|-----------------|
| Initial Deployment | `` | Create Table/Transient Table |
| Initial Deployment | Reload | Create Table/Transient Table |

CopyInto Redeployment

Altering the CopyInto Tables

Column or table changes such as a change in table name, dropping an existing column, altering a column data type, or adding a new column, whether made in isolation or all together, will result in an ALTER statement to modify the target table in the target environment.

The following stages are executed:

  • Rename Table | Alter Column | Delete Column | Add Column | Edit table description: An ALTER TABLE statement is executed to perform the operation.

Copy-Into change in materialization type

When the materialization type of a CopyInto node is changed from table to transient table or vice versa, the following stages are executed:

  • Drop table/transient table
  • Create transient table/table

CopyInto Undeployment

If the CopyInto node is deleted from a Workspace, that Workspace is committed to Git, and that commit is deployed to a higher-level environment, then the target table in the target environment will be dropped.

  • Drop table/transient table: Target table in Snowflake is dropped

Snowpipe

The Coalesce Snowpipe node performs two operations: it can load historical data using CopyInto, and it can create a pipe to auto-ingest files from AWS, GCP, or Azure.

Snowpipe enables loading data from files as soon as they’re available in a stage.

This means you can load data from files in micro-batches, making it available to users within minutes, rather than manually executing COPY statements on a schedule to load larger batches.

Snowpipe Node Configuration

The Snowpipe node type has the following configurations available:

(Screenshots: Snowpipe node configuration with InferSchema set to true and to false.)

Snowpipe Node Properties

There are four configs within the Node Properties group.

| Property | Description |
|----------|-------------|
| Storage Location | Storage Location where the node's target object will be created |
| Node Type | Name of template used to create node objects |
| Description | A description of the node's purpose |
| Deploy Enabled | If TRUE, the node will be deployed or redeployed when changes are detected. If FALSE, the node will not be deployed, or will be dropped during redeployment. |

General Options

| Setting | Description |
|---------|-------------|
| Create As | Dropdown to create the target as a Table or a Transient Table for loading data from the external stage |
| InferSchema | True/False toggle that determines whether to infer the columns of the file before loading. True: the node is created with the inferred columns. False: no infer table step is executed. |

Snowpipe Options


| Setting | Description |
|---------|-------------|
| Enable Snowpipe | Dropdown that determines whether to create a pipe to auto-ingest files from an external stage or to validate the COPY INTO statement; options are listed below |
| Load historical data | Loads the historical data into the target table by executing a COPY INTO statement |

Enable Snowpipe options:

  • Enable Snowpipe - Load data by auto-ingesting files from AWS, Azure, or GCP; choose your cloud provider. AWS: AWS SNS Topic, the Amazon Resource Name (ARN) of the SNS topic for your S3 bucket. Azure: Integration, the existing notification integration used to access the storage queue. GCP: Integration, the existing notification integration used to access the storage queue.
  • Test Copy Statement - Validates the COPY INTO statement before it is used to create the pipe.
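
The object this creates is a Snowflake pipe wrapping a COPY INTO statement. A minimal hedged sketch, assuming a hypothetical target table, stage, and SNS topic ARN (Azure and GCP use INTEGRATION instead of AWS_SNS_TOPIC):

-- Minimal sketch of an auto-ingest pipe; all names are hypothetical.
CREATE OR REPLACE PIPE ANALYTICS.RAW.ORDERS_PIPE
  AUTO_INGEST = TRUE
  AWS_SNS_TOPIC = 'arn:aws:sns:us-east-1:123456789012:my-topic'
AS
COPY INTO ANALYTICS.RAW.ORDERS_RAW
FROM @ANALYTICS.RAW.MY_STAGE
FILE_FORMAT = (TYPE = JSON);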

Snowpipe File Location

(Screenshots: Snowpipe file location options with InferSchema set to true and to false.)

| Setting | Description |
|---------|-------------|
| Coalesce Storage Location of Stage | A storage location in Coalesce where the stage is located |
| Stage Name (Required) | Internal or external stage where the files containing the data to be loaded are staged |
| File Name(s) (e.g. 'a.csv','b.csv') | Enabled when Enable Snowpipe under Snowpipe Options is toggled off. Specifies a list of one or more file names (separated by commas) to be loaded. |
| File Pattern (e.g. '.*hea.*[.]csv') | A regular expression pattern string, enclosed in single quotes, specifying the file names or paths to match |

Snowpipe File Format

(Screenshots: Snowpipe file format options with InferSchema set to true and to false.)

File Format Definition - File Format Name

| Setting | Description |
|---------|-------------|
| File Format Name | Specifies an existing named file format to use for loading data into the table |
| File Type | CSV, JSON, ORC, AVRO, PARQUET, or XML |

File Format Definition - File Format Values

| Setting | Description |
|---------|-------------|
| File Format Values | Provides file format options for the File Type chosen |
| File Type | Each file type has different configurations available, listed below |

CSV
  • Compression - String (constant) that specifies the current compression algorithm for the data files to be loaded
  • Record delimiter - Characters that separate records in an input file
  • Field delimiter - One or more singlebyte or multibyte characters that separate fields in an input file
  • Field optionally enclosed by - Character used to enclose strings
  • Number of header lines to skip - Number of lines at the start of the file to skip
  • Skip blank lines - Boolean that specifies to skip any blank lines encountered in the data files
  • Trim Space - Boolean that specifies whether to remove white space from fields
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character
  • Date format - String that defines the format of date values in the data files to be loaded
  • Time format - String that defines the format of time values in the data files to be loaded
  • Timestamp format - String that defines the format of timestamp values in the data files to be loaded

JSON
  • Compression - String (constant) that specifies the current compression algorithm for the data files to be loaded
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character
  • Trim Space - Boolean that specifies whether to remove white space from fields
  • Strip Outer Array - Boolean that instructs the JSON parser to remove outer brackets [ ]
  • Date format - String that defines the format of date values in the data files to be loaded
  • Time format - String that defines the format of time values in the data files to be loaded
  • Timestamp format - String that defines the format of timestamp values in the data files to be loaded

ORC
  • Trim Space - Boolean that specifies whether to remove white space from fields
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character

AVRO
  • Trim Space - Boolean that specifies whether to remove white space from fields
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character

PARQUET
  • Trim Space - Boolean that specifies whether to remove white space from fields
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character

XML
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character

Snowpipe Copy Options

If you toggle Enable Snowpipe under Snowpipe Options to ON, these configuration options are available.

| Setting | Description |
|---------|-------------|
| On Error Behavior | String (constant) that specifies the error handling for the load operation: CONTINUE, SKIP_FILE, SKIP_FILE_num, or SKIP_FILE_num%. For SKIP_FILE_num and SKIP_FILE_num%, specify the number of errors that can be skipped. |
| Enforce Length | Boolean that specifies whether to produce an error if a loaded string exceeds the target column length |
| Truncate Columns | Boolean that specifies whether to truncate text strings that exceed the target column length |

If you toggle Enable Snowpipe under Snowpipe Options to OFF, these configuration options are available.

| Setting | Description |
|---------|-------------|
| On Error Behavior | String (constant) that specifies the error handling for the load operation: CONTINUE, SKIP_FILE, SKIP_FILE_num, SKIP_FILE_num%, or ABORT_STATEMENT |
| Specify the number of errors that can be skipped | Required when On Error Behavior is either SKIP_FILE_num or SKIP_FILE_num%. Specify the number of errors that can be skipped. |
| Size Limit | Number (> 0) that specifies the maximum size (in bytes) of data to be loaded for a given COPY statement |
| Purge Behavior | Boolean that specifies whether to remove the data files from the stage automatically after the data is loaded successfully |
| Return Failed Only | Boolean that specifies whether to return only files that have failed to load in the statement result |
| Force | Boolean that specifies to load all files, regardless of whether they've been loaded previously and have not changed since they were loaded |
| Load Uncertain Files | Boolean that specifies to load files for which the load status is unknown. The COPY command skips these files by default. |
| Enforce Length | Boolean that specifies whether to produce an error if a loaded string exceeds the target column length |
| Truncate Columns | Boolean that specifies whether to truncate text strings that exceed the target column length |

Snowpipe System Columns

The set of columns that hold the source data and file metadata information.

  • SRC - The data from the file is loaded into this variant column.
  • LOAD_TIMESTAMP - Current timestamp when the file gets loaded.
  • FILENAME - Name of the staged data file the current row belongs to. Includes the full path to the data file.
  • FILE_ROW_NUMBER - Row number for each record in the staged data file.
  • FILE_LAST_MODIFIED - Last modified timestamp of the staged data file the current row belongs to.
  • SCAN_TIME - Start timestamp of operation for each record in the staged data file. Returned as TIMESTAMP_LTZ.

Key points for using the Snowpipe node

  • A Snowpipe node can be created by simply clicking Create Node from the browser if you want the data from the file to be loaded into a single variant column in the target table.
  • A Snowpipe node can be added on top of an inferred table (a table created by running an InferSchema node) if you want to load data into specific columns as defined in the files. Refer to InferSchema to learn more about how to use that node and add a Snowpipe node on top of it.

Use Snowpipe node with InferSchema option

  • Set the Infer Schema toggle to true.
  • Hit the Create button to infer the schema.
  • Click the Re-Sync Columns button.
  • If all looks good, set the Infer Schema toggle back to false.
  • Hit the Create button to execute a create table based on the inferred schema and a pipe to ingest data.


If the above works, it should be deployable as is. Deploy will simply take the columns and execute a create table and create the Snowpipe.

Snowpipe Deployment

Snowpipe Deployment Parameters

The Snowpipe node includes two environment parameters. The loadType parameter allows you to specify whether to perform a full load or a reload when performing a COPY INTO operation. The other parameter, targetIntegrationOrAwsSNSTopic, allows you to specify the AWS SNS topic or integration name to be used for auto-ingesting files from cloud providers.

The parameter name is loadType and the default value is an empty string:

{
"loadType": ""
}

When the parameter value is set to Reload, the data is reloaded into the table regardless of whether the files have been loaded previously and have not changed since they were loaded.

The parameter name is targetIntegrationOrAwsSNSTopic and the default value is DEV ENVIRONMENT.

When set to DEV ENVIRONMENT, the value entered in the AWS SNS Topic/Integration field of the Snowpipe Options config is used for ingesting files from cloud providers.

{
"targetIntegrationOrAwsSNSTopic": "DEV ENVIRONMENT"
}

When set to any value other than DEV ENVIRONMENT, the node will use the specified value for the AWS SNS topic/integration.

For example, the Snowpipe node will use the following value for auto-ingestion:

{
"targetIntegrationOrAwsSNSTopic": "arn:aws:sqs:us-east-1:832620633027:sf-snowpipe-AIDA4DXAOTPB7IF437EPD-lZe8ciYpPqGgC9KwWLmiIQ"
}

Snowpipe Initial Deployment

When deployed for the first time into an environment, the Snowpipe node will execute the stages below, depending on whether Enable Snowpipe is enabled, whether Load Historical Data is enabled, and the loadType parameter.

| Deployment Behavior | Enable Snowpipe | Historical Load | Load Type | Stages Executed |
|---------------------|-----------------|-----------------|-----------|-----------------|
| Initial Deployment | Enable Snowpipe | true | `` | Create Table; Historical full load using CopyInto; Create Pipe; Alter Pipe |
| Initial Deployment | Enable Snowpipe | true | Reload | Create Table; Truncate Target table; Historical full load using CopyInto; Create Pipe; Alter Pipe |
| Initial Deployment | Enable Snowpipe | false | Reload or empty | Create Table; Truncate Target table; Create Pipe |
| Initial Deployment | Test Copy Statement | false | Reload or empty | Create Table; Test Copy Statement (no pipe creation) |

Snowpipe Redeployment

Altering the Snowpipe node

Column or table changes such as a change in table name, dropping an existing column, altering a column data type, or adding a new column, whether made in isolation or all together, will result in an ALTER statement to modify the target table in the target environment. Any table-level changes or node config changes result in recreation of the pipe.

The following stages are executed:

  • Rename Table | Alter Column | Delete Column | Add Column | Edit table description: An ALTER TABLE statement is executed to perform the operation.
  • Truncate target table: The target table is truncated if the Load Type parameter is set to Reload.
  • Historical full load using CopyInto: Historical data is loaded if the Load Historical toggle is on.
  • Create Pipe: The pipe is recreated if the Enable Snowpipe option is true.
  • Alter Pipe: The pipe is refreshed if the Load Historical toggle is on (see the sketch after this list).
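
The refresh stage corresponds to a statement like the following minimal sketch; the pipe name is hypothetical:

-- "Alter Pipe" stage: refresh the pipe so files already in the stage are queued for loading.
ALTER PIPE ANALYTICS.RAW.ORDERS_PIPE REFRESH;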

Snowpipe Undeployment

If the Snowpipe node is deleted from a Workspace, that Workspace is committed to Git, and that commit is deployed to a higher-level environment, then the target table in the target environment will be dropped.

This is executed in two stages:

  • Drop Table: Target table is dropped
  • Drop Pipe: Pipe is dropped

External Tables

The Coalesce External Table node creates a new external table in the current/specified schema or replaces an existing external table.

An external table reads data from a set of one or more files in a specified external stage, which can point to AWS, GCP, or Azure cloud providers.

External Tables Node Configuration

The External table node type has four configuration groups:

External Tables Node Properties

There are four configs within the Node Properties group.

| Property | Description |
|----------|-------------|
| Storage Location | Storage Location where the node's target object will be created |
| Node Type | Name of template used to create node objects |
| Description | A description of the node's purpose |
| Deploy Enabled | If TRUE, the node will be deployed or redeployed when changes are detected. If FALSE, the node will not be deployed, or will be dropped during redeployment. |

General Options


| Settings | Description |
|----------|-------------|
| InferSchema | True/False toggle that determines whether to infer the columns of the file before loading. True: the node is created with the inferred columns. False: no infer table step is executed. |
| Partition by | A partition column must evaluate as an expression that parses the path and/or filename information in METADATA$FILENAME. By default, partitioning is done by the FILENAME column. |
| Partition by Expression (when the Partition by toggle is true) | A partition-by expression based on METADATA$FILENAME, used to partition the external table |

External Tables Node File Location

(Screenshots: External Table file location options with InferSchema set to true and to false.)

| Settings | Description |
|----------|-------------|
| Coalesce Stage Storage Location of Stage (Required) | A storage location in Coalesce where the stage is located |
| Stage Name (Required) | Internal or external stage where the files containing the data to be loaded are staged |
| File Name(s) (e.g. 'a.csv','b.csv') | Enabled when the InferSchema toggle is true. Specifies a list of one or more file names (separated by commas) to be loaded. |
| File Pattern | A regular expression pattern string, enclosed in single quotes, specifying the file names or paths to match. For example, '.*hea.*[.]csv' |

External Tables File Format

(Screenshots: External Table file format options with InferSchema set to true and to false.)

File Format Definition - File Format Name

| Settings | Description |
|----------|-------------|
| Coalesce Storage Location of File Format | Location in Coalesce pointing to the database and schema where the file format resides. Mandatory when File Format Name is chosen. |
| File Format Name | Specifies an existing named file format to use for loading data into the table |
| File Type | CSV, JSON, ORC, AVRO, or PARQUET |

File Format Definition - File Format Values

| Settings | Description |
|----------|-------------|
| File Format Values | Provides file format options for the File Type chosen |
| File Type | Each file type has different configurations available, listed below |

CSV
  • Compression - String (constant) that specifies the current compression algorithm for the data files to be loaded
  • Record delimiter - Characters that separate records in an input file
  • Field delimiter - One or more singlebyte or multibyte characters that separate fields in an input file
  • Field optionally enclosed by - Character used to enclose strings
  • Number of header lines to skip - Number of lines at the start of the file to skip
  • Skip blank lines - Boolean that specifies to skip any blank lines encountered in the data files
  • Trim Space - Boolean that specifies whether to remove white space from fields
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character
  • Date format - String that defines the format of date values in the data files to be loaded
  • Time format - String that defines the format of time values in the data files to be loaded
  • Timestamp format - String that defines the format of timestamp values in the data files to be loaded

JSON
  • Compression - String (constant) that specifies the current compression algorithm for the data files to be loaded
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character
  • Trim Space - Boolean that specifies whether to remove white space from fields
  • Strip Outer Array - Boolean that instructs the JSON parser to remove outer brackets [ ]
  • Date format - String that defines the format of date values in the data files to be loaded
  • Time format - String that defines the format of time values in the data files to be loaded
  • Timestamp format - String that defines the format of timestamp values in the data files to be loaded

ORC
  • Trim Space - Boolean that specifies whether to remove white space from fields
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character

AVRO
  • Trim Space - Boolean that specifies whether to remove white space from fields
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character

PARQUET
  • Trim Space - Boolean that specifies whether to remove white space from fields
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character

XML
  • Replace invalid characters - Boolean that specifies whether to replace invalid UTF-8 characters with the Unicode replacement character

External Tables Additional Options

| Settings | Description |
|----------|-------------|
| Auto refresh | True/False toggle that specifies whether Snowflake should enable triggering automatic refreshes of the external table metadata when new or updated data files are available in the named external stage. False: auto refresh does not take place. True: auto refresh takes place, and you can choose a cloud provider from AWS, Azure, and GCP; see below. |

Cloud provider settings:

  • AWS - AWS SNS Topic. Enabled only when the cloud provider is AWS and auto refresh is true. Required only when configuring AUTO_REFRESH for Amazon S3 stages using Amazon Simple Notification Service (SNS).
  • GCP - Integration. Enabled when the cloud provider is GCP or Azure. Specifies the existing notification integration used to access the storage queue.
  • Azure - Integration. Enabled when the cloud provider is GCP or Azure. Specifies the existing notification integration used to access the storage queue.

System Columns

The set of columns that hold the source data and file metadata information.

  • VALUE - The data from the file is loaded into this variant column.
  • FILENAME - Name of the staged data file the current row belongs to. Includes the full path to the data file.

Key points for using the External Table node

  • An External Table node can be created by simply clicking Create Node from the browser if you want the data from the file to be loaded into a single variant column in the target table (see the sketch after this list).
  • An External Table node can be added on top of an inferred table (a table created by running an InferSchema node) if you want to load data into specific columns as defined in the files. Refer to InferSchema to learn more about how to use that node and add an External Table node on top of it.
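
A minimal hedged sketch of the kind of statement this node produces, assuming a hypothetical stage, partition expression, and SNS topic; the real statement is generated from your configuration:

-- Minimal sketch of an auto-refreshing external table; all names are hypothetical.
CREATE OR REPLACE EXTERNAL TABLE ANALYTICS.RAW.ORDERS_EXT (
    FILE_PART VARCHAR AS SPLIT_PART(METADATA$FILENAME, '/', 1)
)
PARTITION BY (FILE_PART)
WITH LOCATION = @ANALYTICS.RAW.MY_STAGE
AUTO_REFRESH = TRUE
AWS_SNS_TOPIC = 'arn:aws:sns:us-east-1:123456789012:my-topic'
FILE_FORMAT = (TYPE = JSON);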

Use External table with InferSchema option

  • Set the Infer Schema toggle to true.
  • Hit the Create button to infer the schema.
  • Click the Re-Sync Columns button.
  • If all looks good, set the Infer Schema toggle back to false.
  • Hit the Create button to execute a create external table based on the inferred schema.


If the above works, it should be deployable as is. Deploy will simply take the columns and execute a create external table.

External table Deployment

Initial Deployment

When deployed for the first time into an environment, the External Table node will execute the stage below:

  • Create External Table - This will execute a CREATE OR REPLACE statement and create a table in the target environment.

Redeployment

Recreating the External table

The subsequent deployment of External table with changes in config options will recreate the table.

The following stages are executed:

  • Create External Table

Undeployment

If the External Table node is deleted from a Workspace, that Workspace is committed to Git, and that commit is deployed to a higher-level environment, then the target table in the target environment will be dropped.

  • Drop External Table

InferSchema

The Coalesce InferSchema UDN is a versatile node that infers the schema of a file in an internal or external stage and dynamically creates a target table with the same name as the InferSchema node.

InferSchema in Snowflake automatically detects the file metadata schema in a set of staged data files that contain semi-structured data and retrieves the column definitions.

Key points for InferSchema

  • A sample file must be present in the internal or external stage.
  • An existing file format is needed to parse the file.
  • The file format is expected to be in the same location as the stage.
  • After creating the inferred node, the mapping grid can be re-synced with the exact structure of the table/transient table using the Re-Sync Columns button in the mapping grid (see the sketch after this list).
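
Under the hood, this relies on Snowflake's INFER_SCHEMA table function. A minimal hedged sketch of the pattern, assuming a hypothetical stage, file, and file format:

-- Minimal sketch of schema inference; all names are hypothetical.
CREATE OR REPLACE TABLE ANALYTICS.RAW.INFER_JSON
USING TEMPLATE (
    SELECT ARRAY_AGG(OBJECT_CONSTRUCT(*))
    FROM TABLE(
        INFER_SCHEMA(
            LOCATION => '@ANALYTICS.RAW.MY_STAGE',
            FILES => 'sample.json',
            FILE_FORMAT => 'MY_JSON_FORMAT'
        )
    )
);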

InferSchema Node Configuration

The InferSchema node type has two configuration groups:

Go to the node and select the Config tab to see the Node Properties and Source Data options.

InferSchema Node Properties

There are four configs within the Node Properties group.

| Property | Description |
|----------|-------------|
| Storage Location | Storage Location where the node's target object will be created |
| Node Type | Name of template used to create node objects |
| Description | A description of the node's purpose |
| Deploy Enabled | If TRUE, the node will be deployed or redeployed when changes are detected. If FALSE, the node will not be deployed, or will be dropped during redeployment. |

InferSchema Source Data

| Settings | Description |
|----------|-------------|
| Create As | Select whether to create the target as a Table or a Transient Table |
| Coalesce Storage Location of Stage (Required) | A storage location in Coalesce where the stage is located |
| Stage Name (Required) | Internal or external stage where the files containing the data to be loaded are staged |
| File Names (Required) | File(s) whose metadata is to be inferred; use commas to separate multiple files |
| Coalesce Storage Location of File Format | Location in Coalesce pointing to the database and schema where the file format resides |
| File Format Name (Required) | Name of the file format object that describes the data contained in the staged files; it is expected to be in the same location as the stage |
| Redeployment Behavior | CREATE OR REPLACE: dynamically creates the target table based on the inferred schema from the file in the staging area. ALTER EXISTING TABLE: dynamically alters the inferred table by comparing the inferred schema of the same file (with changes, if any) against the created table. DROP EXISTING TABLE: drops the inferred table. |

InferSchema Usage

Option 1 - Using API-NODEUPDATE

  • Add an InferSchema node, for example INFER_JSON, and hit Create. The 'Infer and Create table' stage runs and creates a table with the same name as the InferSchema node.
  • Use an API-NODEUPDATE node to update the columns of the inferred node created in the step above.
  • Now you can add a CopyInto, Snowpipe, or External Table node on top of the inferred table to load staged files.
  • Once you have the required metadata columns in the downstream node (CopyInto, Snowpipe, or External Table), bulk edit the source in the node to be blank, and delete the FROM clause in the Join tab. This information is not required for CopyInto, Snowpipe, or External Table nodes to load data from files.
  • Delete the API-NODEUPDATE node created in step 2, since the columns required for the downstream nodes have been updated and their data is derived from files.
  • API-NODEUPDATE nodes do not need to be deployed. It is sufficient to deploy the CopyInto, Snowpipe, or External Table nodes, which hold the data from staged files.

Option 2 - Using the Re-Sync Columns button

  • Add an InferSchema node, for example INFER_JSON, and hit Create. The 'Infer and Create table' stage runs and creates a table with the same name as the InferSchema node.
  • Use the Re-Sync Columns button in the mapping grid to update the columns of the inferred node created in the step above.
  • Now you can add a CopyInto, Snowpipe, or External Table node on top of the inferred table to load staged files.

InferSchema Deployment

InferSchema Initial Deployment

When deployed for the first time into an environment, the InferSchema node will execute the following stage:

  • Stage executed: Infer and Create target table

InferSchema Redeployment

Redeployment Behavior: Create or Replace

| Redeployment Behavior | Stage Executed |
|-----------------------|----------------|
| Create or Replace | Infer and Create target table |

If any of the Source Data options, such as the stage storage location, stage name, or file name, are modified, you can redeploy the InferSchema node with the redeployment behavior "Create or Replace".

📘 Info

You can go back to the browser, re-sync the columns of the inferred table, re-execute the CopyInto node, and redeploy.

Redeployment Behavior: Alter Existing Table

| Redeployment Behavior | Stage Executed |
|-----------------------|----------------|
| Alter existing table | Infer and Alter target table |

If all Source Data options remain the same and only the existing file structure has changed, you can redeploy the InferSchema node with the redeployment behavior "Alter existing table".

Redeployment Behavior: Drop Existing Table

| Redeployment Behavior | Stage Executed |
|-----------------------|----------------|
| Drop existing table | Drop inferred table |

If you want to drop the inferred table, you can redeploy the InferSchema node with the redeployment behavior "Drop existing table".

InferSchema Undeployment

If the InferSchema node is deleted from a Workspace, that Workspace is committed to Git, and that commit is deployed to a higher-level environment, then no action takes place.

CopyUnload

The Coalesce Copy Unload node unloads a table into an internal or external stage location in the various file formats supported by Snowflake.

A Copy Unload node unloads data from a table (or query) into one or more files in the specified internal or external location.

Copy Unload Node Configuration

The Copy Unload node type has four configuration groups:

Copy Unload Node Properties

There are four configs within the Node Properties group.

| Property | Description |
|----------|-------------|
| Storage Location | Storage Location where the node's target object will be created |
| Node Type | Name of template used to create node objects |
| Description | A description of the node's purpose |
| Deploy Enabled | If TRUE, the node will be deployed or redeployed when changes are detected. If FALSE, the node will not be deployed, or will be dropped during redeployment. |

Copy Unload Node File Location


| Setting | Description |
|---------|-------------|
| Coalesce Stage Storage Location of Stage (Required) | A storage location in Coalesce where the stage is located |
| Stage (Required) | Location in Snowflake (internal stage), external stage, or external location where the data files are unloaded |
| Partition by (Optional) | The unload operation splits the table rows based on the partition expression and determines the number of files to create based on the number of unique values in a particular column (only a single column name is expected) |
| Allow Expressions in partition by clause | A partition expression, enclosed in single quotes. For example, ('date=' || to_varchar(dt, 'YYYY-MM-DD') || '/hour=' || to_varchar(date_part(hour, ts))) concatenates labels and column values to output meaningful filenames. |
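
A minimal hedged sketch of the unload pattern, assuming a hypothetical table, stage, and partition expression:

-- Minimal sketch of a partitioned unload; all names are hypothetical.
COPY INTO @ANALYTICS.RAW.UNLOAD_STAGE/orders/
FROM ANALYTICS.RAW.ORDERS
PARTITION BY ('date=' || TO_VARCHAR(dt, 'YYYY-MM-DD'))
FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP)
HEADER = TRUE
MAX_FILE_SIZE = 32000000;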

Copy Unload File Format

FileFormat

File Format Definition - File Format Name

| Setting | Description |
|---------|-------------|
| Coalesce Storage Location of File Format | File format location in Snowflake that is mapped as a storage location in Coalesce |
| File Format Name | Specifies an existing named file format in Snowflake to use for unloading data from the table |
| File Type | CSV, JSON, or PARQUET |

File Format Definition - File Format Values

| Setting | Description |
|---------|-------------|
| File Format Values | Provides different file format options for the File Type chosen to unload |
| File Type | Each file type has different configurations available, listed below |

CSV
  • Compression - String (constant) that specifies the current compression algorithm for the data files to be unloaded
  • Record delimiter - Characters that separate records in an unloaded file
  • Field delimiter - One or more singlebyte or multibyte characters that separate fields in an unloaded file
  • Field optionally enclosed by - Character used to enclose strings (default is \042)
  • File Extension - String that specifies the extension for files unloaded to a stage; accepts any extension
  • Date Format - String that defines the format of date values in the unloaded data files (default is AUTO)
  • Time Format - String that defines the format of time values in the unloaded data files (default is AUTO)
  • Timestamp Format - String that defines the format of timestamp values in the unloaded data files (default is AUTO)

JSON
  • Compression - String (constant) that specifies the current compression algorithm for the data files to be unloaded
  • File Extension - String that specifies the extension for files unloaded to a stage

PARQUET
  • Compression - String (constant) that specifies the current compression algorithm for the data files to be unloaded

Copy Unload Copy Options

| Setting | Description |
|---------|-------------|
| Overwrite Flag | Toggle that overwrites existing files with matching names, if any, in the location where files are unloaded |
| Header | Boolean that specifies whether to include the table column headings in the output files |
| Single File Flag | Toggle that generates a single file when true; otherwise multiple files are generated if partition by is enabled |
| Max File Size (MB) | Number (> 0) that specifies the upper size limit (in bytes) of each file generated in parallel per thread. The actual file size and number of files unloaded are determined by the total amount of data and the number of nodes available for parallel processing. |
| Include Query ID | When true, provides a unique identifier for unloaded files by including a universally unique identifier (UUID) in the filenames of unloaded data files |
| Detailed Output | When true, includes a row for each file unloaded to the specified stage. Columns show the path and name of each file, its size, and the number of rows that were unloaded to the file. |

Copy Unload Deployment

Deployment is not applicable for this node.

API

The Coalesce API node loads data that is external to Snowflake, using an EXTERNAL ACCESS INTEGRATION whose network rule allows access to external network locations from a procedure handler.

API Node Configuration

The API Node Configuration type has two configuration groups:

API Node Properties

There are four configs within the Node Properties group.

| Property | Description |
|----------|-------------|
| Storage Location | Storage Location where the node's target object will be created |
| Node Type | Name of template used to create node objects |
| Description | A description of the node's purpose |
| Deploy Enabled | If TRUE, the node will be deployed or redeployed when changes are detected. If FALSE, the node will not be deployed, or will be dropped during redeployment. |

API Node Options


| Setting | Description |
|---------|-------------|
| Snowflake EXTERNAL ACCESS INTEGRATION Name (Required) | The EXTERNAL ACCESS INTEGRATION has a network rule that allows the procedure to access network locations external to Snowflake |
| Method (Required) | HTTP request method: GET, PUT, or POST. GET: API call to retrieve data. PUT: API call to update existing data. POST: API call to create new data. |
| URI (Required) | Uniform Resource Identifier used to locate and interact with resources within a specific API |
| Headers | Headers are used to provide additional context and information about the request |
| Payload | When making HTTP requests to a URI, the request may include a payload (also known as a body). Note: the payload applies only to PUT and POST API method calls. |
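
For context, a hedged sketch of the kind of objects such an integration depends on; the rule name, integration name, and host are illustrative, and in practice an administrator creates these:

-- Illustrative only; object names and host are hypothetical.
CREATE OR REPLACE NETWORK RULE MY_API_NETWORK_RULE
  MODE = EGRESS
  TYPE = HOST_PORT
  VALUE_LIST = ('api.example.com');

CREATE OR REPLACE EXTERNAL ACCESS INTEGRATION MY_API_INTEGRATION
  ALLOWED_NETWORK_RULES = (MY_API_NETWORK_RULE)
  ENABLED = TRUE;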

API-NODEUPDATE

API-NODEUPDATE Node Configuration

The API-NODEUPDATE Node Configuration type has two configuration groups:

API-NODEUPDATE Node Properties

| Property | Description |
|----------|-------------|
| Storage Location | Storage Location where the node's target object will be created |
| Node Type | Name of template used to create node objects |
| Description | A description of the node's purpose |
| Deploy Enabled | If TRUE, the node will be deployed or redeployed when changes are detected. If FALSE, the node will not be deployed, or will be dropped during redeployment. |

API-NODEUPDATE Node Options


| Setting | Description |
|---------|-------------|
| Snowflake EXTERNAL ACCESS INTEGRATION Name (Required) | The EXTERNAL ACCESS INTEGRATION has a network rule that allows the procedure to access network locations external to Snowflake |
| Snowflake secret for Coalesce API token (Required) | A Snowflake SECRET that allows access to the Coalesce API |
| Workspace-Node details | Information on the list of nodes whose columns need to be updated. Workspace ID: the ID of the workspace the node belongs to. Node name: name of the node whose columns need to be updated. Storage Location (e.g. DBName.SchemaName): storage location of the table, with database name and schema. |

JDBC LOAD

The Coalesce JDBC Load node allows you to connect to, interact with, and retrieve data from various database management systems into Snowflake.

JDBC Load Node Configuration

The JDBC Load Node Configuration type has two configuration groups:

JDBC Node Properties

There are four configs within the Node Properties group.

| Property | Description |
|----------|-------------|
| Storage Location | Storage Location where the node's target object will be created |
| Node Type | Name of template used to create node objects |
| Description | A description of the node's purpose |
| Deploy Enabled | If TRUE, the node will be deployed or redeployed when changes are detected. If FALSE, the node will not be deployed, or will be dropped during redeployment. |

JDBC Load Options

| Setting | Description |
|---------|-------------|
| JDBC Driver (Required) | Required to establish the connection to the external database |
| JAR file external stage location (Required) | Location of the external stage where the JAR file is located |
| JAR file name (Required) | Name of the JAR file located in the external stage |
| EXTERNAL ACCESS INTEGRATION (Required) | The EXTERNAL ACCESS INTEGRATION has a network rule that allows the procedure to access network locations external to Snowflake |
| External database credentials | Username and password of the external database, in the form of a Snowflake secret object |
| JDBC URL (Required) | URL of the database with port number (e.g. "sqlserver-tests.database.windows.net:1433") |
| SQL (Required) | SQL query to retrieve the required data |
| Truncate Before | When enabled, the table is truncated before the data load |

Parse Excel

The Coalesce Parse Excel node parses a large Excel file in a stage location into JSON format at the target location.

Parse Excel Node Configuration

The Parse Excel node type has three configuration groups:

Parse Excel Node Properties

There are four configs within the Node Properties group.

| Property | Description |
|----------|-------------|
| Storage Location | Storage Location where the node's target object will be created |
| Node Type | Name of template used to create node objects |
| Description | A description of the node's purpose |
| Deploy Enabled | If TRUE, the node will be deployed or redeployed when changes are detected. If FALSE, the node will not be deployed, or will be dropped during redeployment. |

Parse Excel Stored Procedure Options


| Setting | Description |
|---------|-------------|
| Stored Procedure Storage Location (Required) | Location in Snowflake where the Stored Procedure will be loaded by the script |
| Note | The Stored Procedure location and the Storage Location (i.e. the destination) must be in the same schema |

Excel File Processing Options


| Setting | Description |
|---------|-------------|
| Source File Storage Location (Required) | Storage location name in Coalesce which points to the database and schema where the stage is located (the source file has to be located under stages) |
| Source File Stage (Required) | Stage name in Snowflake where the source file is located |
| Source File Name (Required) | Excel filename to be parsed |

Initial Deployment

When deployed for the first time into an environment, the Parse Excel node will execute the stages below:

  • Create OR Replace Table - This will execute a CREATE OR REPLACE statement and create a table in the target environment.
  • Create OR Replace Stored Procedure - This will create a Stored Procedure in the target environment

Redeployment

Recreating the Parse Excel

Column or table changes such as a change in table name, dropping an existing column, altering a column data type, or adding a new column, whether made in isolation or all together, will result in an ALTER statement to modify the target table in the target environment. Any table-level changes or node config changes result in recreation of the Stored Procedure.

The following stages are executed:

  • Rename Table| Alter Column | Delete Column | Add Column| Edit table description - Alter table statement is executed to perform the alter operation.
  • Create OR Replace Stored Procedure - This will create a Stored Procedure in the specified target environment

Undeployment

If the Parse Excel node is deleted from a Workspace, that Workspace is committed to Git, and that commit is deployed to a higher-level environment, then the target table in the target environment will be dropped.

  • Drop Table
  • Drop Procedure

Parse Json

The Coalesce Parse Json node parses a large JSON file containing a JSON array into the provided target location.

Parse Json Node Configuration

The Parse Json node type has three configuration groups:

Parse Json Node Properties

There are four configs within the Node Properties group.

| Property | Description |
|----------|-------------|
| Storage Location | Storage Location where the node's target object will be created |
| Node Type | Name of template used to create node objects |
| Description | A description of the node's purpose |
| Deploy Enabled | If TRUE, the node will be deployed or redeployed when changes are detected. If FALSE, the node will not be deployed, or will be dropped during redeployment. |

Parse JSON UDTF Procedure Options

| Setting | Description |
|---------|-------------|
| User Defined Table Function Location (Required) | Location in Snowflake where the UDTF will be loaded by the script |
| Note | The UDTF location and the Storage Location (i.e. the destination) must be in the same schema |

Parse JSON File Processing Options


| Setting | Description |
|---------|-------------|
| Source File Storage Location (Required) | Schema name in Snowflake where the source file is located (the source file has to be located under stages) |
| Source File Stage (Required) | Stage name in Snowflake where the source file is located |
| Source File Name (Required) | JSON filename to be parsed |
| JSON Array Name (Required) | Array name within the JSON file |

Initial Deployment

When deployed for the first time into an environment, the Parse Json node will execute the stages below:

  • Create OR Replace Table - This will execute a CREATE OR REPLACE statement and create a table in the target environment.
  • Create OR Replace Function - This will create a UDTF function in the target environment

Redeployment

Recreating the Parse Json

Column or table changes such as a change in table name, dropping an existing column, altering a column data type, or adding a new column, whether made in isolation or all together, will result in an ALTER statement to modify the target table in the target environment. Any table-level changes or node config changes result in recreation of the Stored Procedure.

The following stages are executed:

  • Rename Table| Alter Column | Delete Column | Add Column| Edit table description - Alter table statement is executed to perform the alter operation.
  • Create OR Replace Stored Procedure - This will create a Stored Procedure in the specified target environment

Undeployment

If the Parse Json node is deleted from a Workspace, that Workspace is committed to Git, and that commit is deployed to a higher-level environment, then the target table in the target environment will be dropped.

  • Drop Table
  • Drop Procedure

Code

InferSchema

CopyInto

Snowpipe

External Tables

Copy Unload

API

JDBC

Parse Excel

Parse Json

Versions

Available versions of the package.

| Version # | Release Date | Notes |
|-----------|--------------|-------|
| 1.1.8 | December 13, 2024 | InferSchema, Re-Sync Columns enabled, and Partition by clause added to External Table |
| 1.1.7 | December 06, 2024 | Re-Sync Columns config enabled and InferSchema function added to CopyInto, Snowpipe |
| 1.1.6 | November 26, 2024 | API node type added to update mapping grid after creating External Iceberg Tables or using Infer Schema |

Support

If you need help, please see our support section or contact us.