Refreshing an Environment

Refresh is responsible for running the data transformations defined in your data warehouse metadata. This typically involves DML (Data Manipulation Language) SQL statements such as MERGE, INSERT, UPDATE, and TRUNCATE, which perform the transformations on the actual data. Use refresh when you want to update your pipeline with any new changes from your data warehouse.


Deploy Before Refreshing

You can only refresh if you've deployed your pipeline.

Before You Begin

You'll need to get an authentication token. Review the steps in Connecting to the API.


Refresh jobs can be triggered with the Start Job (Trigger Job to Run) endpoint. Below is an example request.

curl --request POST \
     --url \
     --header 'Authorization: Bearer <YOUR-TOKEN>' \
     --header 'accept: application/json' \
     --header 'content-type: application/json' \
     --data '
{
  "runDetails": {
    "parallelism": 16,
    "environmentID": "4",
    "excludeNodesSelector": "{ location: SAMPLE name: CUSTOMER } OR { location: SAMPLE name: LINEITEM } OR { location: SAMPLE name: NATION } OR { location: SAMPLE name: ORDERS } OR { location: SAMPLE name: PART } OR { location: SAMPLE name: PARTSUPP } OR { location: SAMPLE name: REGION } OR { location: SAMPLE name: SUPPLIER } OR { location: QA name: STG_PARTSUPP } OR { location: PROD name: STG_PARTSUPP }",
    "jobID": "4"
  },
  "userCredentials": {
    "snowflakeAuthType": "Basic",
    "snowflakeRole": "ACCOUNTADMIN",
    "snowflakeWarehouse": "COMPUTE_WH",
    "snowflakeUsername": "SOMEUSER",
    "snowflakePassword": "SOMEPASS"
  }
}
'

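If you'd rather call the endpoint from code than from curl, the same request can be sketched in Python using only the standard library. This is a minimal sketch, not an official client: the Start Job endpoint URL and token are assumed to be supplied by you, and `build_request` is a hypothetical helper name; the payload fields mirror the curl example above.

```python
import json
import urllib.request

# The same request body as the curl example above
# (excludeNodesSelector omitted here for brevity).
payload = {
    "runDetails": {
        "parallelism": 16,
        "environmentID": "4",
        "jobID": "4",
    },
    "userCredentials": {
        "snowflakeAuthType": "Basic",
        "snowflakeRole": "ACCOUNTADMIN",
        "snowflakeWarehouse": "COMPUTE_WH",
        "snowflakeUsername": "SOMEUSER",
        "snowflakePassword": "SOMEPASS",
    },
}

def build_request(url: str, token: str) -> urllib.request.Request:
    """Build the POST request for the Start Job endpoint.

    The endpoint URL and bearer token must be supplied by the caller.
    """
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "accept": "application/json",
            "content-type": "application/json",
        },
        method="POST",
    )

# Sending the request is left to the caller, e.g.:
# with urllib.request.urlopen(build_request(START_JOB_URL, TOKEN)) as resp:
#     print(resp.status)
```

Keeping request construction separate from sending makes the payload easy to inspect or log before the job is actually triggered.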

Refresh jobs can also be triggered via our CLI tool coa using coa refresh. Learn more in our CLI Commands documentation.


To refresh only a subset of your data, you can create a Job and run it with each Refresh. Learn how to create Jobs.