Data Type Changes in Snowflake

Learn how to change column data types in Snowflake when those changes are unsupported by ALTER TABLE … ALTER COLUMN, and how to keep Coalesce metadata in sync afterward. You may see errors such as SQL compilation error: cannot change column COLUMN_NAME from type DATE to TIMESTAMP_NTZ(9) when deployment attempts an unsupported conversion. This workflow preserves column order and is suitable for Type 2 SCD tables and other cases where a simple drop-and-recreate would lose history or break dependencies.

When To Use This Workflow

Snowflake does not support certain column changes using ALTER TABLE … ALTER COLUMN. These include:

  • Changing a column data type to a different type (for example, DATE to TIMESTAMP_NTZ)
  • Adding or changing a column default (except when the default is a sequence)
  • Decreasing the length of a text or string column
  • Changing the scale of a number column
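
For example, attempting one of these conversions in place fails at compile time. A minimal illustration (table and column names are hypothetical):

```sql
-- Snowflake rejects an unsupported in-place conversion:
ALTER TABLE my_schema.sales_fact
  ALTER COLUMN order_date SET DATA TYPE TIMESTAMP_NTZ;
-- SQL compilation error:
-- cannot change column ORDER_DATE from type DATE to TIMESTAMP_NTZ(9)

-- By contrast, widening a text column is supported:
ALTER TABLE my_schema.sales_fact
  ALTER COLUMN notes SET DATA TYPE VARCHAR(500);
```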

When you change a Node, Coalesce attempts to CLONE the table, ALTER the copy, then SWAP the cloned table with the original and delete the original. This keeps the original data safe if an operation fails. The ALTER step fails for unsupported changes because Snowflake rejects the data type conversion. Advanced Deploy Nodes use direct ALTER (no clone or swap) and fail the same way. The recommended approach is to make the change off-platform in Snowflake, then sync Coalesce metadata.
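
The sequence Coalesce attempts can be sketched roughly as follows (illustrative names; the exact SQL Coalesce generates may differ):

```sql
CREATE TABLE my_schema.sales_fact_TMP CLONE my_schema.sales_fact;  -- zero-copy clone
ALTER TABLE my_schema.sales_fact_TMP
  ALTER COLUMN order_date SET DATA TYPE TIMESTAMP_NTZ;             -- fails here for unsupported changes
ALTER TABLE my_schema.sales_fact SWAP WITH my_schema.sales_fact_TMP;
DROP TABLE my_schema.sales_fact_TMP;
```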

Use this workflow when:

  • Column order matters: Type 2 SCD tables, dimension tables with strict ordering, or downstream consumers that depend on column position.
  • You must preserve history: A simple DROP and CREATE would destroy change-tracking history in SCD tables.
  • You need a repeatable, documented process: This workflow gives you a clear sequence you can follow and adapt.

For simpler cases where you don't need to retain data, you can drop the table in Snowflake, commit your Coalesce changes, and deploy to re-create the table with the new structure. The table will be empty until the next refresh. For cases where column order and history matter, this guide focuses on the copy-table migration pattern that preserves structure and data.

Prerequisites

Before you start, ensure you have:

  • Backup or Time Travel: Snowflake Time Travel can restore data if something goes wrong. Confirm your table has sufficient retention, or take a manual backup.
  • Downtime or maintenance window: The SWAP step is atomic, but data migration can take time for large tables. Plan accordingly.
  • Warehouse permissions: You need CREATE TABLE, ALTER TABLE, INSERT, and SELECT on the table and schema.
  • Coalesce access: You'll need to commit changes and deploy, and possibly clear Node metadata. Ensure you have the right Workspace and Environment permissions.
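
To confirm Time Travel retention or take an explicit backup before starting, something like the following can be used (names are illustrative):

```sql
-- Check the table's Time Travel retention (in days):
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN TABLE my_schema.sales_fact;

-- Or take a zero-copy backup as insurance:
CREATE TABLE my_schema.sales_fact_BACKUP CLONE my_schema.sales_fact;
```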
Advanced Deploy Nodes

If the table is managed by an Advanced Deploy Node, a failed deployment does not update Coalesce metadata. That can leave the database and Coalesce out of sync and block future deployments until metadata is corrected. This workflow includes steps to recover from that state.

Phase 1: Create a Copy Table With Modified DDL

Create a new table with the same structure as the original, but with the modified column data types. The copy table will hold the migrated data before you SWAP it with the original. Using Coalesce to generate the DDL ensures it matches what Coalesce expects and preserves column order.

  1. In your Coalesce Workspace, update the data type for the target columns in the Node definition.
  2. Run a Validate Create or Create operation to generate the DDL SQL.
  3. Copy the generated DDL into Snowflake. Modify the table name to [ORIGINAL_TABLE_NAME]_COPY and adjust the database or schema to match your target Environment's storage mapping.
  4. Execute the statement to create the copy table. Preserve column order exactly. It must match the original so that the SWAP and any downstream logic remain consistent.

Example: Changing amount from VARCHAR to NUMBER(18,2) in sales_fact:

-- DDL generated from Coalesce, table name changed to sales_fact_COPY
CREATE TABLE my_schema.sales_fact_COPY (
    id          NUMBER,
    customer_id NUMBER,
    amount      NUMBER(18,2),  -- changed from VARCHAR
    order_date  DATE
    -- ... other columns in the same order
);
Type 2 SCD Tables

For Type 2 SCD tables, include all effective-date, expiration, and current-flag columns in the same order. Preserving column order ensures that views, reports, and Coalesce Node definitions that reference column position continue to work.

Phase 2: Migrate Data Column by Column With Type Conversions

Copy data from the original table into the copy table. For columns whose type changed, use CAST or TRY_CAST to convert values. Handle nulls and invalid values as needed.

  1. Write an INSERT INTO … SELECT that maps each column from the original to the copy.

  2. For the changed columns, use the appropriate conversion (for example, TRY_CAST(amount AS NUMBER(18,2))).

  3. Run the insert. For large tables, consider batching or using a warehouse with sufficient resources.

    Example:

    INSERT INTO my_schema.sales_fact_COPY (
        id,
        customer_id,
        amount,
        order_date
    )
    SELECT
        id,
        customer_id,
        TRY_CAST(amount AS NUMBER(18,2)),  -- converts; invalid values become NULL
        order_date
    FROM my_schema.sales_fact;
  4. Validate row counts and spot-check data to ensure the conversion produced the expected results.
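
The validation in step 4 might look like this, continuing the sales_fact example:

```sql
-- Row counts should match between original and copy:
SELECT
    (SELECT COUNT(*) FROM my_schema.sales_fact)      AS original_rows,
    (SELECT COUNT(*) FROM my_schema.sales_fact_COPY) AS copied_rows;

-- Spot-check values whose conversion silently became NULL:
SELECT id, amount
FROM my_schema.sales_fact
WHERE amount IS NOT NULL
  AND TRY_CAST(amount AS NUMBER(18,2)) IS NULL;
```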

Invalid or Truncated Data

TRY_CAST returns NULL for values that cannot be converted. If you need different behavior (for example, default values or error handling), adjust the SELECT logic accordingly. For string length decreases, ensure no values exceed the new length or handle truncation explicitly.
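
Two common adjustments, sketched with hypothetical column names and lengths:

```sql
-- Substitute a default instead of NULL for unconvertible values:
SELECT COALESCE(TRY_CAST(amount AS NUMBER(18,2)), 0) AS amount
FROM my_schema.sales_fact;

-- Before decreasing a text column's length, find values that would not fit:
SELECT id, description
FROM my_schema.sales_fact
WHERE LENGTH(description) > 50;
```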

Phase 3: SWAP To Replace the Original Table

Use Snowflake’s ALTER TABLE … SWAP WITH to atomically replace the original table with the copy. The original table becomes the backup; the copy becomes the production table.

  1. Run the SWAP:

    ALTER TABLE my_schema.sales_fact SWAP WITH my_schema.sales_fact_COPY;
  2. After the SWAP, sales_fact has the new schema and data. sales_fact_COPY now holds the old table. Drop the copy table, or keep it briefly as a backup before dropping.

  3. Verify the production table has the correct schema and data.
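
A quick verification pass after the SWAP might look like this (assuming the sales_fact example):

```sql
DESC TABLE my_schema.sales_fact;            -- confirm the new column types
SELECT COUNT(*) FROM my_schema.sales_fact;  -- confirm the expected row count

-- Once satisfied, drop the old structure now held by the copy:
DROP TABLE my_schema.sales_fact_COPY;
```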

Downstream Dependencies

The SWAP is atomic and instant. Any views, streams, or tasks that reference the table will now see the new structure. Ensure your type conversions did not introduce nulls or values that break those dependencies.
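
To find objects that reference the table before swapping, the ACCOUNT_USAGE dependency view can help (note that ACCOUNT_USAGE views can lag the live account state by a few hours):

```sql
SELECT referencing_database, referencing_schema,
       referencing_object_name, referencing_object_domain
FROM snowflake.account_usage.object_dependencies
WHERE referenced_object_name = 'SALES_FACT';
```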

Phase 4: Sync Coalesce Metadata

Coalesce stores metadata about your tables from the last successful deployment. After you change the table off-platform, you must sync that metadata so Coalesce matches the actual warehouse state.

Option A: Commit and Deploy

Use this when:

  • You've made straightforward changes in Coalesce (column additions, renames, data type changes)
  • The changes were made through the Coalesce UI rather than directly in Snowflake
  • You expect Presync to handle the reconciliation automatically
  • This is your first attempt at deploying changes
  • The table structure in Snowflake should match what's defined in your Coalesce Nodes

Complete these steps:

  1. Return to Coalesce. The Node definition should already reflect the new column data types from Phase 1.
  2. Commit your changes.
  3. Deploy to the Environment. Presync will compare Coalesce metadata with the actual warehouse state and reconcile. If the table structure matches what you deployed, metadata updates and you're done.

Option B: Enable Column Resync on Affected Nodes

Use this when:

  • Coalesce metadata is out of sync with the actual Snowflake table structure
  • You see "metadata update" messages in deployment logs but no actual changes occur
  • You've made manual changes in Snowflake that Coalesce isn't detecting
  • Option A failed and you're getting deployment errors related to column mismatches
  • You want to pull in column descriptions or other metadata from Snowflake back into Coalesce

Column resync forces a restore of all metadata for that object and resolves inconsistencies during deployment. For setup details, see Enable Column Resync on Node Types.

  1. Open the affected Node in your Workspace.
  2. Navigate to the Node's configuration or properties.
  3. Add or enable the Is Column Resync Enabled property at the top of the Node configuration.
  4. Commit and deploy. Presync will reconcile metadata with the actual table state in Snowflake.

Option C: Disable and Re-enable Nodes for Deployment

Use this as a last resort when:

  • Options A and B have failed to resolve the issue
  • There are significant structural differences between Coalesce and Snowflake that can't be reconciled
  • You've changed Node types (for example, from table to dynamic table)
  • You're dealing with complex metadata corruption issues
  • You're willing to re-create the objects completely
Data Loss Risk

This approach drops and re-creates tables in Snowflake, which can result in data loss if not handled properly. Use carefully.

This approach drops the objects in Snowflake and removes them from Coalesce metadata, then re-creates them from the current Node definition.

  1. Right-click on the affected Nodes and disable them for deployment.
  2. Commit and deploy this change. This drops the objects in Snowflake and removes them from Coalesce metadata.
  3. Re-enable the Nodes for deployment.
  4. Commit and deploy again. This re-creates the objects based on the current Node definition.

Option D: Contact Coalesce Support

When the above options don't work, contact Coalesce Support with:

  1. Your Environment ID
  2. The specific Node IDs that need metadata cleared
  3. A description of the sync issue you are experiencing
What Metadata Clearing Affects

Clearing metadata only affects Coalesce's understanding of what was last deployed. It doesn't impact your Snowflake objects or Workspace definitions.

General recommendation: Start with Option A, escalate to Option B if you encounter metadata sync issues, and only use Option C when the other options fail. Option C should be used carefully as it involves dropping and recreating objects, which can result in data loss if not handled properly.

For details on how Presync works, see Understanding the Presync Process.

Recovery From Failed Deployments

When a deployment fails partway, Coalesce doesn't update Environment metadata. That is intentional: metadata is only updated after a successful deployment. The result can be drift between the database and Coalesce.

Detecting Metadata Drift

You may notice drift when:

  • A deployment fails with errors like "column already exists" or "object not found."
  • You made changes off-platform (for example, a data type change) and Coalesce still shows the old schema.
  • Advanced Deploy Nodes block future deployments because Coalesce expects a different table state than what exists in Snowflake.

Clearing Node Metadata So Presync Can Reconcile

To reconcile metadata, use one of the options in Phase 4: Sync Coalesce Metadata: enable column resync on the affected Nodes, disable and re-enable the Nodes for deployment, or contact Coalesce Support for backend metadata clearing.

After metadata is cleared or resynchronized, run a deployment. Presync compares Coalesce’s view with the actual warehouse state and updates metadata to match. The next deploy will only apply changes that are not already present, avoiding duplicate column adds or other conflicts.

When To Use Rollback versus Manual Fixes

  • Rollback: Use Rollback a Deployment when you want to revert data structures to a prior commit. Rollback redeploys the previous commit; it doesn't restore data. Use Snowflake Time Travel to restore data if needed.
  • Manual fixes: Use the workflow in this guide when you have already changed the table off-platform and need to sync Coalesce. Clearing Node metadata and redeploying lets Presync adopt the current state.
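
If data does need to be restored, a Time Travel clone is one option (offset is in seconds; names are illustrative):

```sql
-- Restore the table's state from one hour ago into a new table:
CREATE TABLE my_schema.sales_fact_RESTORED
  CLONE my_schema.sales_fact AT (OFFSET => -3600);
```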

Alternative: Manual Deletion When Data Retention Is Not Needed

If you don't need to retain the data in the table, you can use a simpler approach:

  1. In Coalesce, make the necessary data type changes in the Node mapping and commit.
  2. In Snowflake, drop the table. All data in that table will be deleted and lost.
  3. Deploy the commit from Coalesce. The deployment will create the table with the expected columns and data types. The table remains empty until the next refresh runs.
Data Loss

Dropping the table permanently deletes all data. Use this approach only when you can afford to lose the data or will repopulate from upstream sources.

How This Differs From Drop-and-Recreate

A simple DROP and CREATE is faster but has trade-offs:

| Approach | Column order | SCD history | Downtime | Complexity |
| --- | --- | --- | --- | --- |
| Copy + SWAP (this workflow) | Original order preserved; new columns added using ALTER go to end | Preserved | Minimal (SWAP is atomic) | Higher |
| DROP + CREATE | Lost (new table may have different order) | Lost | Full (table missing during re-creation) | Lower |

For Type 2 SCD tables, drop-and-recreate destroys the change-tracking history. For tables with strict column ordering requirements, the copy-table workflow preserves the original column order (new columns added using ALTER statements are appended to the end, regardless of their position in Coalesce).


What's Next?