
Snowflake CSV Import Errors: Fix COPY INTO and Data Type Failures

March 21, 2026
13 min read
By SplitForge Team

Quick Answer

Snowflake CSV imports fail for five predictable reasons: COPY INTO syntax errors (wrong stage path, missing file format options), data type mismatches between CSV values and table column definitions, column count mismatches where the CSV has more or fewer columns than the target table, NULL handling conflicts where empty fields violate NOT NULL constraints, and encoding mismatches between the file's character set and Snowflake's expected encoding.

Why it happens: Snowflake's COPY INTO command is strict by default. Unlike CRM import wizards that auto-detect formats, COPY INTO requires explicit file format specification. Any deviation between the CSV structure and the stated format options produces an error.

The fix: Validate your CSV structure against the target table schema before running COPY INTO. Catch data type mismatches, NULL violations, and column count errors before they abort the load.

Root cause: Data type mismatches are among the most common Snowflake CSV failures. A column defined as INTEGER that receives $1,250.00 (currency format) fails the entire COPY INTO batch by default.


Fast Fix (90 Seconds)

If your COPY INTO just failed, try this first:

  1. Read the exact error. Snowflake's error output includes the file name, row number, column position, and the specific value that failed. Start there.
  2. Run COPY INTO ... ON_ERROR = 'CONTINUE'. This loads all valid rows and records every failure in the load metadata, letting you see the full error surface rather than stopping at the first failure.
  3. Check column count. SELECT COUNT(*) FROM information_schema.columns WHERE table_name = 'YOUR_TABLE' gives your table's column count. Compare it to the CSV's header row (see the query sketch after this list).
  4. Check data types. The failing column and value are in the error output. Confirm the value's format matches the column type (no currency symbols in numeric columns, correct date format for DATE columns).
  5. Validate before reimporting. Run Data Validator on the CSV to catch all type and format errors before the next COPY INTO attempt.
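
To compare the table schema against the CSV header in one pass, list the table's columns in position order. A minimal sketch, assuming the table lives in the current database and schema ('YOUR_TABLE' is a placeholder; Snowflake stores unquoted identifiers in uppercase):

-- Column positions, names, and types to line up against the CSV header row
SELECT ordinal_position, column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_name = 'YOUR_TABLE'
ORDER BY ordinal_position;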

TL;DR: Snowflake's COPY INTO stops at the first error by default; use ON_ERROR = 'CONTINUE' to expose all failures at once. Most errors trace to data type mismatches, NULL violations, or a wrong column count. Validate your CSV structure against the table schema before loading. For the column-count-specific guide, see Snowflake CSV Columns Do Not Match.


You've built a pipeline that loads a partner CRM export into Snowflake nightly. Tonight's load fails. The error output says: Numeric value '$1,250.00' is not recognized. You fix that row. Next run fails on a different row: Timestamp '2026-04-15T14:30:00.000Z' does not match format 'YYYY-MM-DD'. You fix that. The pattern continues: each run surfaces one new error.

The real problem isn't individual bad values. It's that the CRM changed its export format, and now a month of data has currency symbols in numeric fields and ISO 8601 timestamps where your pipeline expects date-only strings. COPY INTO's default ON_ERROR = 'ABORT_STATEMENT' hides the full scope until you switch to ON_ERROR = 'CONTINUE'.

Here's what a real Snowflake load error output looks like; this is the level of precision available before you've fixed anything:

❌ ACTUAL LOAD ERROR OUTPUT (from COPY_HISTORY / LOAD_HISTORY):

File: @my_stage/export_2026_04.csv
Row: 47, Column: DEAL_VALUE
Error: Numeric value '$1,250.00' is not recognized
Value: $1,250.00

Row: 312, Column: CREATED_AT
Error: Timestamp '2026-04-15T14:30:00.000Z' does not match format 'YYYY-MM-DD'
Value: 2026-04-15T14:30:00.000Z

Row: 891, Column: CONTACT_NAME
Error: String 'Bartholomew Worthington-Henderson III' is too long for column (max 30 chars)
Value: Bartholomew Worthington-Henderson III

FIXED:
DEAL_VALUE:  1250.00         (strip $ and comma)
CREATED_AT:  2026-04-15      (strip time component)
CONTACT_NAME: truncate to 30 chars or increase column VARCHAR length

Snowflake pipeline files often contain business data (transaction records, customer exports, deal histories) that constitutes personal data under GDPR Article 4(1). Most cloud-based CSV validation tools upload the file to a remote server to check it. For pipeline files containing partner data, financial records, or customer information, that upload creates a GDPR Article 5(1)(c) data minimization exposure and may trigger Article 28 data processor obligations if the validation vendor processes on your behalf. SplitForge's Data Validator processes file structure and type validation entirely in Web Worker threads in your browser; the file never reaches any external server. Verify with the Chrome DevTools Network tab during validation: zero outbound requests.

Error patterns in this guide were verified against Snowflake's COPY INTO documentation and tested against a Snowflake trial instance, March 2026. For the complete CRM import failure taxonomy, see our CRM import failures complete guide. For column count mismatch errors specifically, see Snowflake CSV Columns Do Not Match. Teams dealing with similar type-validation failures in PostgreSQL or MySQL will find PostgreSQL COPY CSV Error Fix and MySQL CSV Import 1064 Syntax Error useful for comparison.


What Snowflake's COPY INTO Errors Actually Mean

Numeric value '[value]' is not recognized: the column is defined as NUMBER, INTEGER, or FLOAT, but the CSV value contains non-numeric characters such as currency symbols, comma thousands separators, or text.

Timestamp '[value]' is not recognized: the timestamp or date value doesn't match the format specified in the FILE FORMAT or COPY INTO options. Snowflake is strict: 2026-04-15T14:30:00Z and 2026-04-15 14:30:00 are different strings.

NULL result in a non-nullable column: an empty field in the CSV mapped to a column defined as NOT NULL. Snowflake treats empty CSV fields as NULL by default.

Number of columns in file [N] does not match that of the corresponding table [M]: the CSV has a different column count than the target table. Even one extra or missing column causes this error.

String '[value]' is too long and would be truncated: a text value exceeds the VARCHAR length defined for that column.


Error                        | Root Cause                                       | Fix
Numeric value not recognized | Currency symbol, comma, or text in numeric field | Strip formatting before load
Timestamp not recognized     | Format doesn't match FILE FORMAT specification   | Set TIMESTAMP_FORMAT in FILE FORMAT
NULL in non-nullable column  | Empty field, column is NOT NULL                  | Set EMPTY_FIELD_AS_NULL = FALSE or provide default
Column count mismatch        | CSV has different columns than table             | Align CSV headers to table schema
String too long              | Value exceeds VARCHAR column length              | Increase column width or truncate values
Invalid UTF-8 character      | File encoding doesn't match expected             | Re-encode to UTF-8 before staging



Why Snowflake's COPY INTO Is Stricter Than CRM Importers

CRM import wizards are built for non-technical users: they auto-detect delimiters, infer data types, and skip bad rows. Snowflake's COPY INTO is built for data engineers: it assumes you've validated the file structure before loading, and it enforces the table schema exactly as defined.

This strictness has a purpose: Snowflake is designed for large-scale production data pipelines. Silently skipping bad rows would corrupt analytics results. Instead, it aborts or continues based on your ON_ERROR setting and logs every failure precisely.

❌ BROKEN: COPY INTO default behavior on first error:

COPY INTO transactions
FROM @my_stage/export_2026_04.csv
FILE_FORMAT = (TYPE = 'CSV' FIELD_OPTIONALLY_ENCLOSED_BY = '"');

Row 1: loads successfully
Row 2: loads successfully
Row 47: "Numeric value '$1,250.00' is not recognized" → ABORT_STATEMENT
Result: 0 rows loaded. Transaction rolled back. Rows 1–46 not committed.

FIXED: surface all errors before deciding:

COPY INTO transactions
FROM @my_stage/export_2026_04.csv
FILE_FORMAT = (TYPE = 'CSV' FIELD_OPTIONALLY_ENCLOSED_BY = '"')
ON_ERROR = 'CONTINUE';

Result: 1,847 rows loaded. 23 rows failed. Error details in LOAD_HISTORY.
Now you can see all 23 failure reasons at once.

Fix 1: Data Type Mismatches

The most common Snowflake import failure for CRM exports: the source system exports numbers with currency formatting (commas, dollar signs, parentheses for negatives) that violate Snowflake's strict numeric parsing.

❌ BROKEN: numeric column values that fail Snowflake type validation:

Table definition: deal_value NUMBER(15,2)

CSV values against Snowflake's strict numeric parser:
"$1,250.00"    → currency symbol + comma thousands separator → not recognized
"1,250.00"     → comma as thousands separator → not recognized
"(250.00)"     → parentheses for negative → not recognized
"1.25k"        → abbreviated notation → not recognized
"1250"         → plain integer in a decimal column → OK (Snowflake accepts) ✓
"1250.00"      → plain decimal → OK ✓

FIXED: strip formatting before staging:
1250.00
1250.00
-250.00
1250.00
1250.00
1250.00

For bulk stripping of currency formatting, Data Cleaner processes the column in your browser without uploading the file.
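
If you'd rather do the stripping inside Snowflake than pre-process the file, the cleanup can run during the load itself. A minimal sketch, assuming a two-column CSV on a stage named @my_stage (the stage name and column positions are placeholders); TRY_TO_NUMBER returns NULL for values that still don't parse, rather than aborting the load:

-- Strip $ and thousands separators, convert (x) negatives, cast during load
COPY INTO deals (id, deal_value)
FROM (
  SELECT
    $1,
    TRY_TO_NUMBER(
      REPLACE(REPLACE(REPLACE(REPLACE($2, '$', ''), ',', ''), '(', '-'), ')', ''),
      15, 2)
  FROM @my_stage/export_2026_04.csv
)
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);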


Fix 2: NULL Handling Conflicts

Snowflake by default treats empty CSV fields as NULL. When a column has a NOT NULL constraint, any empty field causes an error. The fix depends on whether the empty value represents a genuine NULL or a missing value that should receive a default.

❌ BROKEN: empty field in NOT NULL column:

Table: CREATE TABLE contacts (id INT NOT NULL, name VARCHAR NOT NULL, dept VARCHAR);

CSV row: 1001,,Engineering   ← name column is empty

Error: "NULL result in a non-nullable column [NAME]"

Options to fix:

Option A: set EMPTY_FIELD_AS_NULL = FALSE in the FILE FORMAT.
  Treats empty fields as empty strings '' instead of NULL.
  Works if your NOT NULL column accepts empty strings.

Option B: supply a fallback value during the load.
  Map the column through IFNULL in a COPY INTO SELECT (see the sketch
  after the FIXED example below). Note that a column DEFAULT won't help
  here: COPY INTO inserts the NULL explicitly, so the default never applies.

Option C: clean the source data.
  Replace empty values in the name column with a placeholder before staging.
  Most reliable for production pipelines.

FIXED: empty name replaced before staging:
1001,Unknown,Engineering
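
A minimal sketch of Option B, assuming the file sits on a stage named @my_stage (the stage path and the 'Unknown' placeholder are illustrative). IFNULL substitutes the fallback for the NULL that the empty field produces:

-- Map the NOT NULL column through IFNULL during the load
COPY INTO contacts (id, name, dept)
FROM (
  SELECT $1, IFNULL($2, 'Unknown'), $3
  FROM @my_stage/contacts.csv
)
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);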

Fix 3: Timestamp and Date Format Errors

Snowflake's date and timestamp parsing requires an exact format match unless the format is left on AUTO detection. Pipelines commonly pin an explicit format such as YYYY-MM-DD HH24:MI:SS.FF, and once pinned, any deviation, including ISO 8601 with a timezone suffix, fails.

❌ BROKEN: timestamp formats that fail a pinned TIMESTAMP_FORMAT:

Column: created_at TIMESTAMP_NTZ

CSV values that fail with TIMESTAMP_FORMAT pinned to 'YYYY-MM-DD HH24:MI:SS':
"2026-04-15T14:30:00Z"        → ISO 8601 with Z timezone → wrong format
"Apr 15, 2026 2:30 PM"        → natural language → wrong format
"15/04/2026 14:30"            → European format → wrong format
"1744899000"                  → Unix epoch seconds → wrong type

FIXED: specify the format in the FILE FORMAT:

For ISO 8601 with timezone (TIMESTAMP_FORMAT = 'AUTO' also recognizes standard ISO 8601):
COPY INTO ... FILE_FORMAT = (
  TYPE = 'CSV'
  TIMESTAMP_FORMAT = 'YYYY-MM-DD"T"HH24:MI:SS.FF9TZH:TZM'
  DATE_FORMAT = 'YYYY-MM-DD'
);

For Unix epoch:
Cast during the COPY INTO SELECT:
COPY INTO table (id, created_at)
FROM (SELECT $1, TO_TIMESTAMP($2::INT) FROM @stage/file.csv);
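
When a single column arrives with more than one format in the same file, one approach is to try each format in turn during the load. A sketch, assuming the two candidate formats are known in advance (TRY_TO_TIMESTAMP returns NULL on a non-matching format instead of erroring, so COALESCE falls through to the next candidate):

COPY INTO events (id, created_at)
FROM (
  SELECT
    $1,
    COALESCE(
      TRY_TO_TIMESTAMP($2, 'YYYY-MM-DD HH24:MI:SS'),
      TRY_TO_TIMESTAMP($2, 'DD/MM/YYYY HH24:MI')
    )
  FROM @my_stage/events.csv
)
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);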

Fix 4: Encoding Issues

Snowflake's COPY INTO expects UTF-8 by default. Files encoded in Windows-1252 (common in Excel exports) or Latin-1 cause character parsing failures on any non-ASCII characters: accented names, special symbols, currency characters outside the ASCII range.

The failure is often invisible until you look at the raw bytes. Here's what's actually happening:

❌ BROKEN: Windows-1252 bytes interpreted as UTF-8:

Character: í  (i with acute accent, as in "García")
Windows-1252 encoding: ED (single byte 0xED)
UTF-8 encoding:        C3 AD (two bytes: 0xC3 0xAD)

When Snowflake reads Windows-1252 byte 0xED as UTF-8:
0xED is the lead byte of a 3-byte UTF-8 sequence.
The bytes that follow are not valid continuation bytes.
Result: "Invalid UTF-8 character sequence found in file"

Raw byte comparison:
Windows-1252 "García": 47 61 72 63 ED 61
                                   ↑ not a valid UTF-8 sequence in this position

UTF-8 "García":        47 61 72 63 C3 AD 61
                                   ↑ C3 AD is the correct 2-byte UTF-8 sequence for í

FIXED: two options:

Option A: Re-encode the file to UTF-8 before staging.
  In your ETL pipeline, add an encoding conversion step.
  Linux: iconv -f windows-1252 -t utf-8 input.csv > output_utf8.csv
  Verify: file output_utf8.csv → "CSV text, with UTF-8 characters"

Option B: Specify the encoding in the FILE FORMAT (Snowflake supports several encodings):
  COPY INTO ... FILE_FORMAT = (TYPE = 'CSV' ENCODING = 'WINDOWS1252');
  Not all Windows-1252 variants are fully supported; test before production use.
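
To check for encoding problems before committing anything, COPY INTO has a dry-run mode that reports errors without loading a single row. A sketch, assuming a stage named @my_stage:

-- Dry run: report errors (including invalid UTF-8), load nothing
COPY INTO contacts
FROM @my_stage/export.csv
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
VALIDATION_MODE = RETURN_ERRORS;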

Fix 5: FILE FORMAT Specification Errors

COPY INTO requires explicit file format options. Mismatched options β€” wrong delimiter, wrong quote character, wrong escape character β€” cause rows to parse incorrectly even when the data itself is valid.

❌ BROKEN: FILE FORMAT options that don't match the CSV structure:

CSV file uses semicolons as delimiters (European format):
field1;field2;field3

COPY INTO command specifies comma delimiter (default):
COPY INTO ... FILE_FORMAT = (TYPE = 'CSV');

Result: Snowflake parses the entire row as a single field. With the default
ERROR_ON_COLUMN_COUNT_MISMATCH = TRUE you get a column count error; with it
set to FALSE, one giant VARCHAR value loads silently into the first column.

FIXED: match the FILE FORMAT to the actual CSV structure:

COPY INTO ... FILE_FORMAT = (
  TYPE = 'CSV'
  FIELD_DELIMITER = ';'
  FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  NULL_IF = ('NULL', 'null', '')
  EMPTY_FIELD_AS_NULL = TRUE
  SKIP_HEADER = 1
);
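
For a recurring pipeline, it's cleaner to define the format once as a named object and reference it from every COPY INTO, so a delimiter fix lands in one place. A sketch; the name my_semicolon_csv is a placeholder:

CREATE OR REPLACE FILE FORMAT my_semicolon_csv
  TYPE = 'CSV'
  FIELD_DELIMITER = ';'
  FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  NULL_IF = ('NULL', 'null', '')
  SKIP_HEADER = 1;

COPY INTO transactions
FROM @my_stage/export_2026_04.csv
FILE_FORMAT = (FORMAT_NAME = 'my_semicolon_csv');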

Using ON_ERROR to Surface All Failures at Once

The most productive first step for any failing COPY INTO is switching to ON_ERROR = 'CONTINUE' and reading the full error output before fixing anything.

-- Step 1: Run with CONTINUE to get full error surface
COPY INTO my_table
FROM @my_stage/file.csv
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
ON_ERROR = 'CONTINUE';

-- Step 2: Query the load history to see all errors
SELECT *
FROM TABLE(information_schema.copy_history(
  table_name => 'MY_TABLE',
  start_time => DATEADD(hours, -1, CURRENT_TIMESTAMP())
))
ORDER BY last_load_time DESC;

-- Step 3: Fix all error types identified, then reload
-- (Optionally truncate the table first if CONTINUE loaded partial data)
TRUNCATE TABLE my_table;
-- Re-run COPY INTO after fixes
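
The rejected rows from a CONTINUE run are themselves queryable. A sketch using Snowflake's VALIDATE table function; '_last' refers to the most recent COPY INTO executed in the current session:

-- Step 4 (optional): inspect the exact rows that were rejected
SELECT *
FROM TABLE(VALIDATE(my_table, JOB_ID => '_last'));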

Real Pipeline Failure: CRM Export Format Changed Mid-Month

This is the failure mode that breaks production pipelines: not a one-time import, but a nightly load that worked for 6 months and then abruptly started failing after an unannounced upstream change.

❌ BROKEN: nightly Salesforce → Snowflake pipeline, failure after upstream format change:

Week 1–24: Pipeline runs cleanly. 0 errors. Load succeeds nightly.

Week 25: Salesforce admin enables "Currency Symbol in Export" setting.
         Pipeline runs. ON_ERROR = 'ABORT_STATEMENT'. 0 rows loaded.
         Alert fires. You investigate.

COPY_HISTORY output:
  file_name:     salesforce_export_2026_04_15.csv
  rows_parsed:   12,847
  rows_loaded:   0
  errors_seen:   12,847
  first_error:   "Numeric value '$47,250.00' is not recognized"
  first_error_row: 2
  first_error_column_name: ANNUAL_REVENUE

Every single row failed. The format change added currency symbols
to every numeric field in the export.

FIXED:
Option A: Pre-process the CSV before staging (strip the currency formatting).
  Careful: a blanket sed 's/\$//g; s/,//g' also deletes every comma that is
  a field delimiter. Restrict the cleanup to the currency column, or clean
  in SQL as in Option B.

Option B: Cast during COPY INTO using SELECT:
  COPY INTO deals (id, annual_revenue)
  FROM (SELECT $1, REPLACE(REPLACE($6, '$', ''), ',', '')::FLOAT
        FROM @stage/salesforce_export.csv);

Root cause: upstream format change, caught only because COPY_HISTORY was monitored.
Prevention: add format validation step before staging in your pipeline.

Using ON_ERROR = 'SKIP_FILE' instead of CONTINUE skips the entire file on the first error, useful for staging pipelines where partial loads are worse than no load.
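
ON_ERROR also accepts a threshold variant, which fits pipelines where a handful of bad rows is tolerable but a systemic format change is not. A sketch; the threshold of 10 is illustrative:

-- Skip the file only when 10 or more rows fail; otherwise load what's valid
COPY INTO transactions
FROM @my_stage/export.csv
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
ON_ERROR = 'SKIP_FILE_10';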


Additional Resources


Tested: Snowflake COPY INTO error patterns verified against Snowflake trial instance and official documentation, March 2026.

PLATFORM SPECIFICATION SOURCE
Platform: Snowflake Data Platform
Source: Snowflake Documentation, COPY INTO Table Command
URL: https://docs.snowflake.com/en/sql-reference/sql/copy-into-table
Verified: March 2026
Next re-verify: June 2026

COPY INTO behavior and FILE FORMAT options reflect Snowflake's current
release at the verification date. Snowflake releases weekly; verify
against current documentation for new format options or changed defaults.

FAQ

Why does a failed COPY INTO load zero rows?

Snowflake's default ON_ERROR setting is ABORT_STATEMENT, which rolls back the entire load on the first error. Switch to ON_ERROR = 'CONTINUE' to load valid rows and log errors, or ON_ERROR = 'SKIP_FILE' to skip files with errors. After identifying all error types with CONTINUE, fix the source data and reload with the default abort setting for production.

How do I fix "Numeric value is not recognized"?

Strip the currency formatting before staging. Remove $, €, £, comma thousands separators, and parenthetical negatives from the column. Snowflake's NUMBER type only accepts plain decimal notation: 1250.00, not $1,250.00. For bulk stripping, use a SQL REPLACE in a transformation step or pre-process with Data Cleaner locally.

Can COPY INTO handle multiple timestamp formats in one file?

Not directly. COPY INTO uses a single TIMESTAMP_FORMAT per file format. If your CSV has multiple timestamp formats across different columns, the cleanest solution is to load the timestamp columns as VARCHAR first, then use TO_TIMESTAMP() with explicit format masks during a subsequent transformation step.

What if the CSV has more columns than the target table?

COPY INTO requires column counts to match unless you use a SELECT statement inside the COPY INTO to explicitly map columns. For extra CSV columns: COPY INTO table (col1, col2) FROM (SELECT $1, $3 FROM @stage/file.csv) loads only columns 1 and 3 from the CSV, ignoring column 2.

How is this guide different from the column-mismatch guide?

The Snowflake CSV Columns Do Not Match guide covers the specific "number of columns does not match" error. This guide covers the broader COPY INTO error taxonomy: data type failures, NULL handling, timestamp format mismatches, encoding issues, and FILE FORMAT specification errors, none of which have anything to do with column count.


Validate Your CSV Before Staging to Snowflake

Detect data type mismatches, NULL values in required fields, and encoding issues before COPY INTO
Catch formatting characters in numeric columns that will abort the load
Files validate entirely in your browser; pipeline data is never transmitted to any server
Fix all errors in one pass instead of one-error-per-run cycles
