Verified: 1M rows — 19.8 seconds — February 2026
Last Updated: February 2026

Excel Splitter Performance Benchmarks

One verified benchmark. Calculated projections for every other scenario. Hardware configuration, measurement protocol, and honest limitations — documented.

50,380/s — throughput (rows per second)
19.8 sec — 1M row test (verified benchmark)
4 — split modes (algorithmic breakdown)
Never upload — browser-only processing
How to Read This Page

One result is a verified measurement: the 1-million-row By Sheet XLSX test at 19.8 seconds (50,380 rows/sec). All other times shown on this page are projections calculated by dividing the target row count by 50,380, with overhead multipliers derived from the algorithmic complexity of each split mode and export format. They represent expected performance, not measured results. Your actual results will vary based on hardware, browser, file complexity, and available system memory. Always verify with your own files before committing to a time-sensitive workflow.
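The projection formula is simple enough to sketch. A minimal example in plain JavaScript (constant and function names here are illustrative, not SplitForge source):

```javascript
// Baseline from the one verified run: 1M rows in 19.8 s, By Sheet, XLSX.
const BASELINE_ROWS_PER_SEC = 50380;

// Projected time = rows / baseline, scaled by the split mode's
// algorithmic overhead multiplier (1.00x for By Sheet).
function projectedSeconds(rows, modeOverhead = 1.0) {
  return (rows / BASELINE_ROWS_PER_SEC) * modeOverhead;
}

console.log(projectedSeconds(100_000).toFixed(1));         // "2.0"
console.log(projectedSeconds(500_000).toFixed(1));         // "9.9"
console.log(projectedSeconds(1_000_000, 1.55).toFixed(1)); // "30.8" (By Column)
```

Every projected figure on this page is this calculation with a different row count and multiplier.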

Processing Time by Row Count

By Sheet mode, XLSX export. One entry is measured — three are projected. Bars marked with a star are calculated from the 50,380 rows/sec baseline.

[Bar chart: processing time in seconds (0–20) by row count — 100K, 250K, 500K, 1M]
Verified measurement (Feb 2026)
Projected from 50,380 rows/sec baseline

All projections assume XLSX output and By Sheet mode unless stated otherwise. Other modes are slower — see overhead table below.

100K rows — 2s (projected)
250K rows — 5s (projected)
500K rows — 9.9s (projected)
1M rows — 19.8s (measured)

Scalability by Split Mode

By Sheet is fastest. By Column Value is slowest — it must read all rows, group by unique value, then write each group. Overhead multipliers are algorithmic estimates, not measured results.

Row Count        By Sheet           By Max Rows         By File Size        By Column
Overhead factor  1.00x (baseline)   ~1.12x              ~1.22x              ~1.55x
100K rows        2s (projected)     2.2s (projected)    2.4s (projected)    3.1s (projected)
250K rows        5s (projected)     5.6s (projected)    6.1s (projected)    7.7s (projected)
500K rows        9.9s (projected)   11.1s (projected)   12.1s (projected)   15.4s (projected)
1M rows          19.8s (measured)   22.2s (projected)   24.2s (projected)   30.8s (projected)
All values except the 1M-row By Sheet result are projected calculations. Results vary by hardware, browser, file complexity, and available system memory. Verify with your own files before committing to time-sensitive workflows.

Split Mode: Algorithmic Overhead

Each split mode has different algorithmic complexity. These overhead ratios explain why By Column Value takes roughly 55% longer than By Sheet for the same row count.

By Sheet
1.00x — Baseline

Read entire sheet → write to one output file. Linear scan. No grouping or sampling required.

1. Load sheet into memory via sheet_to_json()
2. Write all rows to one output file
3. Repeat per selected sheet
By Max Rows
~1.12x — +12% overhead

Read sheet → split into sequential chunks at the target row count → write each chunk. One additional pass to calculate chunk boundaries.

1. Load sheet into memory via sheet_to_json()
2. Calculate chunk boundaries (rows / maxRows)
3. Slice array, write each chunk with header row prepended
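The chunking steps above can be sketched in a few lines (illustrative only; SplitForge's actual implementation may differ):

```javascript
// Split parsed rows into sequential chunks of at most maxRows,
// prepending the header row to each chunk.
function chunkByMaxRows(header, rows, maxRows) {
  const chunks = [];
  for (let i = 0; i < rows.length; i += maxRows) {
    chunks.push([header, ...rows.slice(i, i + maxRows)]);
  }
  return chunks;
}

// 2,500 data rows at 1,000 rows/file -> 3 files (1,000 + 1,000 + 500).
const rows = Array.from({ length: 2500 }, (_, i) => [i]);
const files = chunkByMaxRows(["id"], rows, 1000);
console.log(files.length);        // 3
console.log(files[2].length - 1); // 500 data rows in the last file
```

The single extra pass over the array is where the ~12% overhead estimate comes from.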
By File Size
~1.22x — +22% overhead

Sample a small chunk → estimate row density in bytes → calculate target row count → split sequentially. One extra export pass for the sample.

1. Load sheet into memory
2. Export 1,000-row sample, measure byte size
3. Calculate target rows per MB
4. Split sequentially at calculated boundary, write chunks
Output file sizes vary ±30% from target due to row density variation within the dataset.
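Under those assumptions, the density estimate reduces to a one-line calculation (sketch; the helper name is hypothetical):

```javascript
// Estimate how many rows fit in a target output size, given a measured
// sample: bytesPerRow = sampleBytes / sampleRows, then rows per target MB.
function rowsPerTargetMB(sampleBytes, sampleRows, targetMB) {
  const bytesPerRow = sampleBytes / sampleRows;
  return Math.max(1, Math.floor((targetMB * 1024 * 1024) / bytesPerRow));
}

// A 1,000-row sample that serializes to 150 KB => 150 bytes/row,
// so a 10 MB target fits roughly 69,905 rows per output file.
console.log(rowsPerTargetMB(150_000, 1000, 10)); // 69905
```

Because the sample's density is extrapolated to the whole sheet, rows that are denser or sparser than the sample produce the ±30% size variance noted above.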
By Column Value
~1.55x — +55% overhead

Read sheet → scan entire column for unique values → group all rows by value → write one file per group. Three passes: read, group, write.

1. Load entire sheet into memory via sheet_to_json()
2. Scan target column, build map of unique values
3. Iterate rows, push each row into its value-keyed group
4. Write one output file per group — 2,000-file cap
High-cardinality columns (many unique values) produce many small files. Hard cap: 2,000 output files.
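A grouping sketch with the cap applied (assumed behavior inferred from the description above, not SplitForge source):

```javascript
const FILE_CAP = 2000; // hard cap on output files

// Group rows by the value in `column`; once the cap is reached,
// rows belonging to any new value are dropped.
function groupByColumn(rows, column) {
  const groups = new Map();
  for (const row of rows) {
    const key = row[column];
    if (!groups.has(key)) {
      if (groups.size >= FILE_CAP) continue; // cap reached: drop new groups
      groups.set(key, []);
    }
    groups.get(key).push(row);
  }
  return groups;
}

const sample = [
  { region: "East", amt: 10 },
  { region: "West", amt: 20 },
  { region: "East", amt: 30 },
];
console.log(groupByColumn(sample, "region").size); // 2 output files
```

The Map build and the per-row group pushes are the second and third passes that drive the ~55% overhead estimate.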

Export Format Relative Performance

Relative to XLSX (1.0x baseline) for By Sheet mode at the verified 50,380 rows/sec. CSV and TSV are faster because text serialization is simpler than XLSX binary format. JSON and JSONL are slower due to object construction per row. These are algorithmic estimates, not independently measured.

CSV — 1.30x (faster): simpler text serialization vs XLSX binary format
XLSX — 1.00x (baseline): binary format with cell metadata
TSV — 1.30x (faster): same as CSV — tab delimiter vs comma, same serialization path
JSONL — 0.55x (slower): builds one JSON object per row before writing
JSON — 0.45x (slowest): builds full array in memory before serializing
These ratios are estimated from algorithmic complexity, not from independent measurements. Actual ratios depend on cell data types, compression, and available CPU cache.
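Combining the format ratio with the row-count projection gives a rough full-model estimate (illustrative sketch; the ratios are the estimates above, not measurements):

```javascript
// Relative speed vs XLSX: values > 1 are faster, < 1 are slower.
const FORMAT_SPEED = { csv: 1.3, tsv: 1.3, xlsx: 1.0, jsonl: 0.55, json: 0.45 };

// Projected seconds = (rows / baseline) * mode overhead / format speed.
function projectedWithFormat(rows, format, modeOverhead = 1.0) {
  return ((rows / 50380) * modeOverhead) / FORMAT_SPEED[format];
}

console.log(projectedWithFormat(1_000_000, "csv").toFixed(1));  // "15.3"
console.log(projectedWithFormat(1_000_000, "json").toFixed(1)); // "44.1"
```

Dividing by the speed ratio (rather than multiplying) keeps faster formats below the XLSX baseline and slower formats above it.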

Hardware and Memory Impact

Performance degrades predictably with lower-spec hardware. These estimates are based on typical CPU and memory performance differentials — not measured.

High-End Desktop — Intel i7-12700K+, 32GB+ RAM, Chrome stable — 1.0x (benchmark conditions). 1M rows: ~19.8 sec. Memory pressure rare below 1.5GB files.
Mid-Range Laptop — Intel i5 / Ryzen 5, 16GB RAM, Chrome stable — ~1.8–2.5x slower. 1M rows: ~35–50 sec estimated. Files over 400MB may cause memory pressure.
Budget / Older Machine — Core i3 / older CPU, 8GB RAM, any browser — ~3–5x slower. 1M rows: ~60–100 sec estimated. Files over 200MB may hit memory limits.

Calculate Your Time Savings

Manual baseline: approximately 15 minutes per sheet via Excel copy/paste — open file, select sheet, copy all rows, paste into new workbook, rename, save. SplitForge processes an entire workbook in approximately 20 seconds regardless of sheet count.

Calculator inputs: sheets per report (monthly reports avg 8–15) · reports per month (weekly cadence = 4, daily = 22) · minutes per sheet (default 15 — adjust to match your workflow) · hourly rate (data analyst avg $45–75/hr)

Manual Time / Month — 8.0 hours
Hours Saved / Year — 96 hours
Labor Saved / Year — $5,265 (at $55/hr)
Start reclaiming 96 hours/year — free, no account required.
Assumes By Sheet mode, XLSX export. Results vary by hardware and file complexity.
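The calculator's default figures can be reproduced with simple arithmetic (sketch; the exact rounding SplitForge applies is an assumption):

```javascript
// Defaults: 8 sheets/report, 4 reports/month (weekly), 15 min/sheet, $55/hr.
// SplitForge time: ~20 s per workbook regardless of sheet count.
function yearlySavings({ sheets = 8, reportsPerMonth = 4, minPerSheet = 15, rate = 55 } = {}) {
  const manualHoursPerMonth = (sheets * reportsPerMonth * minPerSheet) / 60; // 8.0
  const toolHoursPerYear = (reportsPerMonth * 20 * 12) / 3600;               // ~0.27
  const hoursSaved = manualHoursPerMonth * 12 - toolHoursPerYear;            // ~95.7
  return { hoursSaved, laborSaved: hoursSaved * rate };
}

const s = yearlySavings();
console.log(Math.round(s.hoursSaved)); // 96
console.log(Math.round(s.laborSaved)); // 5265
```

Note that the $5,265 figure nets out the ~16 minutes per year of SplitForge processing time, which is why it is slightly below a flat 96 × $55.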

Honest Limitations: Where SplitForge Excel Splitter Falls Short

No tool is perfect for every use case. Here's where Python openpyxl / pandas / AWS Glue might be a better choice, and the real limitations of our browser-based architecture.

Browser-Based Processing

Performance depends on your device's RAM and CPU. Modern laptops (2022+) comfortably handle sheets around the 1M-row mark, but older devices may struggle with very large files.

Workaround:
Close unnecessary browser tabs to free up memory. For sheets over 2M rows, consider scripted or database solutions.

No Offline Mode (Initial Load)

Requires internet connection to load the tool initially. Processing happens offline in your browser after loading.

Workaround:
Once loaded, you can disconnect and continue processing. For true offline environments, desktop tools may be better.

Browser Tab Memory Limits

Most browsers limit individual tabs to 2–4 GB of RAM. This is the practical ceiling for file size.

Workaround:
Use 64-bit browsers with sufficient RAM. Chrome and Firefox handle large files best.

Single-Threaded Processing — One CPU Core

The Web Worker runs on one thread. Modern i7/i9 processors with 12–24 cores provide no parallelism advantage for a single split operation. Only one core is used.

Workaround:
For parallel processing across multiple files, open multiple browser tabs and run one split per tab. Not ideal but functional for batches under 10 files.

No True Streaming — Full Sheet in Memory

SheetJS sheet_to_json() loads the entire sheet into a JavaScript array before any processing begins. A 1M-row sheet requires ~200–400MB of browser heap depending on cell data types.

Workaround:
Sheets up to approximately 1M rows are stable. Above 1.5M rows, test on your hardware first. For sheets over 2M rows, use Python openpyxl with iter_rows() for true streaming reads.

By Column Value: 2,000-File Hard Cap

Column value splitting stops at 2,000 output files regardless of unique value count. Groups beyond the cap are silently dropped.

Workaround:
Split by a grouped/parent column (region instead of customer ID) to reduce unique value count. For high-cardinality splits, use pandas groupby() which has no output file limit.

File Size Estimates Vary ±30%

By File Size mode samples 1,000 rows to estimate byte density per row, then targets a specific MB output. Density varies across rows — actual output files may be 70%–130% of the target size.

Workaround:
Use By Max Rows instead when you need precise file boundaries. The file size estimate is a convenience feature, not a precision tool.

When to Use Python openpyxl / pandas / AWS Glue Instead

Sheets regularly exceed 1M rows or files over 1GB

Browser memory makes this unreliable. Performance degrades non-linearly above 1.5M rows per sheet.

💡 Python openpyxl with iter_rows() for streaming reads. PySpark for distributed workloads.

Automated or scheduled Excel splitting in a pipeline

SplitForge has no API, CLI, or webhook. Cannot run headlessly.

💡 Python pandas + openpyxl with cron scheduling, or AWS Glue for cloud-native ETL.

Splitting 50+ files in a single session

SplitForge processes one workbook at a time. High-volume batch work is tedious.

💡 Python glob() + openpyxl loop processes 50+ files in a single script run with full progress logging.

Questions about limitations? Check our FAQ section below or contact us via the feedback button.

Performance FAQ

What hardware was used in the benchmark?

What does "calculated" mean on this page?

Why does my file take longer than the benchmark shows?

Does the tool use multiple CPU cores?

How does memory usage scale with file size?

What is the fastest split mode for large files?

When should I use Python instead for performance?

Can I estimate time for my file before running the split?

Ready to Put the Benchmark to Work?

Upload your workbook and see actual processing time for your files on your hardware. No install. No account. No uploads to any server.

File contents never leave your device
50,380 rows/sec — verified February 2026
Four split modes, five export formats
Files up to 2GB accepted

Related: Excel Row Limit · Excel File Too Large · CSV Splitter · Excel Cleaner