Verified Benchmark · Last tested: Feb 2026 · i7-12700K · Chrome 132 · Re-tested Quarterly

10 Million CSV Rows Pivoted
in ~18 Seconds. In Your Browser.

Detailed throughput numbers, aggregation overhead breakdown, memory model explanation, and honest limitations — so you know exactly what to expect before you process your first file. Engine: PapaParse streaming parser + Web Worker incremental aggregation.

Pivot throughput: 548K rows/sec
Unpivot throughput: 680K rows/sec
10M row pivot: ~18s total
File transmissions: 0 uploads

Benchmark Results by Row Count

Pivot vs Unpivot vs Excel, measured at 100K through 10M rows. Excel hits its hard cap at 1,048,576 rows — those cells are empty by design.

Pivot (SUM + COUNT + AVG, 3 group-by cols)
Unpivot (48 value cols → rows)
Excel — hard cap at 1,048,576 rows (off scale)
[Chart: pivot and unpivot times from 100K to 10M rows on a 0–20s axis; Excel bars at 100K and 500K truncated at ~15 min; Excel shown as N/A at 1M rows and above.]
Excel note: ~15 min reflects full workflow time (insert PT, configure fields, set aggregations, format, export) — not compute time, which is near-instant. Hard row cap: 1,048,576 rows in all Excel versions — files above this size cannot be loaded. Excel bars at 100K and 500K are truncated (actual ≈900s = ~15 min) to keep SplitForge detail visible.

Full Scalability Data

Row Count | Pivot Time | Unpivot Time | Pivot Throughput | Unpivot Throughput | Notes
100K rows | 0.18s | 0.15s | 548K/sec | 667K/sec | Baseline — fast enough for interactive use
500K rows | 0.91s | 0.74s | 549K/sec | 676K/sec | Well under 1 second for both modes
1M rows | 1.82s | 1.47s | 549K/sec | 680K/sec | Excel hard cap reached; SplitForge at peak throughput
2M rows | 3.65s | 2.94s | 548K/sec | 680K/sec | pandas MemoryError risk on <16GB RAM machines
5M rows | 9.12s | 7.35s | 548K/sec | 680K/sec | Linear scaling confirmed at 5M
10M rows | 18.2s | 14.7s | 548K/sec | 680K/sec | Upper practical limit for typical 40–60 column business CSVs; actual ceiling varies by column count, data types, and group cardinality; very high-cardinality pivots may hit memory limits earlier

Test config: Chrome 132, Windows 11, Intel i7-12700K (3.6GHz, 12-core), 32GB DDR4-3200 RAM, February 2026. Pivot: 3 group-by columns, SUM + COUNT + AVG, CSV output. Unpivot: 2 ID columns, 48 value columns, CSV output. 10 runs per row count, drop highest and lowest, average remaining 8. Results vary by hardware, browser, aggregation complexity, and file structure (±15–20%).
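The averaging rule above (10 runs, drop the highest and lowest, average the remaining 8) is a simple trimmed mean. A minimal sketch of that methodology — illustrative only, not SplitForge's actual benchmark harness; the timing values are hypothetical:

```javascript
// Trimmed mean: discard the single highest and lowest run, average the rest.
// Mirrors the stated methodology (10 runs -> average of the middle 8).
function trimmedMean(runs) {
  if (runs.length < 3) throw new Error("need at least 3 runs");
  const sorted = [...runs].sort((a, b) => a - b);
  const kept = sorted.slice(1, -1); // drop one min, one max
  return kept.reduce((sum, t) => sum + t, 0) / kept.length;
}

// Example: ten hypothetical 10M-row pivot timings, in seconds.
const runs = [18.9, 18.1, 18.3, 18.0, 18.2, 18.4, 17.6, 18.2, 18.3, 18.1];
console.log(trimmedMean(runs).toFixed(2)); // "18.20"
```

Dropping the extremes keeps one-off outliers (a background process, a cold JIT) from skewing the reported number.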

Aggregation Overhead Breakdown

Not all aggregations cost the same. Here's exactly what each one does to throughput and memory — so you can make informed choices.

Sum / Count / Average

548K rows/sec (baseline)
Memory: O(1) per group — running sum, count, mean
Running state: sum, count. Mean derived at finalize. Constant memory regardless of value distribution.

Min / Max

548K rows/sec (baseline: 548K)
Memory: O(1) per group — compare and replace
Single comparison per row per group. No state accumulation. Zero overhead over baseline.

StdDev / Variance

~480K–493K rows/sec (+10–15% overhead vs 548K baseline)
Memory: O(1) per group — Welford's online algorithm
Welford's algorithm (Welford, 1962 — en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Welford's_online_algorithm): two extra arithmetic ops per row (delta, M2 update). No buffering, no sorting. O(1) memory per group — safe at any cardinality. +10–15% overhead vs baseline.
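Welford's update fits in a few lines. A sketch of the technique the page describes — hypothetical names, not the worker's actual code:

```javascript
// Welford's online variance: O(1) state per group (count, mean, M2).
// Two extra arithmetic steps per row vs a plain SUM/COUNT accumulator.
function makeVarianceAccumulator() {
  let count = 0, mean = 0, m2 = 0;
  return {
    add(x) {
      count += 1;
      const delta = x - mean;    // distance from the running mean
      mean += delta / count;
      m2 += delta * (x - mean);  // uses the *updated* mean
    },
    // Population variance; derived only at finalize, like the worker's Phase 2.
    finalize: () => ({ mean, variance: count > 0 ? m2 / count : NaN }),
  };
}

const acc = makeVarianceAccumulator();
[2, 4, 4, 4, 5, 5, 7, 9].forEach((v) => acc.add(v));
const { mean, variance } = acc.finalize(); // mean = 5, variance = 4
```

Because no values are buffered, memory stays constant no matter how many rows land in a group — which is why StdDev is safe at any cardinality.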

Weighted Average

~521K rows/sec (+5% overhead vs 548K baseline)
Memory: O(1) per group — running weighted sum + weight sum
Two running sums: ∑(value × weight) and ∑(weight). Final result: weighted_sum / weight_sum. One extra multiplication per row. Minimal overhead (+5%).
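The two running sums look like this in practice — a minimal sketch, not the production accumulator:

```javascript
// Weighted average: two running sums, result derived only at finalize.
function makeWeightedAvg() {
  let weightedSum = 0, weightSum = 0;
  return {
    add(value, weight) {
      weightedSum += value * weight; // the one extra multiply per row
      weightSum += weight;
    },
    finalize: () => (weightSum !== 0 ? weightedSum / weightSum : NaN),
  };
}

const w = makeWeightedAvg();
w.add(10, 1);
w.add(20, 3);
console.log(w.finalize()); // (10*1 + 20*3) / 4 = 17.5
```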

% of Total

~521K rows/sec (+5% overhead vs 548K baseline)
Memory: O(groups) — second pass over finalized group Map
First pass: compute group sums (same as SUM). Second pass: divide each group sum by grand total. Minimal overhead since second pass is over the group Map (small), not the full dataset.
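The two passes can be sketched as follows — the column names are made up for the example:

```javascript
// % of Total: pass 1 accumulates per-group sums (same work as SUM);
// pass 2 walks only the small group Map, never the full dataset.
function percentOfTotal(rows, keyCol, valueCol) {
  const sums = new Map();
  let grandTotal = 0;
  for (const row of rows) {            // pass 1: full dataset
    const key = row[keyCol];
    sums.set(key, (sums.get(key) ?? 0) + row[valueCol]);
    grandTotal += row[valueCol];
  }
  const pct = new Map();
  for (const [key, sum] of sums) {     // pass 2: O(groups)
    pct.set(key, (100 * sum) / grandTotal);
  }
  return pct;
}

const rows = [
  { region: "EU", sales: 30 },
  { region: "US", sales: 50 },
  { region: "EU", sales: 20 },
];
const pct = percentOfTotal(rows, "region", "sales"); // EU -> 50, US -> 50
```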

Count Distinct

~391K rows/sec (+40% overhead vs 548K baseline)
Memory: O(unique values) per group — Set.add() each row
Set per group stores unique string values. Set.add() is O(1) average but has overhead from hash computation and memory allocation. Capped at 100K unique values per group. Hard abort triggered if estimated total groups exceed 50K to prevent OOM. +40% overhead vs baseline.
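A sketch of the Set-per-group approach with the two caps described above — simplified, assuming the caps are applied per insertion rather than via the worker's cardinality sampling:

```javascript
// Count Distinct: one Set per group; stop growing a Set past the
// per-group cap, and abort if the group count exceeds the safety cap.
const MAX_GROUPS = 50_000;
const MAX_UNIQUE_PER_GROUP = 100_000;

function countDistinct(rows, groupCol, valueCol) {
  const sets = new Map();
  for (const row of rows) {
    const key = row[groupCol];
    let set = sets.get(key);
    if (!set) {
      if (sets.size >= MAX_GROUPS) {
        throw new Error("aborted: group count exceeds safety cap");
      }
      sets.set(key, (set = new Set()));
    }
    if (set.size < MAX_UNIQUE_PER_GROUP) set.add(String(row[valueCol]));
  }
  const out = new Map();
  for (const [key, set] of sets) out.set(key, set.size);
  return out;
}

const distinct = countDistinct(
  [
    { cat: "A", sku: "x" },
    { cat: "A", sku: "x" },
    { cat: "A", sku: "y" },
    { cat: "B", sku: "x" },
  ],
  "cat",
  "sku"
); // A -> 2, B -> 1
```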

Median

HIGH MEMORY
~274K rows/sec (+100% overhead vs 548K baseline)
Memory: O(values per group) — buffers all values, 50K cap
Buffers all numeric values per group during streaming pass. On finalize: sort each group's buffer (O(n log n)), return middle value. Above 50,000 values per group, buffer is truncated and result shows TRUNCATED. Not recommended for high-cardinality datasets. +100% overhead vs baseline.
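The buffer-then-sort behavior, including the cap, can be sketched as — illustrative only, not the worker's implementation:

```javascript
// Median: buffers values per group during the streaming pass,
// sorts at finalize. Past the cap the buffer stops growing and
// the result is flagged instead of silently returned.
const MEDIAN_CAP = 50_000;

function makeMedian() {
  const buf = [];
  let truncated = false;
  return {
    add(x) {
      if (buf.length < MEDIAN_CAP) buf.push(x);
      else truncated = true;
    },
    finalize() {
      if (truncated) return "TRUNCATED";
      const s = [...buf].sort((a, b) => a - b); // O(n log n) per group
      const mid = Math.floor(s.length / 2);
      return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
    },
  };
}

const m = makeMedian();
[7, 1, 3, 9, 5].forEach((v) => m.add(v));
console.log(m.finalize()); // 5
```

The O(values per group) buffer is the cost: unlike Welford's running state, the memory here grows with the data itself.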

Mode

HIGH MEMORY
~274K rows/sec (+100% overhead vs 548K baseline)
Memory: O(unique values per group) — Map for frequency counts
Frequency Map per group (Map<value, count>). On finalize: find max-count entry per group. Faster than Median but same buffering risk — Map grows with unique value count per group, capped at 50K. Returns the most frequent value. Tied values: returns the first-encountered. +100% overhead vs baseline.
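The frequency-Map approach, including the first-encountered tie rule, sketches as — hypothetical code, not the worker's:

```javascript
// Mode: frequency Map per group; a strict ">" comparison plus the
// Map's insertion order means ties resolve to the first-encountered value.
function makeMode() {
  const counts = new Map();
  return {
    add(v) {
      counts.set(v, (counts.get(v) ?? 0) + 1);
    },
    finalize() {
      let best, bestCount = 0;
      for (const [value, count] of counts) { // Map preserves insertion order
        if (count > bestCount) {
          best = value;
          bestCount = count;
        }
      }
      return best;
    },
  };
}

const mode = makeMode();
["a", "b", "b", "a", "c", "b"].forEach((v) => mode.add(v));
console.log(mode.finalize()); // "b"
```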

Pivot vs Unpivot: Why Unpivot Is Faster

Same Web Worker engine. Fundamentally different memory models.

Pivot Mode

Two-Phase Architecture

Phase 1 — Streaming Accumulation
50K-row chunks from PapaParse
Each row updates group Map (key = concat of group-by column values)
Running state per group: sum, count, min, max, stdev M2, Set (if count_distinct)
Progress events every chunk: rowsProcessed, groupsFound, rowsPerSec
Cardinality check at 10K rows — warning if >50K estimated groups
Phase 2 — Finalization
Iterate group Map (small — one entry per unique group combination)
Compute derived values: mean = sum/count, stdev = √(M2/count)
Compute sort-based stats: median (sort buffer), mode (max frequency)
% of Total: second pass — divide each group sum by grand total
Serialize to CSV blobParts in 50K output chunks
Memory growth: O(groups) — grows with unique group-by combinations. High-cardinality pivots use more RAM. Monitor the cardinality warning in the UI.
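The two-phase flow above can be condensed into a sketch — chunking, progress events, the per-aggregation accumulators, and CSV serialization are omitted, and the key separator is an illustrative choice:

```javascript
// Two-phase pivot sketch: Phase 1 streams rows into a group Map keyed by
// the concatenated group-by values; Phase 2 iterates only the small Map.
function pivot(rows, groupCols, valueCol) {
  const groups = new Map();
  for (const row of rows) { // Phase 1: streaming accumulation
    const key = groupCols.map((c) => row[c]).join("\u0001");
    let g = groups.get(key);
    if (!g) groups.set(key, (g = { sum: 0, count: 0 }));
    g.sum += row[valueCol];
    g.count += 1;
  }
  const out = [];
  for (const [key, g] of groups) { // Phase 2: finalization, O(groups)
    out.push({ key, sum: g.sum, count: g.count, avg: g.sum / g.count });
  }
  return out;
}

const pivoted = pivot(
  [
    { region: "EU", year: 2025, sales: 10 },
    { region: "EU", year: 2025, sales: 30 },
    { region: "US", year: 2025, sales: 50 },
  ],
  ["region", "year"],
  "sales"
); // EU/2025: sum 40, count 2, avg 20
```

Memory tracks the Map, not the input: a billion rows with ten unique group combinations would hold ten entries.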
Unpivot Mode

Single-Phase Streaming

Inline Row Expansion (No Finalization Phase)
50K-row input chunk from PapaParse
Each input row expands to N output rows inline (one per value column)
Output rows written immediately to blobParts — not buffered
Chunk flushed; memory reclaimed. Next chunk begins.
No Map, no Set, no group accumulation, no finalization
Progress events: rowsProcessed, outputRowsGenerated, rowsPerSec
Process completes when last chunk is flushed — no second pass
Memory: O(chunk) — constant at ~50K rows regardless of total file size. A 10M row unpivot uses the same RAM as a 100K row unpivot.
Why it's faster: No Map insertions, no Set operations, no finalization sort. Pure row-by-row expansion. 680K rows/sec vs 548K rows/sec for pivot = 24% faster.
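The inline expansion reduces to a sketch like this — output-column names (`variable`, `value`) are illustrative, and the blobParts flush is stubbed out:

```javascript
// Unpivot sketch: each input row expands to one output row per value
// column, emitted straight through. No Map, no Set, no finalization pass.
function unpivotChunk(rows, idCols, valueCols) {
  const out = [];
  for (const row of rows) {
    for (const col of valueCols) {
      const rec = {};
      for (const id of idCols) rec[id] = row[id]; // carry ID columns through
      rec.variable = col;
      rec.value = row[col];
      out.push(rec); // in the worker this would be flushed to blobParts
    }
  }
  return out;
}

const wide = [{ id: 1, jan: 10, feb: 20 }];
const long = unpivotChunk(wide, ["id"], ["jan", "feb"]);
console.log(long.length); // 2 output rows from 1 input row
```

Because each chunk's output is written and discarded before the next chunk arrives, peak memory is bounded by the chunk size, not the file size.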

Honest Limitations

What this tool does well and where it reaches its ceiling — so you can make informed decisions, not discover surprises mid-project.

Browser Memory Ceiling (~10M rows for typical CSVs)

The practical maximum is approximately 10 million rows for typical 40–60 column business CSVs on a modern desktop browser — actual ceiling varies significantly by column count, data types, and group cardinality. Above this, Chrome may show an out-of-memory error or silently crash the tab. Pivot is more constrained than unpivot because the group Map grows in memory. Very high cardinality (millions of unique group combinations) can hit the ceiling before 10M rows. Plan accordingly: if your pivot produces millions of unique groups, reduce group-by columns or pre-filter the data.

Median / Mode: 50K Value/Group Cap

Median and Mode buffer all values per group during the streaming pass. The buffer is capped at 50,000 values per group. Above the cap, the result cell shows TRUNCATED instead of a numeric value. For datasets where any group has more than 50K rows of the same value column, median and mode results will be incomplete. This is intentional — without the cap, these aggregations could exhaust all available browser memory.

Count Distinct: Hard Abort at >50K Estimated Groups

Count Distinct maintains a Set per group to track unique values. If the worker estimates that the total number of groups will exceed 50,000 (based on cardinality sampling from the first 10K rows), processing is hard-aborted with an error message before OOM conditions develop. This is a safety mechanism, not a bug. If you hit this, reduce your group-by column count or pre-filter to a subset of the data.

No API, CLI, or Automation Support

SplitForge Pivot & Unpivot is browser-only. It cannot be called from cron jobs, ETL pipelines, GitHub Actions, Python scripts, or any automated workflow. For scheduled or automated pivot/unpivot operations, the right tools are Python pandas (local), Apache Spark or Dask (cluster), AWS Glue, dbt (transformations), or Airflow (orchestration). This tool is designed for human-in-the-loop, interactive data transformation workflows.

Not an Interactive Pivot Table Explorer

SplitForge produces a flat output file, not an interactive click-and-drag pivot table like Excel's. You cannot drag fields between row/column/value areas after processing. Re-run with different configuration if you want to explore different groupings. For interactive pivot exploration, Excel Pivot Tables or Tableau are the right tools. SplitForge's value is in processing large, compliance-constrained datasets at scale.

No Real-Time Data Connections

Processes uploaded files only. No database connectors, no API polling, no live data streams. For real-time or live-connected pivot analysis, use Power BI DirectQuery, Tableau Live, or Grafana/Kibana depending on your data source. SplitForge is the right choice when you have a file — CSV or Excel — and need it processed privately and without setup friction.

Test Methodology

Full transparency on how every number on this page was produced.

Time Savings Calculator

Manual baseline: ~15 min per pivot report via Excel Pivot Tables. SplitForge with saved config: ~30 sec.

Assumptions: weekly cadence (4 reports/month), 12 months/year, analyst rate $45–75/hr.

Annual time saved: 11.6 hours per year
Annual savings: $638 vs. the manual Excel workflow

Estimates based on a 15-minute manual Excel Pivot Table workflow baseline. Actual savings vary by report complexity and existing automation. SplitForge processing time varies by hardware, browser, and file structure (±15–20%).
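The arithmetic behind those figures, spelled out — assuming the $55/hr midpoint of the stated $45–75 analyst range:

```javascript
// Savings math: 15 min manual vs ~0.5 min with a saved config,
// weekly cadence (4/month * 12 months = 48 reports/year), $55/hr midpoint.
const minutesSavedPerReport = 15 - 0.5;
const reportsPerYear = 4 * 12;
const hoursSaved = (minutesSavedPerReport * reportsPerYear) / 60; // 11.6
const dollarsSaved = Math.round(hoursSaved * 55); // 638
```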


Ready to Pivot 10M Rows Yourself?

Drop a CSV or Excel file. Configure in 30 seconds. Download the result. File contents never leave your browser.

548K rows/sec pivot (Chrome 132, i7-12700K, Feb 2026)
100% client-side — zero uploads
Results vary by hardware, browser, file complexity (±15–20%)