Performance Benchmarks · February 2026 · Chrome 132 · i7-12700K · 32GB RAM

Aggregate & Group: Verified Benchmarks. 10 Million Rows in 34 Seconds.

Full advanced mode with all 19 functions (including MEDIAN, P90, STDDEV, and COUNT_DISTINCT) on a 1.2GB CSV. Simple mode finishes in 26 seconds. All in your browser — never uploaded.

File Size: ~1.2 GB (10M rows)
Simple Mode: ~26 sec (~380K rows/sec)
Advanced Mode: ~34 sec (~294K rows/sec)
Data Uploaded: Zero (100% client-side)
Test configuration: Chrome 132 (stable), Windows 11, Intel Core i7-12700K (3.6GHz), 32GB DDR4-3200 RAM, February 2026. 10 runs per configuration with highest/lowest discarded and 8 averaged. Results vary ±15–20% based on hardware, browser, and configuration complexity.

Throughput by File Size — Simple vs Advanced Mode

File Size   | Simple Mode (5 functions) | Advanced Mode (19 functions) | Notes
1K rows     | ~25K rows/sec             | ~18K rows/sec                | Startup overhead dominates at small sizes (worker init, file read)
10K rows    | ~185K rows/sec            | ~130K rows/sec               | Worker pipeline warming up; chunk batching begins to amortize
100K rows   | ~310K rows/sec            | ~210K rows/sec               | Typical CRM or financial export batch
1M rows     | ~390K rows/sec            | ~265K rows/sec               | 5 GROUP BY dimensions, 3 metric columns
5M rows     | ~400K rows/sec            | ~280K rows/sec               | Streaming chunks fully pipelined; approaching peak throughput
10M rows    | ~380K rows/sec (~26s)     | ~294K rows/sec (~34s)        | Verified benchmark — peak throughput, memory-bounded at scale
~1.2GB file | ~365K rows/sec            | ~270K rows/sec               | Near browser-memory ceiling — results vary significantly by available RAM
Simple mode: COUNT, SUM, AVG, MIN, MAX — online algorithms, O(1) memory per group. No value-storage arrays.
Advanced mode: All 19 functions including MEDIAN, P25–P95, MODE, STDDEV, VARIANCE, FIRST, LAST, COUNT_DISTINCT, CONCAT_UNIQUE — requires value-storage arrays per group for accurate computation.
All benchmarks: Chrome 132, Windows 11, Intel i7-12700K, 32GB RAM, February 2026. Results vary ±15–20%.
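The gap between the two modes comes down to accumulator shape. A minimal sketch (all names hypothetical, not SplitForge's actual internals): simple mode keeps O(1) running state per group, while advanced mode must also store every raw value so MEDIAN and percentiles can be computed exactly at finalization.

```javascript
// Simple mode: O(1) running state per group, no value storage.
function makeSimpleAcc() {
  return { count: 0, sum: 0, min: Infinity, max: -Infinity };
}
function updateSimple(acc, v) {
  acc.count += 1;
  acc.sum += v;
  if (v < acc.min) acc.min = v;
  if (v > acc.max) acc.max = v;
}

// Advanced mode: same running state, plus an array of raw values per group.
// The extra array is what makes advanced mode slower and memory-heavier.
function makeAdvancedAcc() {
  return { ...makeSimpleAcc(), values: [] };
}
function updateAdvanced(acc, v) {
  updateSimple(acc, v);
  acc.values.push(v); // O(n) memory per group
}

const s = makeSimpleAcc();
[3, 1, 4].forEach((v) => updateSimple(s, v));
// s is now { count: 3, sum: 8, min: 1, max: 4 }
```

The running-state functions never touch more than a handful of numbers per row, which is why simple mode's throughput stays roughly flat regardless of group size.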

Rows Per Second — Visual Comparison

Simple Mode — 5 Functions (COUNT, SUM, AVG, MIN, MAX)

Test Configuration: Chrome 132, Windows 11, Intel i7-12700K, 32GB RAM, February 2026. Results vary ±15–20% by hardware and browser.
Rows per second:
100K rows: ~310K
1M rows: ~390K
5M rows: ~400K (fastest)
10M rows: ~380K

Higher values indicate better performance (faster processing)

Advanced Mode — 19 Functions (includes MEDIAN, Percentiles, STDDEV)

Test Configuration: Same as above. Advanced mode is slower because MEDIAN, percentiles, and MODE require value-storage arrays per group.
Rows per second:
100K rows: ~210K
1M rows: ~265K
5M rows: ~280K
10M rows: ~294K (fastest)

Higher values indicate better performance (faster processing)

Performance improves with larger files due to 10MB chunk batching amortizing startup overhead. Results vary ±15–20% by hardware, browser, and function selection.

Benchmark Methodology & Test Configuration

Why Browser-Based Aggregation Outperforms Excel on Large Files

What Excel Pivot Tables Do

Load entire dataset into worksheet grid (all cells materialized in memory)
Render row-by-row to UI — every pivot recalculates the visible grid
Allocate worksheet memory proportional to row count × column count
Block the main thread during recalculation (UI freezes)
Hard worksheet limit: 1,048,576 rows regardless of available RAM

What SplitForge Does

Stream file in 10MB chunks via FileReader — never loads full dataset into UI memory
Web Worker processes each chunk off the main thread — browser stays responsive
HashMap accumulator stores only group keys and running aggregates (not row data)
Main thread never blocks — progress updates are lightweight messages
Memory scales with number of unique groups, not total row count

The architectural result: Excel's performance ceiling is determined by worksheet materialization cost — rendering 1M+ rows into a grid exhausts memory before aggregation even begins. SplitForge's ceiling is determined by HashMap cardinality (number of unique group combinations) and function selection. Processing 10M rows with 500 unique groups requires storing ~500 aggregate records — not 10 million rows. That's why 10M rows processes in 26–34 seconds while Excel cannot open the file at all.
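The chunked-streaming pattern described above can be sketched in a few lines. This is a simplified illustration (FileReader and Web Worker plumbing omitted, field names hypothetical): rows arrive chunk by chunk, and only running aggregates per group key are retained, so memory tracks group cardinality rather than row count.

```javascript
// Hypothetical sketch of the HashMap-accumulator pattern: the Map holds one
// small record per unique group key, never the rows themselves.
function aggregateChunks(chunks) {
  const groups = new Map(); // key -> { count, sum }
  for (const chunk of chunks) {     // each chunk is one parsed batch of rows
    for (const row of chunk) {
      const key = row.region;       // GROUP BY column
      let acc = groups.get(key);
      if (!acc) groups.set(key, (acc = { count: 0, sum: 0 }));
      acc.count += 1;
      acc.sum += row.amount;        // running aggregate, not stored rows
    }
  }
  return groups;
}

const out = aggregateChunks([
  [{ region: "EU", amount: 10 }, { region: "US", amount: 5 }],
  [{ region: "EU", amount: 20 }],
]);
// out.get("EU") -> { count: 2, sum: 30 }
```

Three rows in, two accumulator records out: with 500 unique groups, the Map stays at 500 entries whether the input is 10 thousand rows or 10 million.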

Time Savings Calculator

Baseline: Excel Pivot on large file ≈ 20 min per analysis. SplitForge ≈ 2 min.

20-minute baseline derived from internal timing of Excel Pivot Table setup and processing on 500K–1M row exports (field collapse, recalculate, filter copy). Results vary by file size and analyst experience.

Hours saved per year
57.6h
(4 analyses/week × 48 weeks × 18 min saved)
Annual value saved
$3,456
at $60/hr · individual estimate only

Function Performance Characteristics

Fast — O(1) per row
COUNT · SUM · AVG · MIN · MAX · RANGE

Online algorithms — maintain running state without storing values. No additional RAM per group beyond the running total. Performance is essentially constant regardless of group size.

Medium — O(n) per group
STDDEV · VARIANCE · COUNT_DISTINCT · CONCAT_UNIQUE · FIRST · LAST

STDDEV/VARIANCE use Welford's algorithm — O(1) memory but O(n) computation. COUNT_DISTINCT and CONCAT_UNIQUE maintain a Set per group. FIRST/LAST require tracking insertion order.
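Welford's algorithm keeps variance in O(1) memory by updating a running mean and a running sum of squared deviations. A minimal sketch (illustrative names, not SplitForge's actual code):

```javascript
// Welford's online variance: one pass, O(1) state per group.
function makeWelford() {
  return { n: 0, mean: 0, m2: 0 };
}
function welfordUpdate(w, x) {
  w.n += 1;
  const delta = x - w.mean;
  w.mean += delta / w.n;
  w.m2 += delta * (x - w.mean); // second factor uses the updated mean
}
function sampleVariance(w) {
  return w.n > 1 ? w.m2 / (w.n - 1) : 0;
}

const w = makeWelford();
[2, 4, 4, 4, 5, 5, 7, 9].forEach((x) => welfordUpdate(w, x));
// w.mean === 5; sampleVariance(w) === 32 / 7 ≈ 4.571
```

The one-pass update avoids the catastrophic cancellation that the naive sum-of-squares formula suffers on large values, at the cost of a few extra operations per row.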

Slower — O(n log n) per group
MEDIAN · MODE · P25 · P50 · P75 · P90 · P95

Require storing all values per group to compute the true median or percentile. At 10M rows with 500 groups, this means potentially millions of stored values. Sorted at finalization — O(n log n) per group. Select only when you need true distributional statistics.
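A sketch of the sort-at-finalization step, using the nearest-rank percentile method (SplitForge's exact interpolation scheme may differ; names are illustrative):

```javascript
// Exact percentile over stored values: sort once at finalization,
// then pick by nearest rank. The sort is the O(n log n) cost per group.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // nearest-rank method
  return sorted[Math.max(0, rank - 1)];
}

const vals = [15, 20, 35, 40, 50];
// percentile(vals, 50) === 35  (the true median)
// percentile(vals, 90) === 50
```

Because every value must be retained until this final sort, groups with millions of members dominate both memory and finalization time, which is why these functions carry the biggest throughput penalty.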

Honest Limitations: Where SplitForge Falls Short

No tool is perfect for every use case. Here's where another tool might be a better choice, and the real limitations of our browser-based architecture.

Browser-Based Processing

Performance depends on your device's RAM and CPU. Modern laptops (2022+) handle 10M+ rows easily, but older devices may struggle with very large files.

Workaround:
Close unnecessary browser tabs to free up memory. For files over 50M rows, consider database solutions.

No Offline Mode (Initial Load)

Requires internet connection to load the tool initially. Processing happens offline in your browser after loading.

Workaround:
Once loaded, you can disconnect and continue processing. For true offline environments, desktop tools may be better.

Browser Tab Memory Limits

Most browsers limit an individual tab to roughly 2–4 GB of RAM. This is the practical ceiling on file size.

Workaround:
Use 64-bit browsers with sufficient RAM. Chrome and Firefox handle large files best.

Questions about limitations? Check our FAQ section below or contact us via the feedback button.

Frequently Asked Questions

How accurate are the 34-second and 26-second benchmarks for 10M rows?

Why is advanced mode slower than simple mode?

Why does performance improve with larger files (1K rows is slower than 10M)?

How does this compare to Excel Pivot Tables at scale?

What happens with high-cardinality GROUP BY columns?

Does the HAVING filter affect processing speed?

What's the maximum file size SplitForge can handle?

What browser gives the best performance?

Does SplitForge transmit any file data for analytics?

Benchmarks last updated: February 2026 · Conducted by the SplitForge engineering team · Re-tested quarterly and after major algorithm changes · Aggregation Engine v1.0 · SVBP-2026 Protocol (SplitForge Verified Benchmark Protocol: synthetic dataset, 10-run median, cold-start excluded, hardware-documented) · Privacy verified: no outbound file payload requests (network inspector confirmed)

Ready to Aggregate 10M Rows in 34 Seconds?

No installation. File contents never uploaded. 19 functions, SQL HAVING filters, and multi-level subtotals — drop your CSV and run.