10 Million Rows
in 33 Seconds
Benchmark-proven delimiter and encoding conversion at massive scale. 10M rows — comma to tab delimiter, Windows CRLF to Unix LF — 303K rows/sec sustained, all in your browser, zero uploads.
Full Benchmark Results
| Row Count | Processing Time | Speed | Notes |
|---|---|---|---|
| 1K rows | ~4ms | ~250K rows/sec | Startup overhead dominates at small file sizes |
| 100K rows | ~156ms | ~641K rows/sec | Comma to Tab, CRLF to LF — throughput climbing |
| 500K rows | ~474ms | ~1.05M rows/sec | Peak throughput — chunk pipeline fully saturated |
| 1M rows | ~3.3s | ~303K rows/sec | Sustained rate as memory pressure builds |
| 5M rows | ~16.5s | ~303K rows/sec | Sustained 303K rows/sec — GC pauses stable |
| 10M rows (verified) | ~33s | ~303K rows/sec | Peak tested capacity — sustained throughput |
| ~1GB+ file | varies | ~280K rows/sec | Maximum practical capacity (browser-memory bound) |
Speed vs Row Count
Throughput (K rows/sec) by Row Count
Higher values indicate better performance (faster processing)
Benchmark Methodology
Time Saved Calculator
Adjust the manual baseline to match your actual workflow. Typical range is 5–45 minutes per file depending on how obvious the problem is and what tools are available. SplitForge baseline: approximately 30 seconds at 10M rows on tested hardware — smaller files complete proportionally faster.
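As a rough sketch of that arithmetic (the linear scaling model is an assumption drawn from the "proportionally faster" note above, not the calculator's actual formula; `timeSavedSeconds` is a hypothetical name):

```typescript
// Estimate seconds saved per file: manual baseline minus SplitForge's runtime,
// assuming runtime scales linearly from ~30s at 10M rows (an assumption).
function timeSavedSeconds(manualMinutes: number, rows: number): number {
  const splitForgeSeconds = (rows / 10_000_000) * 30;
  return manualMinutes * 60 - splitForgeSeconds;
}
```

For example, a 30-minute manual workflow on a 10M-row file comes out to roughly 1,770 seconds (about 29.5 minutes) saved.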
Why It's Fast (Architecture)
64KB Streaming Chunks
The file is never fully loaded into memory. A streaming reader yields 64KB chunks continuously; output chunks accumulate as Uint8Array blobs and are assembled only at completion. Memory usage scales with the output buffer rather than with total file size, avoiding the full-file memory load that grid-based tools like Excel require.
Dedicated Web Worker
All conversion runs in a background Web Worker, keeping the main thread and browser UI completely unblocked. You can navigate while a 10M row conversion runs. Progress is reported via postMessage callbacks at each chunk boundary.
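Wiring for that worker might look like the sketch below (the message shape and names are assumptions for illustration, not the tool's real protocol). The progress math is kept as a plain function so it is easy to reason about independently of the `postMessage` boundary:

```typescript
// Shape of a progress update posted at each chunk boundary (illustrative).
type ProgressMessage = { type: "progress"; bytesDone: number; percent: number };

// Pure helper: compute the progress payload at the current chunk boundary.
function progressAt(bytesDone: number, totalBytes: number): ProgressMessage {
  const percent =
    totalBytes === 0
      ? 100
      : Math.min(100, Math.floor((bytesDone / totalBytes) * 100));
  return { type: "progress", bytesDone, percent };
}

// Inside the worker, after each 64KB chunk:
//   self.postMessage(progressAt(offset, file.size));
//
// On the main thread, only messages cross, so the UI stays responsive:
//   worker.onmessage = (e) => updateProgressBar(e.data.percent);
```

Because only small progress messages cross the thread boundary, the main thread does no conversion work at all.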
One-Pass Multi-Attribute
Delimiter parsing, line ending normalization, RFC 4180 quote fixing, whitespace trimming, and empty line skipping all happen in a single streaming pass. Zero intermediate files, zero second passes — every byte touches the CPU exactly once.
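A single-pass converter in that spirit can be sketched as below (simplified and hypothetical: it swaps the delimiter, normalizes CRLF to LF, and skips empty lines in one scan while leaving quoted fields untouched; whitespace trimming and full RFC 4180 repair are omitted for brevity):

```typescript
// One streaming pass: each character is inspected exactly once.
// `convertOnePass` and its defaults are illustrative, not SplitForge's API.
function convertOnePass(
  input: string,
  fromDelim = ",",
  toDelim = "\t",
): string {
  let out = "";
  let line = "";
  let inQuotes = false;
  const flush = () => {
    if (line.trim().length > 0) out += line + "\n"; // skip empty lines
    line = "";
  };
  for (let i = 0; i < input.length; i++) {
    const ch = input[i];
    if (ch === '"') {
      inQuotes = !inQuotes; // track quoted fields so delimiters inside stay put
      line += ch;
    } else if (ch === fromDelim && !inQuotes) {
      line += toDelim; // delimiter swap
    } else if (ch === "\r" && !inQuotes) {
      if (input[i + 1] !== "\n") flush(); // bare CR ends a line; CRLF waits for LF
    } else if (ch === "\n" && !inQuotes) {
      flush(); // CRLF and LF both normalize to a single LF on output
    } else {
      line += ch;
    }
  }
  if (line.length > 0) flush(); // final line without trailing newline
  return out;
}
```

Note how the quoted field `"x,y"` keeps its internal comma while every structural comma becomes a tab, all without a second pass over the data.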
Zero Upload Architecture
The file is read via the FileReader API directly into the Web Worker. It never reaches a network socket. Processing is entirely CPU and RAM — no server round-trip latency, no bandwidth cost, no server-side data exposure.
Honest Limitations: Where SplitForge Falls Short
No tool is perfect for every use case. Here's where another tool might be a better choice, and the real limitations of our browser-based architecture.
Browser-Based Processing
Performance depends on your device's RAM and CPU. Modern laptops (2022+) handle 10M+ rows easily, but older devices may struggle with very large files.
No Offline Mode (Initial Load)
Requires internet connection to load the tool initially. Processing happens offline in your browser after loading.
Browser Tab Memory Limits
Most browsers limit individual tabs to 2–4GB of RAM. This is the practical ceiling for file size.
Questions about limitations? Check our FAQ section below or contact us via the feedback button.
Frequently Asked Questions
How accurate is the 303K rows/second benchmark?
Why is 500K rows faster per row than 10M rows?
What does 'multi-attribute conversion' mean for performance?
How does auto-detection work?
How does this compare to Excel Save As CSV?
What file types are supported?
Does RFC 4180 fixing affect performance?
Benchmarks last updated: February 2026. Re-tested quarterly and after major algorithm changes.