Processing Time by File Size
SplitForge (JSON→CSV) vs Excel Power Query vs Python pandas — seconds to completion
Throughput at Scale
Chrome (stable) · Windows 11 · Intel i7-12700K · 32GB RAM · February 2026
| Dataset Size | JSON → CSV | CSV → JSON | Excel → CSV | Notes |
|---|---|---|---|---|
| 1,000 rows | ~22K rows/sec | ~9K rows/sec | ~8K rows/sec | Startup overhead visible at small sizes — worker init + file API dominates |
| 100K rows | ~210K rows/sec | ~85K rows/sec | ~50K rows/sec | Mixed data types, 8 columns |
| 1M rows | ~390K rows/sec | ~160K rows/sec | ~94K rows/sec | Standard CSV, nested JSON with 2-level nesting |
| 5M rows | ~490K rows/sec | ~200K rows/sec | N/A (XLSX row cap) | ChunkWriter architecture reaches near-peak throughput |
| 10M rows | ~537K rows/sec | ~220K rows/sec | N/A (XLSX row cap) | Peak throughput — zero-copy Uint8Array blob construction optimal |
| ~1GB+ file | ~520K rows/sec | ~210K rows/sec | N/A | Maximum tested capacity (browser-memory dependent) |
Results vary by hardware, browser, and data complexity. Throughput improves at scale due to ChunkWriter 60MB batching and compiled row extraction. Excel row cap is 1,048,576 — Excel→CSV beyond that row count is not applicable.
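The ChunkWriter batching and zero-copy blob construction mentioned above can be sketched as follows. SplitForge's internals are not public, so the class and method names here are hypothetical; the point is the pattern: encode rows into `Uint8Array` chunks once, then hand those chunks directly to the `Blob` constructor so the data is never copied a second time.

```javascript
// Illustrative sketch of a ChunkWriter-style batching strategy.
// Names are hypothetical; the real SplitForge implementation may differ.
class ChunkWriter {
  constructor(chunkSize = 60 * 1024 * 1024) { // ~60MB batches, per the table notes
    this.chunkSize = chunkSize;
    this.encoder = new TextEncoder();
    this.parts = [];       // finished Uint8Array chunks
    this.pending = [];     // row strings awaiting encoding
    this.pendingBytes = 0; // rough running size of pending rows
  }
  write(line) {
    this.pending.push(line);
    this.pendingBytes += line.length; // byte estimate (exact for ASCII-heavy CSV)
    if (this.pendingBytes >= this.chunkSize) this.flush();
  }
  flush() {
    if (this.pending.length === 0) return;
    // One encode per batch instead of one per row.
    this.parts.push(this.encoder.encode(this.pending.join("")));
    this.pending = [];
    this.pendingBytes = 0;
  }
  toBlob() {
    this.flush();
    // Blob accepts the Uint8Array chunks as-is: no extra concatenation copy.
    return new Blob(this.parts, { type: "text/csv" });
  }
}
```

Batching also explains why throughput rises with file size: per-batch costs (encoding calls, allocations) are amortized over more rows as the input grows.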
Why Local Processing Is Faster Than Upload Tools
The 537K rows/sec benchmark only tells half the story
Upload-based converters (Zamzar, CloudConvert, data.page) all require your file to travel to a server and back. That network round-trip adds latency that cannot be optimized away — it is physics, not engineering. SplitForge eliminates the entire network path by processing in your browser.
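A back-of-envelope calculation makes the network cost concrete. The bandwidth figure below is an assumption for illustration, not a measurement; only the ~537K rows/sec number comes from the table above.

```javascript
// Assumed scenario: a 500MB file on a 100 Mbit/s uplink (typical office upload speed).
const fileMB = 500;
const uplinkMbps = 100;
const uploadSec = (fileMB * 8) / uplinkMbps; // 40s of pure one-way transfer,
                                             // before the server does any work

// Local processing of a ~5M-row file at the measured ~537K rows/sec:
const localSec = 5_000_000 / 537_000; // ≈ 9.3s, with no network leg at all
```

On these assumptions, the upload alone takes roughly four times longer than the entire local conversion, and the user still has to wait for server processing and the download.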
Feature Performance Overhead
Calculate Your Time Savings
Calculator inputs: files per session (typical: 2–5), sessions per year (weekly = 52, monthly = 12, daily = 260), and hourly rate (analyst average: $45–75/hr).
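The calculator's arithmetic reduces to one formula. This is a hypothetical reconstruction from the inputs listed above, not the widget's actual code:

```javascript
// annual savings = files/session × sessions/year × minutes saved per file × hourly rate
function annualSavings(filesPerSession, sessionsPerYear, minutesSavedPerFile, hourlyRate) {
  const hoursSaved = (filesPerSession * sessionsPerYear * minutesSavedPerFile) / 60;
  return hoursSaved * hourlyRate;
}

// Example: 3 files/session, weekly use (52 sessions/year),
// 5 minutes saved per file, $60/hr analyst rate:
annualSavings(3, 52, 5, 60); // 13 hours/year × $60 = $780
```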
Benchmark Methodology
Full test configuration, data generation, and measurement protocol
Honest Limitations: Where SplitForge Format Converter Falls Short
No tool is perfect for every use case. Here's where Python pandas / Excel Power Query might be a better choice, and the real limitations of our browser-based architecture.
Browser-Based Processing
Performance depends on your device's RAM and CPU. Modern laptops (2022+) handle 10M+ rows easily, but older devices may struggle with very large files.
No Offline Mode (Initial Load)
Requires internet connection to load the tool initially. Processing happens offline in your browser after loading.
Browser Tab Memory Limits
Most browsers limit individual tabs to roughly 2–4 GB of RAM. After accounting for parsing and output buffers, this puts the practical input-file ceiling near 1 GB.
Browser Memory Ceiling (~1GB practical limit)
The ChunkWriter architecture minimizes memory usage, but very large files (1–2GB+) can still cause browser tab crashes on machines with limited RAM. An 850MB JSON file fits comfortably on 16GB+ systems. Systems with 8GB RAM may experience reduced performance on files above ~500MB.
No Automation, API, or Scheduled Conversion
SplitForge runs in a browser tab — it cannot be triggered via command line, REST API, cron job, or CI/CD pipeline. Automated or scheduled conversion workflows require Python pandas, dbt, Airflow, or a proper ETL tool.
One File at a Time
SplitForge converts one file per session. Batch conversion across multiple files (e.g., converting 50 JSON exports from an API to CSV in one operation) is not supported.
Excel Output Capped at 1,048,576 Rows
The XLSX format has a hard 1,048,576 row limit. If you request Excel output on a file with more rows, SplitForge converts the first 1,048,576 rows and notifies you. This is not a SplitForge constraint — it is the Excel specification.
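The cap-and-notify behavior described above amounts to a simple guard. This is a sketch with hypothetical names, not SplitForge's actual code; only the 1,048,576 limit itself comes from the XLSX specification:

```javascript
const XLSX_MAX_ROWS = 1_048_576; // hard limit in the XLSX format (2^20 rows)

// Returns the rows that fit in a worksheet, plus a flag so the UI
// can notify the user that output was truncated.
function capForExcel(rows, cap = XLSX_MAX_ROWS) {
  const truncated = rows.length > cap;
  return { rows: truncated ? rows.slice(0, cap) : rows, truncated };
}
```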
No Schema Validation or Data Transformation
SplitForge converts format, not schema. It does not validate JSON against a schema, enforce types, merge columns, or apply custom transformation rules during conversion. What is in the input is what appears in the output (with optional type detection and flattening).
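"Flattening" here means turning nested objects into dotted column names, up to a fixed depth. The following is a generic sketch of that idea, not SplitForge's actual flattener, which may handle arrays and depth differently:

```javascript
// Flatten nested objects into dotted keys, up to `depth` levels.
// Values below the depth limit are passed through unchanged.
function flatten(obj, prefix = "", depth = 2, out = {}) {
  for (const [k, v] of Object.entries(obj)) {
    const key = prefix ? `${prefix}.${k}` : k;
    if (v !== null && typeof v === "object" && !Array.isArray(v) && depth > 1) {
      flatten(v, key, depth - 1, out);
    } else {
      out[key] = v;
    }
  }
  return out;
}

// flatten({ id: 1, user: { name: "a", geo: { lat: 1 } } })
// → { id: 1, "user.name": "a", "user.geo": { lat: 1 } }
//   (geo stays a whole object: it sits past the 2-level limit)
```

No schema is needed: the column set is derived from the keys actually present in the data, which is why flattening works on arbitrary input but cannot enforce one.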
When to Use Python pandas / Excel Power Query Instead
You need automated, scheduled, or batch file conversion
SplitForge has no API and no command-line interface. Browser-only workflow cannot run programmatically.
You need to convert files larger than ~1GB regularly
Browser memory limits the practical file size ceiling. Very large files require server-side processing.
You need complex schema transformation during conversion
SplitForge only flattens nested JSON and detects types — it does not merge fields, compute derived columns, or validate against schemas.
You need to join multiple source files during conversion
SplitForge converts one file at a time with no join or merge capability.
Questions about limitations? Check our FAQ section below or contact us via the feedback button.
Frequently Asked Questions
How accurate is the 537K rows/second benchmark?
Why is JSON→CSV nearly 2.5x faster than CSV→JSON?
Why is Excel→CSV so much slower than JSON→CSV?
What is the JSONL streaming architecture and why does it matter?
Does nested JSON flattening work on any schema, or do I need to define the schema first?
How does performance compare to Python pandas for the same conversion?
What is the practical row limit?
Can these benchmarks be reproduced?
Benchmarks last updated: February 2026. Re-tested quarterly and after major algorithm changes.