VERIFIED BENCHMARK — February 2026

Format Converter Performance

10M rows converted in your browser. No upload. No server. Benchmark methodology, feature overhead breakdown, and honest limitations.

~537K/sec
JSON → CSV throughput
~220K/sec
CSV → JSON throughput
10M+
Rows tested
0
Bytes uploaded

Chrome (stable) · Windows 11 · Intel Core i7-12700K (3.6GHz) · 32GB DDR4-3200 · February 2026
10 runs per config — highest/lowest discarded, remaining 8 averaged. Results vary ±15–20% by hardware, browser, and data complexity.

Processing Time by File Size

SplitForge (JSON→CSV) vs Excel Power Query vs Python pandas — seconds to completion

All SplitForge times: Chrome (stable), Windows 11, Intel i7-12700K, 32GB RAM, February 2026. 10 runs per configuration — highest/lowest discarded, remaining 8 averaged. Excel Power Query figures estimated from internal workflow testing — actual times vary significantly by machine, file schema, and Power Query version. Python pandas estimates for Python 3.11 on equivalent hardware — your environment will vary. Results vary by hardware, browser, available RAM, and data complexity.

Throughput at Scale

Chrome (stable) · Windows 11 · Intel i7-12700K · 32GB RAM · February 2026

Format Converter throughput by row count and conversion direction
| File Size | JSON → CSV | CSV → JSON | Excel → CSV | Notes |
| --- | --- | --- | --- | --- |
| 1,000 rows | ~22K rows/sec | ~9K rows/sec | ~8K rows/sec | Startup overhead visible at small sizes — worker init + file API dominates |
| 100K rows | ~210K rows/sec | ~85K rows/sec | ~50K rows/sec | Mixed data types, 8 columns |
| 1M rows | ~390K rows/sec | ~160K rows/sec | ~94K rows/sec | Standard CSV, nested JSON with 2-level nesting |
| 5M rows | ~490K rows/sec | ~200K rows/sec | N/A (XLSX row cap) | ChunkWriter architecture reaches near-peak throughput |
| 10M rows | ~537K rows/sec | ~220K rows/sec | N/A (XLSX row cap) | Peak throughput — zero-copy Uint8Array blob construction optimal |
| ~1GB+ file | ~520K rows/sec | ~210K rows/sec | N/A | Maximum tested capacity (browser-memory dependent) |

Results vary by hardware, browser, and data complexity. Throughput improves at scale due to ChunkWriter 60MB batching and compiled row extraction. Excel row cap is 1,048,576 — Excel→CSV beyond that row count is not applicable.

Why Local Processing Is Faster Than Upload Tools

The 537K rows/sec benchmark only tells half the story

Upload-based converters (Zamzar, CloudConvert, data.page) all require your file to travel to a server and back. That network round-trip adds latency that cannot be optimized away — it is physics, not engineering. SplitForge eliminates the entire network path by processing in your browser.

Upload Tool Timeline (100MB file)
TLS handshake + connection setup: 0.5–2 sec
File upload to server (100MB @ 50 Mbps): 16–20 sec
Server receives + writes file to disk: 1–3 sec
Cold start or queue wait (shared infra): 2–10 sec
Server processes file: 5–30 sec
Server writes output file to disk: 1–3 sec
Download output file: 3–10 sec
Total: 29–78 seconds
SplitForge Timeline (100MB file)
File API reads file from local disk: 0.2–0.5 sec
Web Worker initializes (one-time): 0.05–0.1 sec
ChunkWriter processes file (JSON→CSV): 0.2–0.4 sec
Blob URL created for download: <0.05 sec
Upload latency: 0 sec
Server queue wait: 0 sec
Download from server: 0 sec
Total: 0.5–1 second
100%
Network latency eliminated
No upload, no download from server
30–78x faster
Total time advantage
On 100MB file vs average upload tool
0 seconds
Data exposure window
File never transmitted — compliance by architecture
Upload tool timings are estimates based on a 100MB file, 50 Mbps upload speed, and average cloud processing latency. Actual times vary by network speed, server load, and file complexity. SplitForge timings are measured on tested hardware (Chrome stable, Windows 11, Intel i7-12700K, 32GB RAM, February 2026).

Feature Performance Overhead

JSON → CSV (Default)
Baseline
~537K rows/sec
ChunkWriter streaming: reads JSON objects from input stream, compiles a zero-branch row extractor function at runtime, writes CSV lines in 60MB buffer chunks. This compiled extraction eliminates per-row branching — the primary source of throughput advantage over naive implementations.
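The compiled-extractor idea can be sketched as follows. This is an illustrative reconstruction, not SplitForge's actual source: buildRowExtractor and escapeCsv are hypothetical names, and real column handling is more involved. The core trick — generating one straight-line expression per schema and compiling it once with the Function constructor — is what removes per-row branching.

```javascript
// Hypothetical sketch of a compiled row extractor: build a function
// body once per schema, then run it branch-free for every row.
function buildRowExtractor(columns) {
  // One straight-line expression per column — no per-row branching.
  const body = columns
    .map((c) => `esc(r[${JSON.stringify(c)}])`)
    .join(' + "," + ');
  return new Function("r", "esc", `return ${body};`);
}

// Quote values containing commas, quotes, or newlines (RFC 4180 style).
function escapeCsv(v) {
  if (v === null || v === undefined) return "";
  const s = String(v);
  return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
}

const extract = buildRowExtractor(["id", "name"]);
const line = extract({ id: 1, name: 'Ada, "Countess"' }, escapeCsv);
// line → 1,"Ada, ""Countess"""
```

The extractor is rebuilt only when the schema changes, so the Function-constructor cost is amortized across millions of rows.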
JSONL → CSV (Streaming)
~5% slower
~510K rows/sec
JSONL streams each line independently using an async generator (streamLinesFast). Each line is parsed as a separate JSON object and passed to the compiled row extractor. Slightly slower than JSON array because each line requires a separate JSON.parse() call rather than batched object iteration.
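The line-streaming pattern looks roughly like this — a minimal sketch, assuming an async iterable of string chunks; the real streamLinesFast may buffer and decode differently:

```javascript
// Illustrative async generator: accepts chunks of text and yields
// complete lines, carrying any partial line across chunk boundaries.
async function* streamLines(chunks) {
  let carry = "";
  for await (const chunk of chunks) {
    const parts = (carry + chunk).split("\n");
    carry = parts.pop(); // last piece may be an incomplete line
    yield* parts;
  }
  if (carry) yield carry; // final line without a trailing newline
}

// Each yielded line is parsed independently — one JSON.parse per row.
async function jsonlToObjects(chunks) {
  const rows = [];
  for await (const line of streamLines(chunks)) {
    if (line.trim()) rows.push(JSON.parse(line));
  }
  return rows;
}
```

Because each line is self-contained, the converter never needs the whole file in memory at once — the source of JSONL's memory advantage over a single JSON array.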
CSV → JSON (Array Output)
~59% slower than JSON→CSV
~220K rows/sec
CSV→JSON is slower because each row becomes a full JavaScript object that must be serialized back to JSON string. Object creation and JSON.stringify() are more expensive than CSV line writing. For 10M rows this produces a very large JSON array — JSONL output is recommended above 2M rows to avoid memory pressure.
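The array-vs-JSONL trade-off can be shown in miniature. This sketch uses illustrative names (rowsToJsonArray, rowsToJsonl), not SplitForge's internals: array output must materialize and stringify every object at once, while JSONL emits one line per row.

```javascript
// Array output: one giant string holding every row's object.
function rowsToJsonArray(headers, rows) {
  return JSON.stringify(rows.map((r) =>
    Object.fromEntries(headers.map((h, i) => [h, r[i]]))));
}

// JSONL output: one JSON.stringify per row, constant memory per line.
function* rowsToJsonl(headers, rows) {
  for (const r of rows) {
    yield JSON.stringify(
      Object.fromEntries(headers.map((h, i) => [h, r[i]])));
  }
}
```

Both paths pay the object-construction and serialization cost per row; only the JSONL path avoids holding the entire output in memory, which is why it is recommended above 2M rows.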
Nested JSON Flattening
~9% slower
~490K rows/sec
Recursive flattenObject() traverses each nested object using depth-first search, joining keys with dot-notation. Overhead scales with nesting depth and number of nested keys — a flat JSON object adds zero overhead. Most API responses (2–3 nesting levels) add 8–12% overhead. The compiled row extractor is rebuilt once per unique schema, not per row.
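A minimal version of the dot-notation flattener looks like this — an illustrative sketch; the real flattenObject may treat arrays and edge cases differently:

```javascript
// Recursive depth-first flattener: nested keys are joined with dots.
function flattenObject(obj, prefix = "", out = {}) {
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === "object" && !Array.isArray(value)) {
      flattenObject(value, path, out); // descend into nested object
    } else {
      out[path] = value; // leaf: scalar, null, or array
    }
  }
  return out;
}

const flat = flattenObject({ user: { id: 7, geo: { city: "Oslo" } } });
// flat → { "user.id": 7, "user.geo.city": "Oslo" }
```

A flat input never enters the recursive branch, which is why flat JSON adds zero overhead.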
Type Detection (num/bool/null)
<5% overhead
~520K rows/sec
Parses each cell value for number, boolean, or null conversion. isNaN() check + Number() conversion. Runs as optional second pass — skipped entirely when type detection is disabled. For 10M rows: roughly 0.5–1 additional second. Always worth enabling when converting CSV→JSON for API payloads or database imports where string-typed numbers break downstream.
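The per-cell check is cheap enough to sketch in a few lines. These exact rules are illustrative, not SplitForge's precise conversion table:

```javascript
// Optional type detection for CSV→JSON: "42" → 42, "true" → true,
// "" / "null" → null; anything else stays a string.
function detectType(cell) {
  if (cell === "" || cell === "null") return null;
  if (cell === "true") return true;
  if (cell === "false") return false;
  const trimmed = cell.trim();
  if (trimmed !== "" && !isNaN(Number(trimmed))) return Number(trimmed);
  return cell;
}
```

Because every branch is a constant-time string comparison or a single Number() conversion, the pass stays under the quoted ~5% overhead.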
Excel → CSV (XLSX Parse)
Separate path
~94K rows/sec
Excel conversion is fundamentally slower because XLSX format is a ZIP archive of XML files — the entire file must be decompressed and parsed as XML before any CSV writing can begin. SheetJS handles this natively in the browser but the XLSX parse overhead (~60–70% of total time) cannot be streamed. Practical ceiling: ~1M rows before XLSX format itself becomes the bottleneck. For large Excel files, convert to CSV first using the Excel to CSV converter, then process the CSV.
Keyed JSON Output
~5% over CSV→JSON
~210K rows/sec
Instead of pushing objects into an array, keyed mode uses a specified column value as the object key in a hash table: {"id_1": {...}, "id_2": {...}}. Overhead is a single additional string lookup per row — negligible. Throughput is effectively the same as standard CSV→JSON, since both follow the same object serialization path.
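The difference between the two output shapes is one property read per row, as this sketch shows (toKeyedJson is an illustrative name):

```javascript
// Keyed JSON output: index rows by a chosen column instead of
// appending them to an array. One extra lookup + hash insert per row.
function toKeyedJson(rows, keyColumn) {
  const out = {};
  for (const row of rows) {
    out[row[keyColumn]] = row; // e.g. { "id_1": {...}, "id_2": {...} }
  }
  return out;
}

const keyed = toKeyedJson(
  [{ id: "id_1", v: 10 }, { id: "id_2", v: 20 }], "id");
```

Note that duplicate key values silently overwrite earlier rows — a property of hash-table output, not a bug.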
BOM / Encoding Handling
Near-zero
~535K rows/sec
UTF-8 BOM stripping happens once at file header read — one conditional check before the main processing loop. Auto-encoding detection runs on the first 2KB of the file only. Neither operation has meaningful impact on per-row throughput. Add BOM to output is a single 3-byte prefix write on the final blob.
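Both BOM operations amount to a handful of byte operations outside the main loop. A minimal sketch (stripBom/addBom are illustrative names):

```javascript
const UTF8_BOM = [0xef, 0xbb, 0xbf];

// One-time check of the first three bytes, before the main loop runs.
function stripBom(bytes) {
  const hasBom = bytes.length >= 3 &&
    UTF8_BOM.every((b, i) => bytes[i] === b);
  return hasBom ? bytes.subarray(3) : bytes; // zero-copy view
}

// "Add BOM to output": a single 3-byte prefix write.
function addBom(bytes) {
  const out = new Uint8Array(bytes.length + 3);
  out.set(UTF8_BOM, 0);
  out.set(bytes, 3);
  return out;
}
```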
All overhead figures measured on 10M row dataset, February 2026, Chrome (stable), 32GB RAM, Intel i7-12700K. Results vary by hardware, browser, JSON nesting depth, and column count.

Calculate Your Time Savings

Manual baseline: ~15 minutes per file to convert JSON/JSONL to CSV via Excel Power Query — based on internal workflow testing, February 2026. This covers: Data tab, Get Data, From File, From JSON, navigate folder, load, transform in Power Query Editor, fix date/type errors, close and load, export CSV. Repeat per file. Does not account for the additional 5–15 minutes spent when Power Query crashes on larger files. SplitForge converts any supported format in under 45 seconds with no setup, no schema config, and no repeat configuration process.

Typical: 2–5 files per session

Weekly = 52, Monthly = 12, Daily = 260

Analyst avg: $45–75/hr

Annual Time Saved
37.1
hours per year
Annual Labor Savings
$1,853
per year (vs manual Power Query workflow)

Calculation: 3 files × 15 min/file × 52 sessions/year = 39.0 hours/year manual. SplitForge: 3 files × 45 sec/file × 52 sessions = 1.9 hours/year. Net savings: 37.1 hours at $50/hr = $1,853/year.
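The arithmetic above can be reproduced directly (the rates and per-file times are the document's example figures, not measurements):

```javascript
// Reproduce the stated savings calculation.
const filesPerSession = 3;
const sessionsPerYear = 52;
const ratePerHour = 50;

const manualHours = (filesPerSession * 15 / 60) * sessionsPerYear;  // 39.0 h
const toolHours = (filesPerSession * 45 / 3600) * sessionsPerYear;  // ~1.95 h
const savedHours = manualHours - toolHours;                         // ~37.05 h
const savedDollars = savedHours * ratePerHour;                      // ~$1,852.50
```

Adjust filesPerSession, sessionsPerYear, and ratePerHour to match your own workflow.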

Benchmark Methodology

Full test configuration, data generation, and measurement protocol

Honest Limitations: Where SplitForge Format Converter Falls Short

No tool is perfect for every use case. Here's where Python pandas / Excel Power Query might be a better choice, and the real limitations of our browser-based architecture.

Browser-Based Processing

Performance depends on your device's RAM and CPU. Modern laptops (2022+) handle 10M+ rows easily, but older devices may struggle with very large files.

Workaround:
Close unnecessary browser tabs to free up memory. For files over 50M rows, consider database solutions.

No Offline Mode (Initial Load)

Requires internet connection to load the tool initially. Processing happens offline in your browser after loading.

Workaround:
Once loaded, you can disconnect and continue processing. For true offline environments, desktop tools may be better.

Browser Tab Memory Limits

Most browsers limit individual tabs to 2–4GB RAM. This is the practical ceiling for file size.

Workaround:
Use 64-bit browsers with sufficient RAM. Chrome and Firefox handle large files best.

Browser Memory Ceiling (~1GB practical limit)

The ChunkWriter architecture minimizes memory usage, but very large files (1–2GB+) can still cause browser tab crashes on machines with limited RAM. An 850MB JSON file fits comfortably on 16GB+ systems. Systems with 8GB RAM may experience reduced performance on files above ~500MB.

Workaround:
Use the CSV Splitter to break large files into smaller chunks, convert each chunk separately, then merge the results.

No Automation, API, or Scheduled Conversion

SplitForge runs in a browser tab — it cannot be triggered via command line, REST API, cron job, or CI/CD pipeline. Automated or scheduled conversion workflows require Python pandas, dbt, Airflow, or a proper ETL tool.

Workaround:
Python: pd.read_json(file).to_csv(output, index=False) for JSON→CSV. pd.read_csv(file).to_json(output, orient="records") for CSV→JSON.

One File at a Time

SplitForge converts one file per session. Batch conversion across multiple files (e.g., converting 50 JSON exports from an API to CSV in one operation) is not supported.

Workaround:
For batch workflows, use Python with glob: [pd.read_json(f).to_csv(f.replace(".json", ".csv"), index=False) for f in glob.glob("*.json")]

Excel Output Capped at 1,048,576 Rows

The XLSX format has a hard 1,048,576 row limit. If you request Excel output on a file with more rows, SplitForge converts the first 1,048,576 rows and notifies you. This is not a SplitForge constraint — it is the Excel specification.

Workaround:
Use CSV output for files over 1M rows. CSV has no row limit. Excel can open CSV files with more rows using Power Query.

No Schema Validation or Data Transformation

SplitForge converts format, not schema. It does not validate JSON against a schema, enforce types, merge columns, or apply custom transformation rules during conversion. What is in the input is what appears in the output (with optional type detection and flattening).

Workaround:
For schema validation and complex transformation, use Python with jsonschema, Pandas operations, or a dedicated ETL tool like Fivetran or dbt.

When to Use Python pandas / Excel Power Query Instead

You need automated, scheduled, or batch file conversion

SplitForge has no API and no command-line interface. Browser-only workflow cannot run programmatically.

💡 Python pandas with glob for batch: [pd.read_json(f).to_csv(...) for f in files]. Schedule with cron or Airflow.

You need to convert files larger than ~1GB regularly

Browser memory limits the practical file size ceiling. Very large files require server-side processing.

💡 Python pandas with chunksize parameter for streaming large files. dbt for structured data pipeline transformations.

You need complex schema transformation during conversion

SplitForge only flattens nested JSON and detects types — it does not merge fields, compute derived columns, or validate against schemas.

💡 Python pandas with custom transformation logic, or jq for JSON manipulation. dbt models for SQL-based transformation.

You need to join multiple source files during conversion

SplitForge converts one file at a time with no join or merge capability.

💡 Use SplitForge to convert individual files first, then use the CSV Merger tool to combine CSVs. Or Python pandas merge()/join() for complex joins.

Questions about limitations? Check our FAQ section below or contact us via the feedback button.

Frequently Asked Questions

How accurate is the 537K rows/second benchmark?

Why is JSON→CSV nearly 2.5x faster than CSV→JSON?

Why is Excel→CSV so much slower than JSON→CSV?

What is the JSONL streaming architecture and why does it matter?

Does nested JSON flattening work on any schema, or do I need to define the schema first?

How does performance compare to Python pandas for the same conversion?

What is the practical row limit?

Can these benchmarks be reproduced?

Benchmarks last updated: February 2026. Re-tested quarterly and after major algorithm changes.

Ready to Convert 10M Rows in Under a Minute?

No installation. File contents never uploaded. CSV, JSON, JSONL, and Excel in any direction — with nested JSON flattening and JSONL streaming built in.

No signup. No email. No install. No account required.