93,423 Rows/Second: Verified Benchmark

Excel to JSON at
93,423 Rows Per Second

Verified: 1 million rows converted to JSON in 10.7 seconds, entirely in your browser. Zero uploads. Full methodology below.

This page is for engineers who want numbers, not marketing. Every claim is sourced. Every projection is labeled. Hardware config and discard methodology are below.

  • Processing Speed: 93,423 rows/sec (Verified)
  • Total Time (1M rows): 10.7 seconds (Verified)
  • Input Size (1M rows): ~95.8 MB (Verified)
  • Output Size: ~166 MB JSON (Verified)

Test Configuration

Test date: February 2026
Hardware
  • Chrome (stable channel)
  • Windows 11 (22H2)
  • Intel Core i7-12700K
  • 32GB DDR4 RAM
Test File
  • 1,000,000 rows x 10 columns
  • Mixed types: text, numbers, dates
  • Single sheet workbook
  • ~95.8 MB input file
Methodology
  • 10 runs total
  • Highest + lowest discarded
  • Remaining 8 averaged
  • Pretty-print enabled
Variability disclaimer: Results vary by hardware, browser, and file complexity (±20%). Low-RAM machines or older browsers may be 30-50% slower. Startup overhead (~0.21s) is fixed per operation regardless of file size.

Processing Time by Output Format

Calculated from verified 93,423 rows/sec baseline. All values except 1M rows pretty-print are projections.

[Chart: processing time (seconds) by dataset size for Pretty-Print (1M rows verified), Minified, and Multi-Sheet output]

Dataset   | Pretty-Print     | Minified (~15% faster) | Multi-Sheet (+20%) | Source
10K rows  | 0.32s            | ~0.3s                  | ~0.34s             | Calculated
100K rows | 1.28s            | ~1.12s                 | ~1.49s             | Calculated
500K rows | 5.56s            | ~4.76s                 | ~6.63s             | Calculated
1M rows   | 10.7s (Verified) | ~9.31s                 | ~13.05s            | Measured

Calculated values use: startup overhead (0.21s) + rows / 93,423, with the minified (0.85x) and multi-sheet (1.20x) factors applied to the per-row term only, not to startup. Not independently measured. Multi-sheet overhead is an algorithmic estimate, not a verified benchmark.
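The projection arithmetic can be reproduced in a few lines. A minimal sketch, using the constants from the table above (function and constant names are illustrative, not part of SplitForge):

```python
STARTUP_S = 0.21       # fixed per-operation startup overhead (seconds)
ROWS_PER_SEC = 93_423  # verified pretty-print baseline

# Per-row multipliers from the overhead analysis; only the row term scales.
FORMAT_FACTOR = {"pretty": 1.00, "minified": 0.85, "multisheet": 1.20}

def projected_seconds(rows: int, fmt: str = "pretty") -> float:
    """Projected conversion time: fixed startup + scaled per-row cost."""
    return STARTUP_S + (rows / ROWS_PER_SEC) * FORMAT_FACTOR[fmt]

print(round(projected_seconds(100_000), 2))              # -> 1.28
print(round(projected_seconds(1_000_000, "minified"), 2))  # -> 9.31
```

Plugging in the other dataset sizes reproduces every "Calculated" cell in the table, which is a quick way to sanity-check the page's numbers on your own.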

Output Format Overhead Analysis

Pretty-Print JSON
Verified
10.7s (1M rows)
Baseline (Factor: 1.00x)

JSON.stringify(data, null, 2). Human-readable, 2-space indent. ~30-40% larger output. Best for debugging and development workflows.

Minified JSON
Calculated
~9.31s (1M rows)
~15% faster (Factor: 0.85x)

JSON.stringify(data). No whitespace. Smaller output file. Best for production API payloads where file size matters.
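The same pretty-vs-minified trade-off is easy to see in Python's stdlib. A small sketch with made-up sample data (the actual size delta depends on your column names and values, so the 30-40% figure will vary):

```python
import json

# Hypothetical flat rows, the shape a converter like this emits.
rows = [{"id": i, "name": f"item-{i}", "price": i * 1.5} for i in range(1000)]

pretty = json.dumps(rows, indent=2)                 # human-readable, 2-space indent
minified = json.dumps(rows, separators=(",", ":"))  # no whitespace at all

print(f"pretty:   {len(pretty):,} bytes")
print(f"minified: {len(minified):,} bytes")
```

For small objects the relative whitespace overhead is larger than for wide rows; measure with your own data before choosing a format for production payloads.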

Multi-Sheet ZIP
Calculated
~13.05s (1M rows)
~20% overhead (Factor: 1.20x)

Each sheet processed sequentially. JSZip assembles the archive. Best for batch workbook exports needing separate JSON files.

Time and Cost Savings Calculator

Inputs:
  • How many Excel to JSON conversions per month?
  • Time per conversion via Python setup or online tool (setup + wait)
  • Your effective hourly rate or team blended rate

Example preset output:
  • Hours saved / month: 3.9 hrs
  • Savings / month: $236
  • Hours saved / year: 47 hrs
  • Savings / year: $2,832

Assumes ~30 seconds per SplitForge conversion vs your specified manual time. Estimates for planning purposes only.
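The calculator's arithmetic is straightforward. A sketch under the page's stated assumption of ~30 seconds per SplitForge conversion (the 52-conversion, 5-minute, $60/hr preset below is illustrative, not the page's default):

```python
SPLITFORGE_SECONDS = 30  # assumed time per SplitForge conversion

def monthly_savings(conversions: int, manual_seconds: float, hourly_rate: float):
    """Hours and dollars saved per month versus the manual workflow."""
    saved_hours = conversions * (manual_seconds - SPLITFORGE_SECONDS) / 3600
    return round(saved_hours, 1), round(saved_hours * hourly_rate)

# 52 conversions/month at 5 minutes each, billed at $60/hr:
print(monthly_savings(52, 300, 60))  # -> (3.9, 234)
```

Multiply by 12 for the annual figures; as the page notes, these are planning estimates, not guarantees.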

Honest Limitations: Where SplitForge Falls Short

No tool is perfect for every use case. Here's where another tool might be a better choice, and the real limitations of our browser-based architecture.

Browser-Based Processing

Performance depends on your device's RAM and CPU. Modern laptops (2022+) handle 10M+ rows easily, but older devices may struggle with very large files.

Workaround:
Close unnecessary browser tabs to free up memory. For files over 50M rows, consider database solutions.

No Offline Mode (Initial Load)

Requires internet connection to load the tool initially. Processing happens offline in your browser after loading.

Workaround:
Once loaded, you can disconnect and continue processing. For true offline environments, desktop tools may be better.

Browser Tab Memory Limits

Most browsers limit individual tabs to 2-4GB RAM. This is the practical ceiling for file size.

Workaround:
Use 64-bit browsers with sufficient RAM. Chrome and Firefox handle large files best.

No Formula Preservation

Formulas export as evaluated values. =SUM(A1:A10) showing 150 in Excel exports as 150 in JSON, not the formula string. Formula expression preservation is not supported.

Workaround:
SplitForge reads the same cached value Excel displays; no recalculation. For formula-critical workflows, use Excel's Save As to export values first, or use Excel VBA to export formula strings directly.

No Nested JSON Structures

Output is a flat array of objects: one object per row. No support for nested structures, parent-child relationships, or custom JSON schema.

Workaround:
Export flat JSON from SplitForge, then post-process with Python json module or jq to build nested structures. SplitForge handles the fast bulk extraction; a small script handles the reshape.
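As a concrete example of that post-processing step, here is a stdlib-only sketch that groups flat rows into a nested parent-child structure (the field names are invented for illustration):

```python
from collections import defaultdict

# Flat rows, one object per row: the shape the converter emits.
flat = [
    {"order_id": 1, "customer": "Ada",   "item": "bolt", "qty": 4},
    {"order_id": 1, "customer": "Ada",   "item": "nut",  "qty": 4},
    {"order_id": 2, "customer": "Grace", "item": "gear", "qty": 1},
]

# Group line items under their parent order.
orders = defaultdict(lambda: {"items": []})
for row in flat:
    order = orders[row["order_id"]]
    order["customer"] = row["customer"]
    order["items"].append({"item": row["item"], "qty": row["qty"]})

nested = [{"order_id": oid, **data} for oid, data in orders.items()]
# nested[0] == {"order_id": 1, "customer": "Ada",
#               "items": [{"item": "bolt", "qty": 4}, {"item": "nut", "qty": 4}]}
```

The same reshape is a one-liner in jq (`group_by(.order_id)`); either way, the bulk extraction and the restructuring stay cleanly separated.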

Memory Ceiling (~1GB)

Processing is limited by available browser RAM. Practically around 1GB or 1M rows on most modern laptops. Very large files or low-RAM machines may trigger memory errors.

Workaround:
Use SplitForge Excel Splitter to divide into smaller sheets first, then convert each. For files over 2GB, Python openpyxl with chunked reading is more appropriate.

No API or Automation

Browser-only tool: no REST API, CLI, or webhook support. Cannot be used in automated pipelines, cron jobs, or CI/CD workflows.

Workaround:
Python pandas: pd.read_excel("file.xlsx").to_json("output.json", orient="records"). Schedule with cron, AWS Lambda, or Airflow for automated recurring jobs.

Single File Per Session

One workbook per conversion session. No batch processing across multiple files simultaneously.

Workaround:
Sequential processing is fast: seconds per file. For 50+ file batches, use Python with glob() and openpyxl.
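The batch pattern is a short loop. A sketch with a placeholder converter (`convert_one` is a stub to swap for your real openpyxl or pandas conversion; it only derives the output filename here):

```python
from pathlib import Path

def convert_one(xlsx_path: Path) -> Path:
    """Placeholder for a real converter (openpyxl, pandas, etc.).
    This stub just computes where the JSON output would go."""
    out = xlsx_path.with_suffix(".json")
    # ... a real implementation would read the workbook and write rows to `out` ...
    return out

def convert_batch(folder: str) -> list[Path]:
    """Convert every .xlsx in a folder, one file at a time."""
    return [convert_one(p) for p in sorted(Path(folder).glob("*.xlsx"))]
```

Wrapping the loop body in a try/except lets one corrupt workbook fail without aborting the other 49.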

When to Use Another Tool Instead

Automated recurring conversions on a schedule

SplitForge has no API or CLI; it cannot run headlessly or on a cron schedule.

💡 Python pandas: pd.read_excel().to_json(). Schedule with cron, Airflow, or AWS Lambda.

Nested JSON output required

Output is a flat array; there is no nested structure or schema mapping logic.

💡 Python json module with custom serializer, or jq for post-processing. Export flat JSON from SplitForge, then reshape.

Files regularly exceed 2GB

Browser memory limits make very large file processing unreliable.

💡 Python openpyxl with chunked reads: load_workbook(read_only=True) with row iteration.

High-volume batch processing (50+ files)

Single file per session makes browser-based batch workflows impractical at scale.

💡 Python glob() + openpyxl loop. Or AWS Glue DataBrew for cloud-scale ETL.

Questions about limitations? Check our FAQ section below or contact us via the feedback button.

Share Your Benchmark Results

Have you run the converter on your own hardware? We want to know your results: different machines produce different numbers, and your data helps improve the benchmark page.

Speed on Your Hardware

What speed did you get? Share your row count, time, browser, and machine specs. We'll update the benchmark page with community results.

Share your results
Multi-Sheet Workbooks

Converted a workbook with many sheets? How many sheets, how many rows total, and how long did the ZIP export take?

Share your results
Privacy-Sensitive Use Cases

Did the browser-only architecture matter for your workflow โ€” HIPAA, GDPR, or internal data policy reasons? We'd like to understand real compliance contexts.

Share your results

Performance FAQs

Related Performance Guides

Further reading on browser-based large file processing, format conversion benchmarks, and data privacy architecture.

Ready to Convert at 93K Rows/Second?

Upload your workbook. Pick your output format. Download in seconds. No install, no account, no uploads.

File contents never leave your device
1M rows in 10.7 seconds (verified benchmark)
Pretty-print or minified output
Multi-sheet workbooks to ZIP in one click

Related: Excel Row Limit Explained · CSV vs JSON vs Excel · Excel Splitter · Excel to CSV Converter · Feature Overview
Benchmarks last updated February 2026 by the SplitForge team.