Nifty Tools

JSON to CSV

Convert JSON to CSV in your browser. Paste a JSON array of objects and download a clean RFC 4180 CSV. No upload, no signup, no watermark.

Processing mode: Browser-local


How to use it

  1. Paste JSON into the editor (an array of objects, or a single object). Up to ~10 MB of pasted text per run.
  2. Click Convert and the parser builds a header row from the union of keys and writes one CSV row per object.
  3. Copy the CSV to the clipboard or download it as a `.csv` file. Nothing leaves your browser.
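
The three steps above can be sketched in a few lines. This is a minimal, hypothetical version of the convert step (`toCsv` is an illustrative name, not the tool's actual code), showing the union-of-keys header and one CSV row per object:

```typescript
// Minimal sketch of the convert step. Hypothetical code, not the
// tool's implementation; quoting here covers only the common cases.
type Row = Record<string, unknown>;

function toCsv(rows: Row[]): string {
  // Header = union of keys, in first-seen order.
  const headers: string[] = [];
  for (const row of rows) {
    for (const key of Object.keys(row)) {
      if (!headers.includes(key)) headers.push(key);
    }
  }
  // Quote a cell only when it contains a comma, quote, CR, or LF.
  const quote = (cell: string): string =>
    /[",\r\n]/.test(cell) ? `"${cell.replace(/"/g, '""')}"` : cell;
  const lines = [headers.map(quote).join(",")];
  for (const row of rows) {
    // Missing keys become empty cells, never the text "undefined".
    lines.push(
      headers.map((h) => quote(row[h] == null ? "" : String(row[h]))).join(",")
    );
  }
  return lines.join("\r\n");
}
```

For example, `toCsv([{name: "Ada", score: 91}, {name: "Linus", score: 88}])` yields a two-column, two-row CSV with a `name,score` header.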

Common use cases

People convert JSON to CSV when the next tool in their workflow speaks spreadsheets, not nested data. JSON is the default response shape for almost every modern API — Stripe, HubSpot, Shopify, GitHub, Airtable, Linear, Notion — but the people who need to act on the data are usually working in Google Sheets, Excel, or a BI tool that ingests CSV at the start of a pipeline. Pasting a 5,000-row JSON response into a sheet directly produces one giant cell of unreadable text; pasting CSV produces a clean grid.

The same gap exists for SaaS exports that hand back JSON for "developer use" while the operations team only has spreadsheet tooling, for log lines that arrive as JSON in a webhook payload, and for data-science notebooks that prefer `pandas.read_csv` over the JSON loader for tabular work.

Doing the conversion in the browser (no upload, no signup, no watermark) keeps confidential customer rows, financial extracts, and internal API dumps off third-party servers — the JSON never leaves the page, and the CSV materialises locally for download or copy.

Processing mode

Browser-local

Files are processed by your browser. They never reach our servers.

Questions

What JSON shape produces a clean CSV?

The parser expects a top-level array of flat objects — `[{"name": "Ada", "score": 91}, {"name": "Linus", "score": 88}]` — and a bare object becomes a one-row CSV. Anything else (a top-level string, a top-level number, an array of mixed types) is rejected with a clear message rather than silently producing a broken file. If your JSON wraps the array in a `data` or `results` key (Stripe, GitHub, and HubSpot do this), unwrap it once before pasting — the converter intentionally stays close to the structure it sees rather than guessing which key holds the rows.
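
A hypothetical pre-check mirroring the accepted shapes might look like this (`normalizeInput` is an illustrative name, not the tool's internals):

```typescript
// Accept a top-level array of plain objects, or one bare object
// (which becomes a single-row CSV); reject everything else.
// Hypothetical sketch, not the tool's actual parser.
type Row = Record<string, unknown>;

function normalizeInput(text: string): Row[] {
  const parsed: unknown = JSON.parse(text);
  const isPlainObject = (v: unknown): v is Row =>
    typeof v === "object" && v !== null && !Array.isArray(v);
  if (isPlainObject(parsed)) return [parsed]; // bare object -> one row
  if (Array.isArray(parsed) && parsed.every(isPlainObject)) {
    return parsed as Row[];
  }
  throw new Error("Expected an array of objects or a single object");
}
```

Unwrapping a `data`-style envelope before paste is one line in your own code: pass `payload.data` (or `payload.results`) to `JSON.stringify` and paste that instead.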

How are nested objects, arrays, and dates handled in cells?

Nested objects and arrays are JSON-stringified into a single cell so the row stays flat. That means a column called `address` holding `{"city": "Bristol", "postcode": "BS1"}` ends up as the literal text `{"city":"Bristol","postcode":"BS1"}` in the CSV — no silent data loss, but no automatic column explosion either. ISO 8601 date strings stay as strings, which is what every spreadsheet tool already parses on import. If you need the nested object expanded into separate columns (`address.city`, `address.postcode`), flatten the JSON in your source before pasting — the converter intentionally does not invent column names.
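
If you do want dotted columns, a small upstream flattener is enough — a hypothetical helper (`flatten` is an illustrative name) you would run on your data before pasting, since the converter itself does not do this:

```typescript
// Turn {"address":{"city":"Bristol"}} into {"address.city":"Bristol"}
// so each leaf value gets its own column. Arrays are left intact
// (they would still be JSON-stringified into one cell).
function flatten(
  obj: Record<string, unknown>,
  prefix = ""
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (typeof value === "object" && value !== null && !Array.isArray(value)) {
      Object.assign(out, flatten(value as Record<string, unknown>, path));
    } else {
      out[path] = value;
    }
  }
  return out;
}
```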

Why is some text wrapped in quotes and other text not?

The CSV emitter follows RFC 4180. A cell is wrapped in double quotes when it contains a comma, a double quote, a carriage return, a line feed, or has leading/trailing whitespace — every other cell is left unquoted. Embedded double quotes are escaped by doubling them (`he said "hi"` becomes `"he said ""hi"""`). This is the format Excel, Google Sheets, LibreOffice Calc, and `pandas.read_csv` all parse without a quoting hint, so the output round-trips through any modern spreadsheet tool without manual import settings.
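
The quoting rule described above fits in a few lines. This sketch mirrors the stated behaviour rather than reproducing the tool's emitter (`quoteCell` is an illustrative name):

```typescript
// Quote a cell only when it contains a comma, double quote, CR, LF,
// or leading/trailing whitespace; escape embedded quotes by doubling.
function quoteCell(cell: string): string {
  const needsQuotes = /[",\r\n]/.test(cell) || cell !== cell.trim();
  return needsQuotes ? `"${cell.replace(/"/g, '""')}"` : cell;
}
```

So `quoteCell('he said "hi"')` produces `"he said ""hi"""`, while a plain cell like `Bristol` passes through unquoted.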

What if my array has different keys on different rows?

The header row is the union of keys, in first-seen order. A row that doesn't have a particular key gets an empty cell in that column — not the literal text "null" or "undefined", just an empty field. This matches the behaviour every spreadsheet tool expects for sparse data and keeps the CSV honest about which fields were actually present in the source JSON. If you need a default value instead of an empty cell, fill it in upstream before pasting.
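
The union-of-keys rule can be sketched like this (a hypothetical `unionHeaders` helper, not the tool's code — `Set` preserves insertion order, which gives first-seen ordering for free):

```typescript
// Header = every key that appears in any row, in first-seen order.
function unionHeaders(rows: Record<string, unknown>[]): string[] {
  const seen = new Set<string>();
  for (const row of rows) {
    for (const key of Object.keys(row)) seen.add(key);
  }
  return [...seen];
}
```

Given `[{a: 1, b: 2}, {b: 3, c: 4}]`, the header is `a,b,c`, and the first row's `c` cell (and the second row's `a` cell) are simply empty.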

Is there a file size limit for the JSON paste?

Each paste stays under roughly 10 MB. The parser materialises the full array in memory before walking it, so very large exports can stall on lower-RAM devices. If your export is larger, split it into smaller chunks, convert each one, and concatenate the resulting CSVs (drop the header row from every file after the first). For multi-million-row jobs the right tool is server-side `jq` or a streaming converter — this tool is built for the everyday "paste a webhook response into a sheet" case.
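
Stitching chunked conversions back together is a one-liner per file: keep the header from the first CSV and drop it from every later one. A hypothetical helper (`concatCsvChunks` is an illustrative name) for doing that in code:

```typescript
// Join chunked CSV outputs: the first chunk keeps its header row,
// every later chunk has its first line (the repeated header) dropped.
function concatCsvChunks(chunks: string[]): string {
  return chunks
    .map((csv, i) =>
      i === 0 ? csv : csv.split(/\r?\n/).slice(1).join("\r\n")
    )
    .join("\r\n");
}
```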

Will this tool stay free?

The basic workflow is designed to stay free. Any paid upgrades added later will focus on bigger limits, batch work, OCR, saved presets, and ad-free use.