Nifty Tools

CSV to JSON

Convert CSV to JSON in your browser. RFC 4180 parser handles quoted fields, embedded commas, CRLF/LF, and delimiter detection. No upload.

Processing mode: Browser-local


How to use it


  1. Paste CSV into the editor or load a `.csv` file (up to ~10 MB per run). Set the delimiter (auto-detect by default), header-row toggle, output shape, and empty-cell mode.
  2. Click Convert. The parser walks the input character by character against the RFC 4180 grammar and builds the JSON output.
  3. Copy the JSON to the clipboard or download it as a `.json` file. Nothing leaves your browser.

Common use cases

People convert CSV to JSON when the next thing in the pipeline is code, not a spreadsheet. CSV is what spreadsheets, BI tools, CRM exports, and analytics dashboards hand back — Google Sheets and Excel both export CSV as the "save as plain data" format, HubSpot/Salesforce/Pipedrive export contact and deal lists as CSV, GA4 exports its explorer reports as CSV, and almost every back-office system ships a "download as CSV" button somewhere in its UI. The next step almost always wants JSON: a webhook payload, an API request body, a fixture file for a test suite, a seed dataset for a side project, a JSON column in a Postgres or DynamoDB write.

The honest version of this conversion has to handle the things `String.split(",")` famously gets wrong — fields wrapped in double quotes, commas inside quoted fields, embedded line breaks inside quoted cells, escaped quotes (`""` doubling inside a quoted field), CRLF line endings from Windows-exported CSVs alongside LF endings from Unix tooling, and the European-locale convention of comma decimals with semicolons as the field separator.

The parser here walks the input character by character against the RFC 4180 grammar, picks the delimiter from comma/tab/semicolon/pipe by checking the first few rows, and lets you choose whether the first row is treated as a header (default) or as data, whether output is an array of objects or an array of arrays, and whether empty cells become `""` or `null`.

Doing the conversion in the browser keeps customer rows, financial extracts, and internal exports off third-party servers — the CSV never leaves the page, the JSON materialises locally for download or copy.
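The character-by-character walk described above can be sketched as a small state machine. This is an illustrative minimal version, not the tool's published source; `parseCsv` is a hypothetical name, and real-world hardening (error reporting, BOM stripping) is omitted.

```javascript
// Minimal sketch of an RFC 4180-style parser: one pass over the input,
// tracking whether the scanner is inside a quoted field.
function parseCsv(text, delimiter = ",") {
  const rows = [];
  let row = [], field = "", inQuotes = false, i = 0;
  while (i < text.length) {
    const c = text[i];
    if (inQuotes) {
      if (c === '"') {
        if (text[i + 1] === '"') { field += '"'; i += 2; continue; } // "" escape
        inQuotes = false; i++; continue;                             // closing quote
      }
      field += c; i++; continue; // delimiters and newlines are data here
    }
    if (c === '"' && field === "") { inQuotes = true; i++; continue; } // opening quote
    if (c === delimiter) { row.push(field); field = ""; i++; continue; }
    if (c === "\r" && text[i + 1] === "\n") { // CRLF record end
      row.push(field); rows.push(row); row = []; field = ""; i += 2; continue;
    }
    if (c === "\n") { // LF record end
      row.push(field); rows.push(row); row = []; field = ""; i++; continue;
    }
    field += c; i++;
  }
  if (field !== "" || row.length > 0) { row.push(field); rows.push(row); } // final record
  return rows;
}
```

Note that the quoted-field branch never looks at the delimiter at all — that single detail is what makes embedded commas and embedded newlines round-trip correctly.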

Processing mode

Browser-local

Files are processed by your browser. They never reach our servers.

Questions


Why not just use String.split(",") to convert CSV to JSON?

Because RFC 4180 CSV allows commas, line breaks, and double quotes inside quoted fields, and `String.split(",")` knows about none of those. A row like `"Smith, John",42,"He said ""hi"""` has three fields — the name, the age, and a free-text comment — but `split(",")` produces four broken fragments. A row with an embedded newline inside a quoted "Notes" field gets sliced into two by a line-based reader. A doubled `""` inside a quoted field is the RFC 4180 escape for a literal double-quote character, not a field boundary. The parser here handles each of those cases against the actual grammar — it walks the input character by character, tracks whether it is inside a quoted field, treats `""` inside quotes as an escape, and only splits on the delimiter or line terminator when it is outside a quoted field.
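The failure is easy to reproduce in a console. The snippet below runs the naive split on the example row from above:

```javascript
// The row has three logical fields, but split(",") sees only commas.
const row = '"Smith, John",42,"He said ""hi"""';

const naive = row.split(",");
// The comma inside "Smith, John" is treated as a field boundary,
// so the name is torn into two fragments with dangling quotes.
console.log(naive.length); // 4 fragments instead of 3 fields
console.log(naive[0]);     // '"Smith' — half a name, not a field
console.log(naive[1]);     // ' John"' — the other half
```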

What output shapes are available, and which should I pick?

Two shapes. The default is an array of objects — the first row is treated as a header and becomes the keys for every subsequent row. So `name,age\nAda,42\nLinus,53` becomes `[{"name":"Ada","age":"42"},{"name":"Linus","age":"53"}]`. This is the right choice when the destination is a webhook payload, an API request body, a JSON column write, or a fixture file you want to read by name. The second shape is an array of arrays — every row including the header is a flat array of strings. This is the right choice when the source has no real header row, when you want to preserve duplicate header names verbatim, or when the destination is a piece of code that walks rows by index rather than by key.
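Both shapes fall out of the same parsed row list. A minimal sketch, assuming the rows have already been parsed into arrays of strings (the variable names are illustrative):

```javascript
// Parsed rows: header first, then data.
const rows = [["name", "age"], ["Ada", "42"], ["Linus", "53"]];

// Shape 1: array of objects — the header row supplies the keys.
const [header, ...body] = rows;
const objects = body.map(r =>
  Object.fromEntries(header.map((key, i) => [key, r[i] ?? ""]))
);
// → [{"name":"Ada","age":"42"},{"name":"Linus","age":"53"}]

// Shape 2: array of arrays — every row passes through verbatim,
// header included, duplicates and all.
const arrays = rows;
```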

How does the delimiter detector decide between comma, tab, semicolon, and pipe?

It looks at the first ten non-empty lines and counts how many fields each candidate delimiter would produce per line. The candidate that produces the most consistent count across those lines (with at least two fields per line on average) wins. In practice this picks comma for ordinary CSVs, tab for `.tsv` files saved by spreadsheet tools, semicolon for European-locale Excel exports (where the comma is reserved for decimals), and pipe for the kind of system exports that use `|` to dodge data-side commas entirely. If your file is ambiguous — say, every row has a single column with no separators — the detector falls back to comma; pinning the delimiter manually in the dropdown is the right escape hatch.
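The consistency-count heuristic can be sketched as below. This is an illustrative reconstruction from the description above, not the tool's actual code — and it deliberately ignores quoting when counting, which a production detector would need to handle:

```javascript
// Score each candidate delimiter by how many fields it yields per line
// and how consistent that count is across the first ten non-empty lines.
function detectDelimiter(text) {
  const candidates = [",", "\t", ";", "|"];
  const lines = text.split(/\r?\n/).filter(l => l.trim() !== "").slice(0, 10);
  let best = ",", bestScore = -Infinity; // comma is the fallback
  for (const d of candidates) {
    const counts = lines.map(l => l.split(d).length);
    const mean = counts.reduce((a, b) => a + b, 0) / counts.length;
    if (mean < 2) continue; // needs at least two fields per line on average
    const variance = counts.reduce((a, c) => a + (c - mean) ** 2, 0) / counts.length;
    const score = mean - variance; // reward field count, penalise inconsistency
    if (score > bestScore) { bestScore = score; best = d; }
  }
  return best;
}
```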

What happens if my CSV has quoted commas, embedded line breaks, or doubled quotes?

All three round-trip correctly. A field wrapped in double quotes can contain commas (`"Smith, John"`), line breaks (a multi-line "Notes" cell), and literal double-quote characters escaped by doubling them (`"He said ""hi"""` becomes the string `He said "hi"`). The parser treats the opening double-quote as a state change rather than as data, accumulates everything up to the closing double-quote, and only resumes splitting on delimiters or line terminators after the quoted region ends. This is the behaviour Excel, Google Sheets, LibreOffice Calc, and `pandas.read_csv` all assume on import, so the JSON output matches what those tools see.
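The escape-by-doubling rule, in isolation, reduces to a two-step transformation. A sketch for a single already-isolated field (a hypothetical helper — isolating the field in the first place requires the stateful scan described above, since delimiters and newlines can hide inside the quotes):

```javascript
// Recover a quoted field's content: strip the wrapping quotes,
// then collapse each doubled "" back into one literal quote.
function unquote(field) {
  return field.length >= 2 && field.startsWith('"') && field.endsWith('"')
    ? field.slice(1, -1).replace(/""/g, '"')
    : field; // unquoted fields pass through untouched
}
```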

How are empty cells, sparse rows, and missing headers handled?

Empty cells become an empty string by default, which matches what CSV actually contains; switch the empty-cell mode to "null" if the destination JSON consumer expects `null` for absent values rather than `""`. Sparse rows — records with fewer columns than the header — are padded with empty cells (or `null`s) so every object in the JSON output has the same key shape; this stops the destination code from having to defensively check whether each property exists. If you turn the header-row toggle off, the parser synthesises generic `column_1`, `column_2`, ... keys based on the widest row in the file, so the output stays a uniform shape even when the source has no header. Duplicate header names are auto-disambiguated (`id`, `id` becomes `id` and `id_2`) and blank header cells are filled with their column position (`column_3`) so no row in the JSON output ever loses a value to a key collision or to an empty-string key.
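The header clean-up and row-padding rules described above can be sketched as two small helpers. These are illustrative names and a simplified version of the behaviour, not the tool's source:

```javascript
// Blank header cells get positional names; duplicate names get
// numeric suffixes so no key collision ever drops a value.
function normaliseHeaders(header) {
  const seen = new Map();
  return header.map((name, i) => {
    const key = name === "" ? `column_${i + 1}` : name;
    const n = (seen.get(key) ?? 0) + 1;
    seen.set(key, n);
    return n === 1 ? key : `${key}_${n}`;
  });
}

// Sparse rows are padded so every object gets the full key set.
function padRow(row, width, emptyValue = "") {
  return row.concat(Array(Math.max(0, width - row.length)).fill(emptyValue));
}
```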

What happens if the CSV is malformed — bad quotes, unclosed strings, junk after a closed quote?

The parser refuses to silently corrupt your data. A quote in the middle of an unquoted field (`foo"bar`), a closing quote followed by non-delimiter characters (`"abc"def`), and a quoted field that runs to end-of-input without a closing quote all surface as a single error message with the line and column where the parser gave up. This is the only honest behaviour for a CSV-to-JSON tool — silently absorbing the bad characters or treating a stray `"` as the start of a new field would yield JSON output that looks valid but no longer corresponds to your source rows. Fix the source CSV (or wrap the offending field in proper double quotes and double its internal `"` characters) and re-run.
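A fail-fast validator for the three error cases can be sketched as a separate scan that reports where parsing would give up. Again illustrative — the real parser surfaces these during its single pass rather than in a pre-check:

```javascript
// Return the line/column of the first quoting error, or null if clean.
function findQuoteError(text) {
  let line = 1, col = 1, inQuotes = false, fieldStarted = false;
  for (let i = 0; i < text.length; i++) {
    const c = text[i];
    if (inQuotes) {
      if (c === '"') {
        if (text[i + 1] === '"') { i++; col += 2; continue; } // "" escape
        inQuotes = false;
        const next = text[i + 1];
        if (next !== undefined && next !== "," && next !== "\n" && next !== "\r") {
          return { line, col: col + 1, message: "junk after closing quote" };
        }
        col++; continue;
      }
      if (c === "\n") { line++; col = 1; } else { col++; }
      continue;
    }
    if (c === '"') {
      if (fieldStarted) return { line, col, message: "quote inside unquoted field" };
      inQuotes = true; col++; continue;
    }
    if (c === ",") { fieldStarted = false; col++; continue; }
    if (c === "\n") { fieldStarted = false; line++; col = 1; continue; }
    if (c === "\r") { col++; continue; }
    fieldStarted = true; col++;
  }
  if (inQuotes) return { line, col, message: "unclosed quoted field" };
  return null;
}
```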

Is there a file size limit for CSV to JSON?

Each run should stay under roughly 10 MB. The parser materialises the full string and the parsed-row array in memory before serialising to JSON, so very large exports can stall on lower-RAM devices. If your CSV is larger, split it into chunks (each one keeping the header row), convert each chunk, and concatenate the resulting JSON arrays. For multi-million-row jobs the right tool is a streaming parser like `csv-parse` in Node or `pandas.read_csv` in Python — this tool is built for the everyday "paste a sheet export into a webhook body" case.
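The split-convert-concatenate workaround can be sketched as below. Caveat: this naive splitter assumes no embedded newlines inside quoted fields — a chunker that respects quoting would need the full parser:

```javascript
// Break a large CSV into line-based chunks, repeating the header
// row at the top of each chunk so every piece converts standalone.
function chunkCsv(text, rowsPerChunk) {
  const lines = text.split(/\r?\n/).filter(l => l !== "");
  const [header, ...body] = lines;
  const chunks = [];
  for (let i = 0; i < body.length; i += rowsPerChunk) {
    chunks.push([header, ...body.slice(i, i + rowsPerChunk)].join("\n"));
  }
  return chunks;
}
```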

Will this tool stay free?

The basic workflow is designed to stay free. Paid upgrades later will focus on bigger limits, batch work, OCR, saved presets, and ad-free use.