Tinker Tools

JSON to CSV Instantly

Convert JSON arrays to CSV format for spreadsheets. All processing is done locally in your browser—your data never leaves your device.

How it works

1. Paste JSON

Paste a JSON array of objects into the input area. Nested objects will be automatically flattened with dot notation for CSV columns.

2. Choose Delimiter

Select comma, tab, or semicolon as your CSV delimiter. Different regions and applications may prefer different separators.

3. Download CSV

Copy the CSV to your clipboard or download it as a .csv file. Open it directly in Excel, Google Sheets, or any spreadsheet app.

What is JSON to CSV Conversion?

JSON and CSV are two of the most common data formats in software development, and converting between them is something you will do regularly. JSON — JavaScript Object Notation — stores data as nested key-value pairs and arrays, which makes it ideal for APIs, configuration files, and document databases. CSV — Comma-Separated Values — stores data as rows and columns in plain text, which makes it ideal for spreadsheets, data analysis tools, and bulk imports. When you have a JSON array of objects and need to drop it into Excel, Google Sheets, or a SQL bulk loader, a JSON to CSV converter handles the translation for you without writing throwaway scripts.

The tricky part of this conversion is structural mismatch. CSV is inherently flat — every row has the same number of columns, and every cell holds a single scalar value. JSON is inherently hierarchical — objects contain other objects, arrays nest inside arrays, and a single record can have fields at five different depth levels. A good converter bridges this gap by flattening nested structures using dot notation. A field like address.city becomes a column header, and the value from each record fills the corresponding cell. Without this flattening step, nested data either gets lost or ends up as a raw JSON string stuffed into a single CSV cell, which defeats the purpose of converting in the first place.
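The flattening step described above can be sketched in a few lines of JavaScript. This is a minimal illustration, not the tool's actual internals; `flattenRecord` and its `depth` parameter are hypothetical names:

```javascript
// Flatten a nested record into single-level dot-notation keys.
// depth limits how many levels are expanded; anything deeper is
// serialized as a JSON string rather than dropped.
function flattenRecord(value, prefix = "", out = {}, depth = Infinity) {
  for (const [key, v] of Object.entries(value)) {
    const path = prefix ? prefix + "." + key : key;
    const isObj = v !== null && typeof v === "object";
    if (isObj && Array.isArray(v) && v.every((x) => x === null || typeof x !== "object")) {
      out[path] = v.join("; "); // primitive array -> one delimited cell
    } else if (isObj && depth > 0) {
      flattenRecord(v, path, out, depth - 1); // nested object/array -> recurse
    } else {
      out[path] = isObj ? JSON.stringify(v) : v; // scalar, or past the depth limit
    }
  }
  return out;
}
```

With this sketch, `{user: {profile: {name: "Alice"}}}` yields a `user.profile.name` key, while a shallow depth limit leaves deeper sub-objects as JSON strings in a single cell.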

The CSV format itself is governed by RFC 4180, which defines the rules for delimiters, quoting, line endings, and escape sequences. It sounds simple — just commas and newlines — but edge cases pile up fast. What happens when a field value contains a comma? Or a double quote? Or a literal newline character? RFC 4180 answers all of these questions, and a reliable converter follows these rules precisely. Ignoring them produces CSV files that break when imported into other tools, especially across different operating systems where line ending conventions differ.
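The quoting rules RFC 4180 prescribes fit in one small helper. This is an illustrative sketch (`escapeCsvField` is a made-up name), not a full CSV writer:

```javascript
// Quote a field per RFC 4180: wrap it in double quotes if it contains
// the delimiter, a double quote, or a line break, and escape embedded
// double quotes by doubling them.
function escapeCsvField(value, delimiter = ",") {
  const s = value === null || value === undefined ? "" : String(value);
  return s.includes(delimiter) || /["\r\n]/.test(s)
    ? '"' + s.replace(/"/g, '""') + '"'
    : s;
}
```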

Key Features and Benefits

  • Automatic Header Generation from JSON Keys The converter inspects every object in your JSON array and extracts a union of all unique keys to build the CSV header row. This matters when your JSON records are not uniform — some objects might have an email field while others do not. The converter creates a column for every key that appears in any record and fills in empty cells where a particular record lacks that field. You get a complete, consistent header without manually specifying column names.
  • Dot-Notation Flattening for Nested Objects Nested JSON objects get flattened into single-level keys using dot notation. A structure like {user: {profile: {name: "Alice"}}} becomes a column header user.profile.name with the value Alice. This approach preserves the path to every value, so you can reconstruct the original nesting if needed. The alternative — ignoring nested data or serializing entire sub-objects as strings — loses information and creates messy CSV output that no spreadsheet can parse meaningfully.
  • RFC 4180 Compliant Output The generated CSV strictly follows RFC 4180. Fields containing commas, double quotes, or newlines are wrapped in double quotes. Any double quote inside a field is escaped by doubling it — so a value like He said "hello" becomes "He said ""hello""" in the output. Line endings use CRLF as the spec requires. This compliance means the output imports cleanly into Excel, Google Sheets, LibreOffice Calc, database import tools, and any other software that respects the standard.
  • Array Value Handling When a JSON field contains an array of primitive values — like tags: ["javascript", "python", "go"] — the converter joins them into a single delimited string within the CSV cell, typically separated by semicolons or pipes to avoid conflict with the comma delimiter. Arrays of objects get flattened further, with indexed keys like items.0.name and items.1.name becoming separate columns. This gives you full visibility into array data without losing structure.
  • Custom Delimiter Support While comma is the default, the converter supports tab, semicolon, and pipe delimiters for output. Semicolons are the standard CSV delimiter in European locales where the comma serves as a decimal separator. Tab-separated values — TSV — work well for data that contains many commas naturally, like address fields or product descriptions. Choosing the right delimiter upfront saves you from escaping headaches downstream.
  • Client-Side Processing for Data Privacy The conversion runs entirely in your browser. Your JSON data — which might contain customer records, API secrets, financial figures, or personal information — never leaves your machine. There is no upload, no server-side processing, no logging. For teams working under GDPR, HIPAA, or SOC 2 requirements, this is not a nice-to-have. It is a requirement.
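The header-union, empty-cell, and RFC 4180 behaviors listed above can be combined into a minimal converter. This sketch assumes already-flat records; `jsonToCsv` is an illustrative name, not the tool's actual code:

```javascript
function jsonToCsv(records, delimiter = ",") {
  const escape = (v) => {
    const s = v === null || v === undefined ? "" : String(v);
    return s.includes(delimiter) || /["\r\n]/.test(s)
      ? '"' + s.replace(/"/g, '""') + '"'
      : s;
  };
  // Pass 1: union of every key that appears in any record, sorted for a stable header.
  const headers = [...new Set(records.flatMap((r) => Object.keys(r)))].sort();
  // Pass 2: one row per record; records missing a key get empty cells.
  const rows = records.map((r) => headers.map((h) => escape(r[h])).join(delimiter));
  return [headers.map(escape).join(delimiter), ...rows].join("\r\n"); // CRLF per RFC 4180
}
```

Note how a record without an `email` field simply produces an empty cell under the `email` column rather than shifting the row.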

How to Convert JSON to CSV

  1. Prepare Your JSON Input

    Start with a JSON array of objects. Each object represents one row in the resulting CSV, and each key represents a column. Make sure your input is valid JSON — missing commas, trailing commas, and unquoted keys will cause parse errors. If your data comes from an API response, the array is often nested under a key like data or results, so extract just the array portion before converting. A quick way to validate: paste it into a JSON formatter first and confirm it parses without errors.

  2. Configure Flattening Depth

    Decide how deep you want nested objects flattened. A depth of 1 flattens only the top-level nested objects. A depth of 3 handles structures like order.shipping.address.zip. Setting the depth too shallow means nested data gets serialized as JSON strings in your CSV cells. Setting it too deep on deeply recursive structures can generate hundreds of columns. For most API responses, a depth of 3 or 4 covers everything you need without creating column sprawl.

  3. Choose Your Delimiter and Options

    Pick the output delimiter — comma for standard CSV, semicolon for European locale compatibility, tab for TSV. Decide whether to include a byte order mark (BOM) at the start of the file. Excel on Windows often needs the UTF-8 BOM (\uFEFF) to correctly display non-ASCII characters like accented letters, Chinese characters, or emoji. Without it, Excel may misinterpret the encoding and show garbled text. On macOS and Linux, the BOM is unnecessary and can actually cause problems with command-line tools.

  4. Run the Conversion

    Click convert and let the tool process your data. The converter iterates through every object in the array, collects all unique keys across all records, sorts them to produce a stable column order, and then writes each record as a row. Fields that exist in some records but not others get empty cells. The processing happens in a single pass for flat data and two passes for nested data — one to discover all flattened key paths, one to extract values.

  5. Review and Download the CSV

    Preview the first few rows of output to verify the conversion looks right. Check that nested fields expanded into the expected column headers, that array values serialized correctly, and that special characters did not break the quoting. Then download the file or copy it to your clipboard. The downloaded file uses the .csv extension and UTF-8 encoding by default. If you plan to open it in Excel, remember the BOM setting from step three — it makes the difference between readable text and question-mark soup.
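The steps above can be sketched end to end. The envelope keys (`results`, `data`) and variable names are illustrative, and RFC 4180 quoting is omitted for brevity since these sample values contain no special characters:

```javascript
// Step 1: records often arrive wrapped in an envelope; extract the array.
const payload = JSON.parse('{"results":[{"id":1,"name":"Zoë"},{"id":2,"name":"Ann"}]}');
const records = Array.isArray(payload) ? payload : payload.results ?? payload.data;

// Step 4: union of keys -> header row, then one row per record (comma delimiter).
const headers = [...new Set(records.flatMap((r) => Object.keys(r)))].sort();
const lines = [
  headers.join(","),
  ...records.map((r) => headers.map((h) => r[h] ?? "").join(",")),
];

// Step 3/5: prepend the UTF-8 BOM so Excel on Windows decodes "Zoë" correctly.
const csv = "\uFEFF" + lines.join("\r\n");
```

In a browser, the final string could then be offered as a download via `new Blob([csv], { type: "text/csv;charset=utf-8" })` and `URL.createObjectURL`.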

Expert Tips for JSON to CSV Conversion

Pay attention to data type loss during conversion. CSV is a text-only format — it has no concept of numbers, booleans, null, or dates as distinct types. The number 42 and the string "42" become identical in CSV. A boolean true becomes the text true. Null values become empty cells, which are indistinguishable from empty strings or missing fields. If you need to round-trip your data — convert to CSV and then back to JSON — you will need a schema or type map to restore the original types. Without one, everything comes back as strings.
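A type map for the round trip can be as simple as a per-column converter. The field names and the `restoreTypes` helper here are hypothetical examples, not part of any library:

```javascript
// Map each column to a converter; unmapped columns stay as strings.
const typeMap = { id: Number, active: (v) => v === "true", name: String };

function restoreTypes(row) {
  return Object.fromEntries(
    Object.entries(row).map(([k, v]) => [k, (typeMap[k] ?? String)(v)])
  );
}

restoreTypes({ id: "42", active: "true", name: "Ann" });
// → { id: 42, active: true, name: 'Ann' }
```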

Handle heterogeneous arrays carefully. If your JSON array contains objects with wildly different schemas — say, a mix of user records and transaction records in the same array — the converter will create a union of all keys from all objects. That means a lot of empty cells and a confusing spreadsheet. The better approach is to filter your JSON array before conversion, grouping objects by type and converting each group separately. A quick Array.filter() call or a jq command like jq '[.[] | select(.type == "user")]' does the job in seconds.
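The `Array.filter()` approach looks like this; the `type` discriminator field mirrors the example above and should be adapted to your data:

```javascript
const mixed = [
  { type: "user", name: "Ann" },
  { type: "transaction", amount: 9.5 },
  { type: "user", name: "Bob" },
];

// Split by record type, then convert each homogeneous group separately.
const users = mixed.filter((r) => r.type === "user");
const transactions = mixed.filter((r) => r.type === "transaction");
```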

Watch for key collisions after flattening. If your JSON has both a literal top-level key named user.name — a key that happens to contain a dot — and a nested object path that flattens to user.name, the flattener will produce two columns with the same header. Most tools resolve this by treating literal dots differently — some escape them, some use bracket notation like user[name] instead. Before converting production data, test with a small sample to see how your tool handles these edge cases. A silent collision that merges two different fields into one column can corrupt your dataset without any warning.

Consider streaming for large datasets. If your JSON array contains 100,000 or more records, loading the entire thing into memory, flattening it, and writing the CSV all at once can choke a browser tab or a Node.js process. Tools like json2csv in Node support streaming mode — they process one record at a time and write CSV rows incrementally. This keeps memory usage flat regardless of input size. For truly massive files — millions of records — use a command-line tool like jq piped into csvkit or Miller rather than a browser-based converter. The browser is great for quick conversions up to tens of thousands of rows, but it was not built for ETL workloads.

Related Tools

JSON to CSV conversion is one step in a larger data pipeline. You might format and validate your JSON first, convert it to CSV for analysis, then convert the CSV back to JSON after making edits in a spreadsheet. Comparing file sizes between the two formats helps you decide which one to use for storage and transfer — CSV is often smaller for flat tabular data, while JSON is more efficient for deeply nested structures. These tools chain together naturally in everyday data workflows.
