How to Convert JSON to CSV Using jq (Command Line)
Complete guide to converting JSON to CSV with jq — from basic @csv filters to nested fields, dynamic headers, null handling, and real API response patterns. Includes alternatives when jq isn't the right tool.
- Prerequisites
- The basics: `@csv` and `@tsv`
- Flat array of objects
- Hard-coded columns (explicit field list)
- With a header row
- Write directly to a file
- Dynamic headers from the first record
- Handling nested objects
- Handling null and missing fields
- Null-safe dynamic columns
- API envelope patterns
- Stripe-style (`data` key)
- GitHub-style (direct array with nested user objects)
- Nested `results` key (common in paginated APIs)
- Expanding arrays of objects (one row per nested item)
- Filtering before converting
- Reformatting dates and numbers
- Unix timestamp to ISO string
- Truncate to date only
- Amount in cents to dollars
- Round to 2 decimal places
- Handling different fields across records
- Processing multiple JSON files
- Handling large files with jq
- Other command-line tools worth knowing
- miller (`mlr`)
- csvkit's `in2csv`
- Python one-liner (for quick scripting)
- When to use each
- Putting it together: a real Stripe export
- Frequently Asked Questions
- Why does `jq '.[] | @csv'` give an error?
- How do I handle fields that contain commas or newlines?
- Why is `jq` outputting `null` instead of empty strings?
- Can jq output TSV instead of CSV?
- My JSON is one object per line (NDJSON) — does jq handle that?
- Is there a way to convert JSON to CSV without installing anything?
jq is the standard tool for JSON on the command line. It's fast, composable, and available everywhere — Linux, Mac, WSL, CI pipelines. Converting JSON to CSV with jq is one of the most common things developers reach for, and also one of the most reliably frustrating when the JSON gets complicated.
This guide goes from the first @csv filter through the patterns that actually show up in production: nested objects, sparse schemas, API envelopes, nested arrays, timestamp reformatting, and filtering before conversion. Each example uses realistic data, not {"name": "Alice"}.
If you want a no-install option for one-off files, the JSON to CSV converter handles everything on this page automatically — no jq syntax to remember. But if you're scripting, automating, or living in the terminal, read on.
Prerequisites
Install jq if you haven't:
# macOS
brew install jq
# Ubuntu/Debian
sudo apt install jq
# Windows (via Chocolatey)
choco install jq
# Or download the binary directly from https://jqlang.org
Verify: jq --version → jq-1.7.1 or similar.
The basics: @csv and @tsv
jq has two built-in output formats for tabular data: @csv and @tsv.
- @csv wraps fields in double quotes and escapes internal quotes — valid RFC 4180 CSV
- @tsv uses tab separators and backslash-escapes — simpler but tab-sensitive
Both take an array as input and return a single string. The -r flag (--raw-output) prints that string without JSON string escaping — you need -r every time.
echo '["Alice", "Engineering", 95000]' | jq -r '@csv'
# "Alice","Engineering",95000
Flat array of objects
Starting point — the JSON everyone uses in tutorials:
[
{"id": 1, "name": "Alice Chen", "dept": "Engineering", "salary": 95000},
{"id": 2, "name": "Bob Kumar", "dept": "Design", "salary": 78000},
{"id": 3, "name": "Sara Mills", "dept": "Management", "salary": 110000}
]
Hard-coded columns (explicit field list)
jq -r '.[] | [.id, .name, .dept, .salary] | @csv' employees.json
Output:
1,"Alice Chen","Engineering",95000
2,"Bob Kumar","Design",78000
3,"Sara Mills","Management",110000
With a header row
jq -r '["id","name","dept","salary"], (.[] | [.id, .name, .dept, .salary]) | @csv' employees.json
Output:
"id","name","dept","salary"
1,"Alice Chen","Engineering",95000
2,"Bob Kumar","Design",78000
3,"Sara Mills","Management",110000
The trick: the comma operator emits the header array first, then each data row; both streams get piped through @csv.
Write directly to a file
jq -r '["id","name","dept","salary"], (.[] | [.id, .name, .dept, .salary]) | @csv' \
employees.json > employees.csv
Dynamic headers from the first record
Hard-coding column names breaks when the schema changes. This pattern reads the keys from the first record and uses them for both the header and each row:
jq -r '
(.[0] | keys_unsorted) as $keys |
$keys,
(.[] | [.[$keys[]]]) |
@csv
' employees.json
keys_unsorted preserves the key order as it appears in the JSON (insertion order). keys would sort alphabetically.
.[$keys[]] is jq's way of iterating over an array of key names and looking each one up — equivalent to [.id, .name, .dept, .salary] but derived from the data.
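To sanity-check what $keys will contain before trusting the output, run the key extraction on its own (-c prints the array on one line):
jq -c '.[0] | keys_unsorted' employees.json
# ["id","name","dept","salary"]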
Caveat: If different records have different keys, this only outputs columns from the first record. For sparse schemas, see the handling different fields section below.
Handling nested objects
Nested objects don't map cleanly to @csv — you have to pull out specific sub-fields explicitly.
[
{
"id": 1,
"name": "Alice Chen",
"address": {"city": "Austin", "state": "TX"},
"contact": {"email": "alice@example.com", "phone": "512-555-0100"}
}
]
jq -r '
["id","name","city","state","email"],
(.[] | [.id, .name, .address.city, .address.state, .contact.email]) |
@csv
' nested.json
Output:
"id","name","city","state","email"
1,"Alice Chen","Austin","TX","alice@example.com"
Use dot-notation to reach any depth: .address.city, .payment.card.last4, .meta.tags[0].
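If the nested object itself might be missing on some records, the lookup returns null rather than erroring, so you can default it like any other field (covered in the next section):
jq -r '.[] | [.id, .name, (.address.city // ""), (.contact.email // "")] | @csv' nested.json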
Handling null and missing fields
If a field is missing on some records, jq outputs null — which becomes the literal string null in your CSV. Usually you want an empty string instead.
jq -r '.[] | [(.id // ""), (.name // ""), (.email // "")] | @csv' data.json
// is jq's "alternative" operator — it substitutes the right side if the left is null or false.
For a default value other than empty string:
jq -r '.[] | [.id, (.status // "unknown"), (.score // 0)] | @csv' data.json
Null-safe dynamic columns
Combining dynamic key extraction with null safety. One subtlety: applying // directly to the .[$keys[]] generator drops null results instead of replacing them, which shifts columns out of alignment. Collect the values into an array first, then apply the default with map:
jq -r '
(.[0] | keys_unsorted) as $keys |
$keys,
(.[] | [.[$keys[]]] | map(. // "")) |
@csv
' data.json
API envelope patterns
Real APIs wrap the data array in a response object. Navigate to the array first.
Stripe-style (data key)
{
"object": "list",
"data": [
{"id": "pi_001", "amount": 4999, "currency": "usd", "status": "succeeded"},
{"id": "pi_002", "amount": 1200, "currency": "usd", "status": "requires_payment_method"}
],
"has_more": false
}
jq -r '
["id","amount","currency","status"],
(.data[] | [.id, .amount, .currency, .status]) |
@csv
' stripe_response.json
GitHub-style (direct array with nested user objects)
[
{
"number": 42,
"title": "Fix login redirect",
"state": "open",
"user": {"login": "alicechen"},
"created_at": "2025-11-01T14:23:00Z"
}
]
jq -r '
["number","title","state","author","created_at"],
(.[] | [.number, .title, .state, .user.login, .created_at]) |
@csv
' issues.json
Nested results key (common in paginated APIs)
# .results.items[]
jq -r '.results.items[] | [.id, .name] | @csv' response.json
# Or, if you're not sure of the depth, recursively search for arrays of objects
jq -r '.. | arrays | .[] | select(type == "object") | [.id, .name] | @csv' response.json
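Recursive search is a blunt instrument: it can match arrays you didn't intend. When the shape is unfamiliar, it's usually quicker to inspect the envelope's top-level keys and drill down by hand (example output shown for a typical paginated response):
jq -c 'keys' response.json
# e.g. ["count","next","previous","results"]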
Expanding arrays of objects (one row per nested item)
When each record contains a sub-array and you want one CSV row per sub-array item:
[
{
"order_id": "ORD-001",
"customer": "Alice Chen",
"items": [
{"sku": "WIDGET-A", "qty": 2, "price": 29.99},
{"sku": "GADGET-B", "qty": 1, "price": 49.99}
]
}
]
jq -r '
["order_id","customer","sku","qty","price"],
(.[] | . as $order | .items[] | [$order.order_id, $order.customer, .sku, .qty, .price]) |
@csv
' orders.json
Output:
"order_id","customer","sku","qty","price"
"ORD-001","Alice Chen","WIDGET-A",2,29.99
"ORD-001","Alice Chen","GADGET-B",1,49.99
. as $order saves the parent object so you can still access it after iterating into .items[].
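One gotcha: an order whose items array is empty disappears entirely, because .items[] produces no output. If you'd rather keep such orders as a row with blank item columns, one sketch:
jq -r '
["order_id","customer","sku","qty","price"],
(.[] | . as $order |
(if (.items | length) > 0 then .items[] else {} end) |
[$order.order_id, $order.customer, (.sku // ""), (.qty // ""), (.price // "")]) |
@csv
' orders.json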
Filtering before converting
jq's select() filters records before they reach @csv:
# Only active users
jq -r '.[] | select(.status == "active") | [.id, .name, .email] | @csv' users.json
# Salary above threshold
jq -r '.[] | select(.salary >= 100000) | [.id, .name, .salary] | @csv' employees.json
# Multiple conditions
jq -r '.[] | select(.dept == "Engineering" and .status == "active") | [.id, .name] | @csv' data.json
# Exclude nulls on a specific field
jq -r '.[] | select(.email != null) | [.id, .name, .email] | @csv' users.json
# Contains string (any() avoids duplicate rows when several tags match)
jq -r '.[] | select(any(.tags[]; . == "premium")) | [.id, .name] | @csv' users.json
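For substring or pattern matching on string fields, test() takes a regular expression (the // "" guard keeps test from erroring on records where the field is null):
jq -r '.[] | select((.email // "") | test("@example\\.com$")) | [.id, .name, .email] | @csv' users.json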
Reformatting dates and numbers
jq's date support is minimal, but the built-ins todate, gmtime, and strftime cover the common transformations:
Unix timestamp to ISO string
jq -r '.[] | [.id, (.created_at | todate)] | @csv' data.json
todate converts a Unix epoch integer to an ISO 8601 string: 2025-11-01T14:23:00Z.
Truncate to date only
jq -r '.[] | [.id, (.created_at | todate | .[0:10])] | @csv' data.json
# 2025-11-01
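For formats beyond ISO 8601, convert the epoch to broken-down time with gmtime, then format with strftime:
jq -r '.[] | [.id, (.created_at | gmtime | strftime("%b %d %Y"))] | @csv' data.json
# 1,"Nov 01 2025"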
Amount in cents to dollars
jq -r '.[] | [.id, (.amount / 100)] | @csv' payments.json
Round to 2 decimal places
jq -r '.[] | [.id, (.price * 100 | round / 100)] | @csv' products.json
Handling different fields across records
When records have different sets of keys (sparse JSON), dynamic column detection from the first record will miss fields that only appear later.
The cleanest jq solution is two passes: one to collect all keys, one to write rows.
# Pass 1: collect all unique keys (sorted) as a JSON array
KEYS=$(jq '[.[] | keys[]] | unique' data.json)
# Pass 2: output with those keys
jq -r --argjson keys "$KEYS" '
$keys,
(.[] | [.[$keys[]]] | map(. // "")) |
@csv
' data.json
For a cleaner one-liner, pass the key list as a jq argument:
jq -r '
([.[] | keys] | add | unique) as $keys |
$keys,
(.[] | [.[$keys[]]] | map(. // "")) |
@csv
' data.json
[.[] | keys] | add flattens all per-record key arrays into one, then unique deduplicates.
Processing multiple JSON files
# Merge files and convert (all must have the same schema)
jq -rs '
(.[0][0] | keys_unsorted) as $keys |
$keys,
(.[][] | [.[$keys[]]]) |
@csv
' file1.json file2.json file3.json > combined.csv
-s (--slurp) reads all input files into a single array. .[0][0] gets the first record of the first file for key extraction. .[][] iterates over files, then records.
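If the files don't share a schema, combine slurp with the union-of-keys pattern from the sparse-schema section above:
jq -rs '
([.[][] | keys] | add | unique) as $keys |
$keys,
(.[][] | [.[$keys[]]] | map(. // "")) |
@csv
' file1.json file2.json > combined.csv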
Handling large files with jq
jq reads top-level JSON values one at a time, but a single large array is one top-level value: jq parses the whole thing into memory before iterating. For very large files, use the --stream flag:
jq -rn --stream '
fromstream(1 | truncate_stream(inputs)) |
[.id, .name, .email] |
@csv
' large.json
--stream emits path/value pairs rather than parsed objects. fromstream(1 | truncate_stream(inputs)) reconstructs top-level array items one at a time without holding the full array in memory. It's verbose but memory-constant.
For files over a few hundred MB, Python with ijson is usually easier. See how to convert large JSON files to CSV for that approach.
Other command-line tools worth knowing
jq is the right choice when you're already working with jq filters or need the flexibility of a full query language. For some tasks, other tools are faster or simpler.
miller (mlr)
Miller is purpose-built for tabular data conversion. JSON to CSV in one flag:
mlr --json --ocsv cat data.json > output.csv
Miller handles nested JSON by default (flattens to dot-notation), handles sparse schemas, and is significantly faster than jq for large files. Install: brew install miller / apt install miller.
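For the nested example from earlier, that auto-flattening means no field mapping at all. A sketch, assuming Miller 6's default dot flatten separator:
mlr --ijson --ocsv cat nested.json
# id,name,address.city,address.state,contact.email,contact.phone
# 1,Alice Chen,Austin,TX,alice@example.com,512-555-0100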
csvkit's in2csv
in2csv --format json data.json > output.csv
Part of csvkit — a Python-based suite of CSV utilities. Handles flat JSON arrays cleanly. Install: pip install csvkit.
Python one-liner (for quick scripting)
python3 -c "
import json, csv, sys
data = json.load(open('data.json'))
w = csv.DictWriter(sys.stdout, fieldnames=data[0].keys())
w.writeheader(); w.writerows(data)
" > output.csv
No dependencies beyond the standard library. Useful in environments where you can't install packages.
When to use each
| Tool | Best for |
|------|----------|
| jq | Already in a jq pipeline; need filtering, transformation |
| miller | Fastest CLI conversion; nested JSON; large files |
| csvkit | Python ecosystem; need csvkit's other tools |
| Python | Custom logic; scripting; when ijson is needed for large files |
| Online converter | One-off files; no install; sharing with non-technical colleagues |
Putting it together: a real Stripe export
# Download all charges via Stripe API (paginated)
# Assumes you've fetched and saved to stripe_charges.json
jq -r '
["id", "amount_usd", "currency", "status", "customer_email", "created"],
(.data[] | [
.id,
(.amount / 100),
.currency,
.status,
(.customer // "" | if type == "object" then .email else . end),
(.created | todate | .[0:10])
]) |
@csv
' stripe_charges.json > charges.csv
echo "Exported $(wc -l < charges.csv) rows"
This handles:
- Amount in cents → dollars (/ 100)
- Customer field that's either a string ID or an expanded object (the if type == "object" branch)
- Unix timestamp → date-only string
- Explicit header row
Frequently Asked Questions
Why does jq '.[] | @csv' give an error?
@csv requires an array as input, not an object. .[] on an array of objects yields objects one at a time. You need to wrap the fields you want in an array first: .[] | [.id, .name] | @csv.
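Side by side:
# Fails: @csv receives an object
jq -r '.[] | @csv' employees.json
# Works: fields wrapped in an array first
jq -r '.[] | [.id, .name] | @csv' employees.json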
How do I handle fields that contain commas or newlines?
You don't need to do anything — @csv handles this automatically. Fields with commas are wrapped in double quotes, internal double quotes are doubled, and newlines are preserved inside the quoted field. That's what makes @csv correct RFC 4180 CSV.
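A quick demonstration with a comma and an embedded quote:
jq -rn '["Hello, world", "She said \"hi\""] | @csv'
# "Hello, world","She said ""hi"""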
Why is jq outputting null instead of empty strings?
When a field doesn't exist on a record, jq outputs null. Use the alternative operator // "" to substitute an empty string: (.missing_field // "").
Can jq output TSV instead of CSV?
Yes — use @tsv instead of @csv. TSV uses tab separators and backslash-escapes for special characters. It's simpler to parse but breaks if your data contains literal tabs. For most uses, @csv is the safer choice.
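Same pipeline, different format directive:
jq -r '.[] | [.id, .name, .dept] | @tsv' employees.json
# 1	Alice Chen	Engineering   (columns separated by literal tabs, no quoting)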
My JSON is one object per line (NDJSON) — does jq handle that?
Yes. jq processes multiple top-level JSON values by default, so NDJSON works naturally. Just drop the .[]: each line is already a single object.
# NDJSON — each line is a complete object
jq -r '[.id, .name, .email] | @csv' data.ndjson
No -s flag needed. jq reads each line as a separate input and applies the filter to each one.
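If a pattern needs the whole dataset as one array (the dynamic-header trick, for example), add -s to slurp the NDJSON lines into an array first:
jq -rs '
(.[0] | keys_unsorted) as $keys |
$keys,
(.[] | [.[$keys[]]] | map(. // "")) |
@csv
' data.ndjson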
Is there a way to convert JSON to CSV without installing anything?
Yes — the JSON to CSV converter runs entirely in your browser. Paste or upload your JSON and download the CSV. It handles nested objects (flattened to dot notation), arrays within records, sparse schemas, and files up to several hundred megabytes. No account, no install, nothing uploaded to a server.