How to Import JSON into Excel (The Right Way)
Copy-pasting JSON into Excel doesn't work. This guide covers the correct ways to convert JSON to Excel—including how to handle nested objects and arrays.
- Method 1: Excel Power Query (Get and Transform)
- Method 2: Convert JSON to CSV First
- Method 3: Python with pandas and openpyxl
- Method 4: Use an Online Converter
- The Nested JSON Problem in Excel
- Handling Arrays of Objects
- Working with Large JSON Files
- Formatting Tips for JSON Data in Excel
- Frequently Asked Questions
- Can I import JSON directly into Excel without Power Query?
- Why does Excel show "List" or "Record" when I import JSON?
- How do I convert a JSON array to Excel rows?
- What is the maximum JSON file size that Excel can handle?
The intuitive approach — copy a JSON string, paste it into Excel — produces exactly one thing: a single cell containing the entire JSON string as plain text. Excel has no idea what JSON is. It sees a string starting with { and treats it like any other value.
From there, options get confusing quickly. Excel's built-in help points you toward Power Query, which works in some cases but silently produces List and Record placeholders wherever it encounters nested data. Python scripts fix that, but require setup and code knowledge not everyone has. And online converters range from genuinely useful to tools that mangle your data in surprising ways.
This guide covers every practical method, with honest assessments of when each one is the right tool.
Method 1: Excel Power Query (Get and Transform)
Power Query is Microsoft's official answer to importing structured data like JSON. It's built into Excel 2016 and later (Windows and Mac).
Step-by-step:
- Open Excel and go to Data → Get Data → From File → From JSON.
- Select your .json file and click Import.
- Power Query Editor opens. If your JSON is an array of objects, you'll see a column called List. Click To Table in the ribbon, then click OK.
- You'll see a column called Column1 with Record in each cell. Click the expand icon (two arrows) in the column header.
- Select which fields to include and click OK.
- Click Close and Load to load the data into a worksheet.
For flat JSON, this works well. For nested JSON, it starts to break down.
The nested data problem with Power Query: When Power Query encounters a nested object, it shows Record in that cell. When it encounters an array, it shows List. You can click through to expand each one, but this is a manual, one-level-at-a-time process. Deeply nested structures require multiple expand operations, and arrays of objects require a separate "Expand to New Rows" step. Getting a 3-level-deep JSON into a clean flat table can take 15+ manual steps.
When Power Query is the right choice: For simple JSON with one or two levels of nesting, Power Query is perfectly adequate and doesn't require any code. If you need to refresh the data from the same JSON source regularly (Power Query can reload from a URL or file path), it's also worth the setup time.
Method 2: Convert JSON to CSV First
The intermediate-step approach: convert your JSON to CSV, then open the CSV in Excel. This is often the most reliable path because CSV-to-Excel is a solved problem with decades of tooling behind it.
When to use this approach:
- Your JSON is flat or mostly flat
- You're already familiar with the CSV import workflow
- You need the data in Excel quickly without writing code
- The JSON has nested objects that you want flattened using dot notation
Use our JSON to CSV converter to flatten and convert the JSON, then open the resulting CSV in Excel using Data → From Text/CSV (not by double-clicking, which can cause encoding and delimiter issues).
The limitation: if your JSON has arrays of objects that should become separate rows, the CSV intermediate step requires those to be expanded first. The converter handles this, but you'll want to configure the array handling before downloading the CSV.
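If you'd rather script the intermediate step than use a converter, a minimal flattening pass needs only the standard library. This is a sketch, not a full converter: nested objects are flattened with dot notation, but arrays are left as-is (the limitation described above), and the helper names are my own:

```python
import csv
import json

def flatten(obj, prefix=""):
    """Recursively flatten nested dicts using dot-notation keys.
    Arrays are NOT expanded; they pass through unchanged."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value
    return flat

def json_to_csv(json_path, csv_path):
    """Convert a top-level JSON array of objects to a flat CSV."""
    with open(json_path, encoding="utf-8") as f:
        records = [flatten(r) for r in json.load(f)]
    # Union of keys across all records, preserving first-seen order
    fieldnames = list(dict.fromkeys(k for r in records for k in r))
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(records)

# Usage sketch: json_to_csv("export.json", "export.csv")
```

Open the resulting CSV via Data → From Text/CSV as described above.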
Method 3: Python with pandas and openpyxl
For repeated conversions, large files, or nested data that needs careful handling, Python gives you full control. This approach requires Python 3 and the pandas and openpyxl packages (pip install pandas openpyxl).
```python
import pandas as pd
import json

def json_to_excel(json_path, excel_path, array_field=None):
    """
    Convert a JSON file to Excel, handling nested objects via flattening.

    Args:
        json_path: Path to the input JSON file
        excel_path: Path for the output .xlsx file
        array_field: If set, expand this array field into separate rows
    """
    with open(json_path, encoding="utf-8") as f:
        data = json.load(f)

    # Handle both a list of records and a single object
    if isinstance(data, dict):
        data = [data]

    if array_field:
        # Expand an array of objects into multiple rows
        df = pd.json_normalize(
            data,
            record_path=[array_field],
            meta=[k for k in data[0] if k != array_field],
            sep=".",
            errors="ignore",
        )
    else:
        # Flatten nested objects only
        df = pd.json_normalize(data, sep=".")

    # Write to Excel with column width autofit
    with pd.ExcelWriter(excel_path, engine="openpyxl") as writer:
        df.to_excel(writer, sheet_name="Data", index=False)

        # Auto-fit column widths
        worksheet = writer.sheets["Data"]
        for col_idx, column in enumerate(df.columns, 1):
            max_len = max(
                len(str(column)),
                df[column].astype(str).map(len).max() if len(df) > 0 else 0,
            )
            # Cap at 60 characters to keep the sheet readable
            worksheet.column_dimensions[
                worksheet.cell(1, col_idx).column_letter
            ].width = min(max_len + 2, 60)

    print(f"Wrote {len(df)} rows, {len(df.columns)} columns to {excel_path}")

# Example usage
json_to_excel("orders.json", "orders.xlsx", array_field="line_items")
```
For JSON with multiple distinct sections you want on separate sheets:
```python
def json_to_excel_multisheets(json_path, excel_path):
    """Write different sections of a JSON file to separate sheets."""
    with open(json_path, encoding="utf-8") as f:
        data = json.load(f)

    with pd.ExcelWriter(excel_path, engine="openpyxl") as writer:
        for key, value in data.items():
            if isinstance(value, list) and value:
                df = pd.json_normalize(value, sep=".")
                # Sheet names max 31 chars, no special characters
                sheet_name = str(key)[:31].replace("/", "_").replace("\\", "_")
                df.to_excel(writer, sheet_name=sheet_name, index=False)
                print(f"Sheet '{sheet_name}': {len(df)} rows")

# Usage: data.json = {"customers": [...], "orders": [...], "products": [...]}
json_to_excel_multisheets("data.json", "data_multi.xlsx")
```
Method 4: Use an Online Converter
For one-off conversions without writing code, our JSON to Excel converter handles the full conversion pipeline in your browser. It supports nested objects (flattened with dot notation), arrays of objects (expanded to rows or kept as inline strings), and produces a properly formatted .xlsx file with auto-sized columns.
The converter also handles the multiple-sheet case: if your JSON is an object where each top-level key maps to an array, it can place each array on its own sheet. This is particularly useful for database exports and multi-entity API responses.
The Nested JSON Problem in Excel
The core challenge is that Excel's data model is strictly two-dimensional. A cell contains a scalar value — a string, number, date, or boolean. It cannot contain a structured object.
Consider a nested path like orders.items[0].price. Power Query shows Record where it encounters the orders object; expanding it reveals List in the items column; that list then needs an "Expand to New Rows" step and one more expansion to reach price. Each expansion is a UI step, and none of it is scriptable without Power Query M syntax.
The Python approach (and the online converter) solves this by resolving the nesting before Excel ever sees the data. By the time a column lands in the spreadsheet, it's already been given a name like orders.items.price — a flat string that Excel handles perfectly.
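A minimal pd.json_normalize call shows this flattening in action (the data here is made up):

```python
import pandas as pd

# Nested objects become dot-notation column names in a flat table
data = [{"id": 1, "customer": {"name": "Ada", "address": {"city": "Boston"}}}]
df = pd.json_normalize(data, sep=".")
print(list(df.columns))
# ['id', 'customer.name', 'customer.address.city']
```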
The key architectural difference: Power Query resolves nesting lazily and interactively. Code-based approaches resolve it eagerly and consistently, which is better for automation and reproducibility.
Handling Arrays of Objects
Arrays of objects — order line items, event attendees, product variants — present a choice:
Expand to rows: Each array element becomes its own row, with parent fields repeated. An order with 5 line items becomes 5 rows. This is the right choice when you need to filter, sort, or sum based on individual array elements (e.g., total revenue by SKU).
Separate sheet: Put the parent objects on one sheet and the array elements on another, linked by an ID. This mirrors a relational database structure and works well in Excel when you need to use VLOOKUP or Power Pivot to join them.
Inline string: Join primitive values from the array into a single cell ("sku1 | sku2 | sku3"). Readable at a glance, but not queryable. Appropriate when you just need to see the values, not analyze them.
In the Python example above, record_path handles row expansion. For the inline approach:
```python
def inline_list_columns(df, sep=" | "):
    """Convert list-valued columns to joined strings."""
    for col in df.columns:
        if df[col].apply(lambda x: isinstance(x, list)).any():
            df[col] = df[col].apply(
                lambda x: sep.join(str(i) for i in x) if isinstance(x, list) else x
            )
    return df
```
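The separate-sheet option can be sketched in the same pandas style. This is a minimal illustration, not a full implementation: it assumes every record contains the array field, and the function, field, and file names are my own:

```python
import pandas as pd

def split_parent_child(records, array_field, id_field, excel_path):
    """Write parent records and their array elements to separate sheets,
    linked by the parent's ID — a relational-style layout suited to
    VLOOKUP or Power Pivot joins. Assumes every record has array_field."""
    # Parent sheet: flattened records minus the raw array column
    parents = pd.json_normalize(records, sep=".").drop(columns=[array_field])
    # Child sheet: one row per array element, tagged with the parent ID
    children = pd.json_normalize(records, record_path=array_field,
                                 meta=[id_field], sep=".")
    with pd.ExcelWriter(excel_path, engine="openpyxl") as writer:
        parents.to_excel(writer, sheet_name="Parents", index=False)
        children.to_excel(writer, sheet_name="Children", index=False)

# Usage sketch:
# split_parent_child(orders, "line_items", "order_id", "orders_split.xlsx")
```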
Working with Large JSON Files
Excel has a hard row limit of 1,048,576 rows per sheet (about 1 million). If your JSON file, after expansion, produces more rows than that, the overflow simply won't fit: Excel truncates oversized CSV imports with only a brief "not loaded completely" notice that's easy to miss, and writing an oversized .xlsx from pandas/openpyxl fails with an error.
For large files:
- Check row count before converting. In Python: len(df) before writing. If it exceeds a million, split the data.
- Split into multiple sheets or files. Divide by date range, region, or another logical partition so each sheet stays under the limit.
- Consider whether Excel is the right tool. For files over a few hundred thousand rows, Excel becomes slow and unstable. DuckDB, pandas, or a proper BI tool like Tableau or Power BI handles large datasets much better.
```python
# Split a large dataset into multiple Excel files
ROWS_PER_FILE = 900_000  # Leave headroom below the 1M limit

for i, chunk_start in enumerate(range(0, len(df), ROWS_PER_FILE)):
    chunk = df.iloc[chunk_start : chunk_start + ROWS_PER_FILE]
    output_path = f"data_part_{i + 1}.xlsx"
    chunk.to_excel(output_path, index=False)
    print(f"Wrote {len(chunk)} rows to {output_path}")
```
Formatting Tips for JSON Data in Excel
A few gotchas specific to JSON data in Excel:
Number precision. JSON numbers are IEEE 754 floating-point. Excel also uses floating-point, but it displays and rounds differently. A JSON value like 0.30000000000000004 (the classic artifact of computing 0.1 + 0.2 in binary floating-point) will appear that way in Excel unless you format the column as a number with 2 decimal places. Use openpyxl or Excel's Format Cells dialog to apply number formatting.
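A minimal openpyxl sketch of applying a number format (file and column names here are made up):

```python
from openpyxl import Workbook

# Store the raw float but display it rounded to two decimal places
wb = Workbook()
ws = wb.active
ws.append(["price"])
cell = ws.cell(row=2, column=1, value=0.30000000000000004)
cell.number_format = "0.00"  # Excel displays 0.30; the stored value is unchanged
wb.save("prices.xlsx")
```

Note that the format only changes the display; the underlying cell value keeps full precision.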
Date parsing. JSON dates are strings ("2024-03-15T09:22:00Z"). Excel doesn't automatically recognize ISO 8601 dates. In Python, convert date columns before writing:
```python
date_columns = ["createdAt", "updatedAt", "order_date"]
for col in date_columns:
    if col in df.columns:
        df[col] = pd.to_datetime(df[col], utc=True).dt.tz_localize(None)
```
Once stored as actual datetime values, Excel will format and sort them correctly.
Phone numbers and zip codes. JSON stores these as strings ("02134", "+1-617-555-0100"), which is correct. Excel may interpret them as numbers and strip leading zeros. Force string format in openpyxl by setting number_format = "@" on those columns, or prefix values with an apostrophe to force text interpretation.
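A quick sketch of the text-format fix in openpyxl (file name and values are invented):

```python
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws.append(["zip"])
for row, zip_code in enumerate(["02134", "00501"], start=2):
    cell = ws.cell(row=row, column=1, value=zip_code)
    cell.number_format = "@"  # "@" is Excel's Text format: leading zeros survive
wb.save("zips.xlsx")
```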
Frequently Asked Questions
Can I import JSON directly into Excel without Power Query?
Not natively — Excel has no built-in "Open JSON" option outside of Power Query. Your alternatives are: using Power Query (built into Excel 2016+), converting to CSV first, using Python to write a .xlsx file directly, or using an online converter like our JSON to Excel converter.
Why does Excel show "List" or "Record" when I import JSON?
Power Query shows List when it encounters a JSON array and Record when it encounters a JSON object. These are Power Query's internal representations of those structures. You need to click the expand icon in the column header to drill into them. For deeply nested data, this can require many manual expand steps. Using a code-based approach or an online converter that pre-flattens the data avoids this entirely.
How do I convert a JSON array to Excel rows?
In Python: pd.json_normalize(your_list).to_excel("output.xlsx", index=False). In Power Query: after loading the file, click "To Table" when you see a List, then expand the Column1 column. With our JSON to Excel converter, upload the JSON file and the tool handles arrays of objects automatically.
What is the maximum JSON file size that Excel can handle?
Excel itself is limited to approximately 1,048,576 rows and 16,384 columns per sheet. For JSON files, the practical limit depends on how much nesting and expansion is involved. A 10MB flat JSON array might expand to 500,000 rows and load fine. A 10MB deeply nested JSON with large arrays could expand to well over 1 million rows and exceed Excel's limits. For files producing more than 800,000 rows, consider splitting the data or using a different analysis tool.
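A rough pre-flight check, assuming each array element becomes one output row (the data and the "line_items" field name here are invented; in practice you'd json.load the real file):

```python
# Hypothetical export where "line_items" is the array expanded to rows
data = [
    {"id": 1, "line_items": [{"sku": "A"}, {"sku": "B"}]},
    {"id": 2, "line_items": [{"sku": "C"}]},
    {"id": 3},  # no array: still contributes one row
]

EXCEL_ROW_LIMIT = 1_048_576

# One output row per array element; records without the array keep one row
rows = sum(len(rec.get("line_items", [])) or 1 for rec in data)
print(f"Expansion would produce {rows} rows")
if rows > EXCEL_ROW_LIMIT:
    print("Too large for a single sheet — split the data first")
```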