When I try to create a sheet with more than 200,000 rows and 100 columns, I get a Java heap out-of-memory error. I am using Aspose.Cells version 24.7 with Java 21.
You’re seeing an OutOfMemoryError when creating a worksheet with more than 200,000 rows and 100 columns because the default JVM heap is not large enough for that amount of data.
What you can try
- Increase the JVM heap size – start the JVM with a larger heap, for example:
  `java -Xms16G -Xmx16G -XX:+UseG1GC -jar YourApp.jar`
  A similar configuration was used in a discussion about large‑worksheet processing that helped avoid OOM [2].
- Confirm the new limits – after the workbook is created, print the heap statistics to verify the settings are applied:
  `System.out.println("Heap: " + Runtime.getRuntime().totalMemory() / (1024 * 1024) + " MB");`
- Monitor memory usage – observe the heap while the workbook is being built. Other large‑workbook operations (e.g., PDF conversion) have been reported to consume several gigabytes of memory, so allocating a multi‑gigabyte heap is often necessary [3].
- Consider data size – if OOM persists even with a large heap, try reducing the worksheet size or processing the data in smaller batches.
Next step
Run the code with the increased heap and watch the memory values. If the error remains, please provide a minimal code snippet that reproduces the issue and the exact JVM flags you are using so we can look further into the problem.
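To make watching the memory values easier, the one-line heap printout above can be wrapped in a small plain-JDK helper and called periodically while the workbook is filled. The class and method names here (`HeapMonitor`, `logHeap`) are illustrative, not part of Aspose.Cells:

```java
public class HeapMonitor {
    // Print current heap statistics; call this periodically, e.g. every 50,000 rows.
    static void logHeap(String label) {
        Runtime rt = Runtime.getRuntime();
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb = rt.maxMemory() / (1024 * 1024);
        System.out.println(label + ": used " + usedMb + " MB of max " + maxMb + " MB");
    }

    public static void main(String[] args) {
        logHeap("startup");
        // ... build the workbook here, logging between batches of rows ...
        logHeap("before save");
    }
}
```

If `max` reported here is far below the `-Xmx` value you passed, the flag is not reaching the JVM that actually runs your code (common with wrapper scripts or application servers).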
@Vishal038
Please apply the following memory setting to reduce memory usage:
workbook.getSettings().setMemorySetting(MemorySetting.MEMORY_PREFERENCE); // or MemorySetting.FILE_CACHE
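For context, here is a minimal sketch of where that call fits when building a large workbook. The row/column counts and file name are illustrative, and the setting is applied before any cells are written so the newly created cells collections use it:

```java
import com.aspose.cells.Cells;
import com.aspose.cells.MemorySetting;
import com.aspose.cells.SaveFormat;
import com.aspose.cells.Workbook;

public class MemoryPreferenceDemo {
    public static void main(String[] args) throws Exception {
        Workbook workbook = new Workbook();
        // Switch to the memory-preferred mode before writing any cell data.
        workbook.getSettings().setMemorySetting(MemorySetting.MEMORY_PREFERENCE);
        Cells cells = workbook.getWorksheets().get(0).getCells();
        for (int r = 0; r < 200_000; r++) {
            for (int c = 0; c < 100; c++) {
                cells.get(r, c).putValue("R" + (r + 1) + "C" + (c + 1));
            }
        }
        workbook.save("large.xlsx", SaveFormat.XLSX);
        workbook.dispose();
    }
}
```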
Handling large datasets is a common challenge, especially when creating a sheet with more than 200,000 rows and 100 columns runs into a Java heap out-of-memory error. Please note that the default approach loads all cell data into memory, which can be highly resource-intensive.
For your specific setup, we recommend enabling memory optimization mode (as suggested by @simon.zhao). Alternatively, if the dataset is exceptionally large, you might consider using the LightCells API. If you are utilizing `cells.get("A1").putValue(...)`, you could implement `LightCellsDataProvider` to stream rows to disk, which avoids loading millions of cells into RAM simultaneously. For further details, you may refer to the documentation here: Using LightCells API|Documentation
Additionally, we suggest increasing the JVM heap size to accommodate the memory requirements.
If the issue still persists, we would appreciate it if you could share a sample console Java application or program that reproduces the issue. This will allow us to investigate the problem more thoroughly and assist you further.
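As a rough illustration of the LightCells approach, the sketch below streams cell values at save time instead of materializing them in memory first. It is based on the LightCells documentation; the method signatures should be verified against your Aspose.Cells version, and the row/column counts and file name are placeholders:

```java
import com.aspose.cells.Cell;
import com.aspose.cells.LightCellsDataProvider;
import com.aspose.cells.OoxmlSaveOptions;
import com.aspose.cells.Row;
import com.aspose.cells.SaveFormat;
import com.aspose.cells.Workbook;

public class StreamingWriter {
    // Supplies cell values on demand during save, so they are never all held in RAM.
    static class RowStreamProvider implements LightCellsDataProvider {
        private final int totalRows;
        private final int totalColumns;
        private int row = -1;
        private int col = -1;

        RowStreamProvider(int totalRows, int totalColumns) {
            this.totalRows = totalRows;
            this.totalColumns = totalColumns;
        }

        @Override
        public boolean startSheet(int sheetIndex) {
            return sheetIndex == 0; // only feed the first worksheet
        }

        @Override
        public int nextRow() {
            row++;
            col = -1;
            return row < totalRows ? row : -1; // -1 signals no more rows
        }

        @Override
        public void startRow(Row r) {
            // row-level formatting could be applied here
        }

        @Override
        public int nextCell() {
            col++;
            return col < totalColumns ? col : -1; // -1 signals the row is finished
        }

        @Override
        public void startCell(Cell cell) {
            cell.putValue("R" + (row + 1) + "C" + (col + 1));
        }

        @Override
        public boolean isGatherString() {
            return false; // write strings inline instead of gathering a shared-string table
        }
    }

    public static void main(String[] args) throws Exception {
        Workbook workbook = new Workbook();
        OoxmlSaveOptions options = new OoxmlSaveOptions(SaveFormat.XLSX);
        options.setLightCellsDataProvider(new RowStreamProvider(500_000, 100));
        workbook.save("streamed.xlsx", options);
        workbook.dispose();
    }
}
```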
I am using this method to generate an Excel file based on row and column counts. At the moment it works up to 250,000 rows; above that it fails. My target is also to test Aspose.Cells read and write operations with up to 500,000 rows.
Right now the memory settings are:
-Xms512m
-Xmx8000m
public void writeExcel(int totalRows, int totalColumns) throws Exception {
    String fileName = "R" + totalRows + "C" + totalColumns + "WL" + System.currentTimeMillis() + ".xlsx";
    File directory = new File(this.excelOutputPath);
    if (!directory.exists()) {
        directory.mkdirs();
    }
    String savePath = new File(directory, fileName).getAbsolutePath();
    long startTime = System.nanoTime();
    Workbook workbook = null;
    Worksheet sheet = null;
    try {
        workbook = new Workbook(FileFormatType.XLSX);
        sheet = workbook.getWorksheets().get(0);
        Cells cells = sheet.getCells();
        for (int r = 0; r < totalRows; r++) {
            for (int c = 0; c < totalColumns; c++) {
                cells.get(r, c).putValue("R" + (r + 1) + "C" + (c + 1));
            }
        }
        sheet.autoFitColumns();
        workbook.save(savePath, SaveFormat.XLSX);
        long endTime = System.nanoTime();
        long durationNanos = endTime - startTime;
        double durationSeconds = durationNanos / 1_000_000_000.0;
        System.out.println(String.format("Time taken: %.3f seconds", durationSeconds));
        System.out.println("workbook licensed: " + workbook.isLicensed() + "|| Excel created successfully with " + totalRows + " rows and " + totalColumns + " columns at: " + savePath);
    } finally {
        if (sheet != null)
            sheet.dispose();
        if (workbook != null)
            workbook.dispose();
    }
}
Thanks for the code snippet and further details.
I ran your specific sample code with Aspose.Cells for Java v26.2 (please give it a try). It executed perfectly and took just 29 seconds to finish the task. I’m using JDK 22 on my computer, which has a core i7 processor and runs on Windows 11 with ample RAM. The resulting Excel file (165 MB) was produced successfully and is well-optimized.
int totalRows = 250000;
int totalColumns = 100;
String fileName = "R" + totalRows + "C" + totalColumns + "WL" + System.currentTimeMillis() + ".xlsx";
long startTime = System.nanoTime();
Workbook workbook = null;
Worksheet sheet = null;
try {
    workbook = new Workbook(FileFormatType.XLSX);
    sheet = workbook.getWorksheets().get(0);
    Cells cells = sheet.getCells();
    for (int r = 0; r < totalRows; r++) {
        for (int c = 0; c < totalColumns; c++) {
            cells.get(r, c).putValue("R" + (r + 1) + "C" + (c + 1));
        }
    }
    sheet.autoFitColumns();
    workbook.save("d:\\files\\" + fileName, SaveFormat.XLSX);
    long endTime = System.nanoTime();
    long durationNanos = endTime - startTime;
    double durationSeconds = durationNanos / 1_000_000_000.0;
    System.out.println(String.format("Time taken: %.3f seconds", durationSeconds));
    System.out.println("workbook licensed: " + workbook.isLicensed() + "|| Excel created successfully with " + totalRows + " rows and " + totalColumns + " columns at: " + "d:\\files\\");
} finally {
    if (sheet != null)
        sheet.dispose();
    if (workbook != null)
        workbook.dispose();
}
console output:
Time taken: 29.147 seconds
workbook licensed: true|| Excel created successfully with 250000 rows and 100 columns at: d:\files\
- Kindly ensure that you are using a system with an x64-based processor and adequate resources (e.g., sufficient RAM) for optimal performance.
- If possible, we recommend trying the latest version of Aspose.Cells for Java to benefit from the most recent improvements and updates.
- If a higher-configured system is not available, please consider the following suggestions:
- Enable memory optimization mode as recommended by @simon.zhao.
- Explore using the LightCells API. For instance, if you are using `cells.get("A1").putValue(...)`, implementing `LightCellsDataProvider` can help stream rows to disk, reducing the need to load millions of cells into RAM simultaneously. For more information, please refer to the documentation here: Using LightCells API|Documentation.
- Increase the JVM heap size further to meet the memory requirements.
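On the JVM side, fixing the heap bounds and enabling G1 string deduplication can sometimes help workloads dominated by repeated cell strings. The flags and heap sizes below are illustrative, and `YourApp.jar` is a placeholder:

```shell
# Fixed heap avoids resize pauses; string deduplication targets duplicate cell text.
java -Xms8G -Xmx8G \
     -XX:+UseG1GC -XX:+UseStringDeduplication \
     -jar YourApp.jar
```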
Hi,
Thank you for your suggestions.
I wanted to let you know that implementing the LightCellsDataProvider worked well for me. After applying it, I was able to increase the limit from 250,000 to 600,000 records, which is a significant improvement with the same memory size. I really appreciate your guidance on this.
I’d like to optimize the performance further if possible. Do you have any additional recommendations, such as configuration tweaks, best practices with LightCells, or JVM tuning tips, that could help push the limit even higher or improve memory efficiency further?
We are glad that LightCellsDataProvider works as a better solution for your case.
We are afraid there’s little room left for further performance optimization. With the LightCells APIs, most cell/row data, such as row/cell structure and properties and cell values (including numeric, boolean, error and formula values), can be written directly to the final XML or file stream without being kept in memory, which reduces the memory cost. However, some global data and references still need to be maintained in memory for the data/file structure of the generated file, and that memory consumption grows with the data volume.
If there is a large number of string values for cells, please implement LightCellsDataProvider.isGatherString() to return false, as we explained in the provided document about LightCells. For the other parts of the cell data, we are afraid there is no better solution than the general implementation of LightCells.