Hi Deepa,
Regarding the memory cost issue: it depends mainly on the size of the dataset imported into the workbook, provided you have not used other memory-consuming operations such as inserting drawing objects or applying many different cell styles. Also, Aspose.Cells for Java is a pure Java component; it does not allocate or manage memory in any special way by itself, and everything is handled by the JVM's GC mechanism. So, for your questions:
1) We are able to export large reports in single-user testing using Aspose. But when we run continued single-user, single-report testing for the same report multiple times, i.e., if we run the large report 7-8 times in succession, we notice JVM heap dumps due to Aspose out-of-memory exceptions. See the exception details below.
I think this may be because the report you create brings the JVM close to its memory threshold. As you know, the JVM's GC is not fully deterministic, so its behavior cannot be exactly the same for every one of several repeated tasks. Would you please also check whether there are any global objects in your environment that are created, but cannot be collected by GC, when you create the same report multiple times?
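To help check for that, here is a minimal, generic sketch (not Aspose-specific) of measuring used heap across repeated runs; if the reading keeps climbing run after run, some reference (a static field, cache, or listener) is keeping old reports alive. `buildReport()` is a hypothetical stand-in for your actual report-generation code.

```java
// LeakCheck.java -- measure used heap across repeated runs to spot
// objects that survive GC between iterations.
public class LeakCheck {

    // Hypothetical workload; replace with your real report generation.
    static void buildReport() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100_000; i++) sb.append(i);
    }

    // Used heap after requesting a collection, so readings are comparable.
    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        rt.gc();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeapBytes();
        for (int run = 1; run <= 8; run++) {
            buildReport();
            System.out.printf("run %d: used heap = %d KB%n",
                    run, usedHeapBytes() / 1024);
        }
        long growth = usedHeapBytes() - before;
        // Steady growth across runs suggests a leaked global reference.
        System.out.println("net growth = " + growth + " bytes");
    }
}
```

A heap profiler or the heap dump itself will give a more precise answer, but this quick check can tell you whether memory is actually accumulating between runs.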
2) Another question is: if we are able to load the data from the resultset into Aspose.Cells without an out-of-memory error, why does it happen while saving the workbook?
When saving the workbook, some extra memory is needed to hold intermediate data structures, such as the buffer for the resultant file data. If the used memory is already near the JVM's threshold after importing the data, an OutOfMemoryError may be thrown while saving the workbook.
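You can verify this situation programmatically. The sketch below (which assumes nothing about Aspose internals) reports how much heap headroom remains before the save step, so you can see how close the import has brought you to the `-Xmx` limit:

```java
// HeapHeadroom.java -- report remaining heap headroom before a
// memory-intensive step such as saving the workbook.
public class HeapHeadroom {

    // Bytes the heap can still grow before hitting the -Xmx limit.
    static long headroomBytes() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return rt.maxMemory() - used;
    }

    public static void main(String[] args) {
        System.out.printf("heap headroom before save: %d MB%n",
                headroomBytes() / (1024 * 1024));
        // If this number is small after importing the data, the save
        // step may fail while building its output buffers.
    }
}
```

Logging this value just before the save call in your failing runs would show whether the import alone is consuming nearly all of the configured heap.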
As for the questions in your prior post:
1) Tested trial version v2.2.0 but still got the JVM heap dumps due to Aspose. Do we need to test the licensed version or make code modifications? Is it different from the trial version? Can you also highlight the improvements in the v2.2.0 design vs. v1.7 with regard to rendering to Excel?
Well, there should not be much difference between the trial and licensed versions in terms of memory cost. There are many differences between v2.2.0 and v1.7, such as bug fixes, enhancements, and new features. Regarding memory cost, one significant improvement is in applying a large number of styles to cells.
2) Increased JVM memory settings from Min 50 MB, Max 256 MB to Min 512 MB, Max 1.5 GB: improved performance for a single report, but we still get memory dumps for concurrent reports.
Well, concurrent reports will certainly require more memory than a single one. We think the total memory cost is roughly the sum of what each individual report needs.
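For reference, the heap bounds you describe correspond to these JVM launch flags (the jar name here is only a placeholder for your application):

```shell
# Initial heap 512 MB, maximum heap 1.5 GB, as in your test.
java -Xms512m -Xmx1536m -jar report-app.jar
```

If N reports run concurrently, `-Xmx` needs to cover roughly N times the peak that one report requires, plus the overhead of saving.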
3) Chunking the data. Instead of processing all the rows at once, we have broken the resultset down to process 500 rows at a time: improved performance for a single report, but we still get memory dumps for concurrent reports.
Would you please explain this further? Do you mean that splitting one large resultset into multiple smaller resultsets and calling "importResultSet()" many times improves performance? That is strange, because we think the memory cost should depend on the total dataset imported into the Workbook object, not on how many import operations are used to load it.
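To make sure we understand the chunking you describe, here is a hypothetical sketch: one logical dataset is split into fixed-size batches, and each batch is imported at its own row offset. `importBatch` merely stands in for a call such as `importResultSet(...)`; the names and signatures here are illustrative, not the actual Aspose.Cells API.

```java
import java.util.ArrayList;
import java.util.List;

// ChunkedImport.java -- illustrative sketch of importing a dataset
// in fixed-size batches rather than in one call.
public class ChunkedImport {
    static final int BATCH_SIZE = 500;

    // Hypothetical stand-in for an import call such as
    // cells.importResultSet(rs, firstRow, 0) in your real code.
    static void importBatch(List<Object[]> batch, int firstRow) {
        // ... write one batch of rows into the workbook at firstRow
    }

    // Split 'rows' into BATCH_SIZE chunks, importing each at the
    // correct row offset; returns how many batches were imported.
    static int importInChunks(List<Object[]> rows) {
        int batches = 0;
        for (int start = 0; start < rows.size(); start += BATCH_SIZE) {
            int end = Math.min(start + BATCH_SIZE, rows.size());
            importBatch(rows.subList(start, end), start);
            batches++;
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Object[]> rows = new ArrayList<>();
        for (int i = 0; i < 1234; i++) rows.add(new Object[] { i, "row " + i });
        System.out.println("batches = " + importInChunks(rows));
    }
}
```

Note that every batch still ends up in the same Workbook object, which is why we expect the total memory footprint to be unchanged; chunking would mainly reduce the transient memory held by the resultset itself, not by the workbook.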
Do you have any operations on the Workbook other than importing plain data? In our tests, 500 MB of memory is enough to import simple data (such as Date, Number, and String values) into about 300,000 cells. If your dataset is simple too but consumes far more memory than that, would you please send us a simple test project? We will check it soon to see whether we can make some improvements.
Thank you.