Absurd memory usage of Aspose.Pdf

Let me first explain how we are using the Aspose.Pdf product. We have written a wrapper which uses generic tables, columns and rows to build up a document. Once these tables are built (and filled), we convert them into Aspose tables and generate a PDF document.
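
For context, here is a minimal sketch of that conversion step. GenericTable and GenericRow stand in for our own wrapper classes and are simplified here; the Aspose calls are the same ones shown in the code further down this thread:

Aspose.Pdf.Table ToAsposeTable(GenericTable source, Aspose.Pdf.Section section)
{
    // Create an Aspose table bound to the target section
    Aspose.Pdf.Table asposeTable = new Aspose.Pdf.Table(section);

    foreach (GenericRow sourceRow in source.Rows)
    {
        // Every generic row becomes an Aspose row...
        Aspose.Pdf.Row asposeRow = asposeTable.Rows.Add();

        // ...and every generic cell value becomes an Aspose cell
        foreach (string cellValue in sourceRow.Values)
        {
            asposeRow.Cells.Add(cellValue);
        }
    }

    return asposeTable;
}

// The converted table is then added to the section before saving the document:
// _mainSection.Paragraphs.Add(ToAsposeTable(genericTable, _mainSection));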

This setup was fairly successful until we began testing with larger tables and started noticing the memory usage. When we start converting our generic tables to Aspose tables, the memory used frankly explodes (up into the 800 MB to 1 GB range). Of course this forces the computer to swap memory to the hard drive, which grinds the whole process to a halt.

We have tried the Direct-To-File mode, but although memory usage stays around 400 MB (!), it is way too slow.

At the moment, creating a simple document with a table of 7000 rows takes about 7-8 minutes (memory usage 500-700 MB), but anything above 10000 rows grinds to a halt due to the memory requirements.

To me these amounts of memory usage seem abnormal. Can Aspose explain why so much memory is consumed, and whether this is expected behaviour? And what is the best way to use Aspose.Pdf if you need to make a document with a table of 90000 or more rows?

Kind regards

Robbert,

Hello Robbert,

Please make sure you are using the latest version of Aspose.Pdf, 3.9.0.0. If the issue still persists, please share the code/project that you are using so that we can test the issue at our end. We apologize for your inconvenience.

FYI: In order to prevent users other than Aspose staff from accessing the attachment, you can mark this thread as private.

Well, we are using 3.8.0.0, and looking at the release notes of 3.9.0.0 I don't see any mention of changes related to memory usage. To be thorough I will test with the 3.9.0.0 dll, although I do not expect any different results.

You ask for our code/project, but sharing that is going to be extremely difficult. Perhaps I can post a snippet of the conversion code where we go from generic tables to Aspose tables, but I still don't see how this could cause such an enormous amount of memory usage.

Anyway, what we would like to know is this: is it at all possible to create a PDF document with a table of, let's say, 8 columns and 90000 rows (or more)? And how would you need to go about it in order to do it most efficiently? Surely Aspose can answer that without having to look at my code.

Kind regards

Robbert,

OK, so I've tested the 3.9.0.0 dll and there is no improvement.

Also, to get more grip on the issue, I've disabled some of my wrapper code, so now all it does is create an Aspose table, generate 100000 rows and add 8 cells to each row.

This is the actual code:

Aspose.Pdf.Table tempTable = new Aspose.Pdf.Table(_mainSection);
for (int i = 0; i < 100000; i++)
{
    // Add a row, then fill it with 8 plain-text cells
    Aspose.Pdf.Row row = tempTable.Rows.Add();
    for (int x = 0; x < 8; x++)
    {
        // "inhoud rij" is Dutch for "content row"
        row.Cells.Add("inhoud rij:" + i.ToString() + " cell:" + x.ToString());
    }
}
_mainSection.Paragraphs.Add(tempTable);

Excuse my Dutch.

Anyway, just before I enter the loop, the process as viewed in Task Manager occupies about 70 MB, mainly due to a DataSet which holds the source data. As soon as it starts to build the table, the memory usage goes up to 700-900 MB, effectively grinding my PC to a halt as it starts to swap memory to the hard drive. Curiously, the CPU never goes over 50%, but I suppose that's due to my dual core. However, when it starts to swap, the CPU drops to 0 and only briefly peaks (10%-20%) every once in a while, which is logical I suppose. After about half an hour there is still no result, and as I do have other things to do, I terminate the process.
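
For completeness, a minimal sketch of how these numbers could be logged from code instead of read off Task Manager (standard .NET diagnostics calls, nothing Aspose-specific; the working set will not match the Task Manager figure exactly):

// Run this before and after building the table to see the difference
long managedBytes = GC.GetTotalMemory(false);
long workingSetBytes = System.Diagnostics.Process.GetCurrentProcess().WorkingSet64;
Console.WriteLine("Managed heap: {0} MB, working set: {1} MB",
    managedBytes / (1024 * 1024),
    workingSetBytes / (1024 * 1024));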

It's frightening to think what this would do to a webserver, and we have serious doubts whether Aspose is the way to go for us if this kind of memory consumption is considered normal operation for Aspose.Pdf.

Kind regards

Robbert,

Hello Robbert,

Thanks for considering Aspose.

I have tested the scenario and can reproduce the same problem. We are looking into the details of this matter and will keep you updated on the status of the correction. We apologize for your inconvenience.

Hi Robbert,

It is a known issue that memory usage is high when processing very large tables. Our developers are working on it and have made some progress; this is the main focus of our current work. We will send you an update to test once it is ready. Thanks.

Best regards.