We have a large PDF document (140 MB) and are trying to convert it to PDF/A-3b. The system crashes (with no exception) and we see a memory leak of about 3 GB in this process (it normally consumes about 1 GB of RAM; after the crash, 4 GB; after a restart, 1 GB again).
The server the program runs on has about 10 GB of free RAM. Could it be that this is not sufficient?
Would you please make sure that you are using the latest version of the API? If the issue still occurs, please share your sample PDF document with us along with a sample code snippet. We will test the scenario in our environment and address it accordingly. You can upload your document to Dropbox or Google Drive and share the link with us.
Thank you @asad.ali for the response.
Unfortunately I cannot share the document as it contains sensitive information.
I hope you can help me despite that.
We are using version 20.9. Here is the code:
image.png (22.1 KB)
I am certain that neither of the marked lines of code is invoked (they do not appear in the log). I also cannot find any problem in the Windows Event Log.
Interestingly, the same scenario works when I debug the code on my local machine. The problem occurs only when the code is deployed to a server, even though I have massively increased the RAM there, so memory should no longer be an issue.
The RAM installed on the server seems sufficient to process files of that size. However, could you please make sure that the application is built in x64 mode? Also, please try loading the document directly from a file path instead of a file stream.
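The suggestion above can be sketched roughly as follows. This is a minimal C# example assuming Aspose.PDF for .NET; the file paths are placeholders, and the `PdfFormat.PDF_A_3B` and `ConvertErrorAction.Delete` values are taken from the public Aspose.Pdf API. Loading by path (rather than from a stream) lets the library work against the file on disk instead of materializing the whole 140 MB document in a memory stream.

```csharp
using Aspose.Pdf;

class PdfA3bConversion
{
    static void Main()
    {
        // Placeholder paths -- replace with your actual locations.
        string inputPath = @"C:\data\large-input.pdf";
        string logPath = @"C:\data\conversion-log.xml";
        string outputPath = @"C:\data\output-pdfa3b.pdf";

        // Load directly from the file path instead of a FileStream,
        // as suggested above, and dispose the Document when done so
        // native resources are released promptly.
        using (var document = new Document(inputPath))
        {
            // Convert to PDF/A-3b; non-convertible elements are deleted
            // and conversion issues are written to the XML log.
            document.Convert(logPath, PdfFormat.PDF_A_3B, ConvertErrorAction.Delete);
            document.Save(outputPath);
        }
    }
}
```

Building the host application as x64 matters here because a 32-bit process is capped at a few GB of address space, which a 140 MB PDF can exhaust during conversion even when the server itself has ample free RAM.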
We are afraid that we cannot comment much on the issue without replicating it in our environment, and for that purpose we need a sample file. In case you cannot share it publicly, you can share it in a private message, which will help us in the investigation. You can send a private message by clicking on the username and pressing the blue Message button.