Please have a look at Aspose.Words as well. Saving a Word document as a PDF shows the same behavior.
The sample code below should work in the project provided earlier.
try
{
    byte[] byteValue = java.util.Base64.getDecoder().decode(wordContent.getBytes());
    ByteArrayInputStream wordFile = new ByteArrayInputStream(byteValue);

    com.aspose.words.Document doc = new com.aspose.words.Document(wordFile);

    // Save the Word document as a PDF file on disk.
    String filename = "resources/" + UUID.randomUUID().toString() + "_wordtopdf.pdf";
    doc.getLayoutOptions().getRevisionOptions().setShowInBalloons(1);
    doc.save(filename, com.aspose.words.SaveFormat.PDF);

    // Reload the PDF and write it to a memory stream.
    com.aspose.pdf.Document pdfDoc = new com.aspose.pdf.Document(filename);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    pdfDoc.save(baos);
    pdfDoc.close();

    output.setBaos(baos);
    output.setFilePath(filename);
    baos.close();
    return output;
}
The doc variable does not have a close() method, since it is a com.aspose.words.Document rather than a PDF document, but doc.save(filename, com.aspose.words.SaveFormat.PDF) causes the same FD issue.
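For reference, here is a minimal sketch of a stream-only variant that could also be tested, assuming Document.save(OutputStream, int) in Aspose.Words and the InputStream constructor of com.aspose.pdf.Document are available in this version (wordContent and output are the same objects as in the snippet above). It skips the intermediate file, so any descriptor left open would have to come from the conversion itself:

{
    byte[] byteValue = java.util.Base64.getDecoder().decode(wordContent.getBytes());
    ByteArrayInputStream wordFile = new ByteArrayInputStream(byteValue);

    com.aspose.words.Document doc = new com.aspose.words.Document(wordFile);
    doc.getLayoutOptions().getRevisionOptions().setShowInBalloons(1);

    // Save the Word document as PDF straight into memory instead of a temp file.
    ByteArrayOutputStream pdfBytes = new ByteArrayOutputStream();
    doc.save(pdfBytes, com.aspose.words.SaveFormat.PDF);

    // Reload the PDF from memory and write the final stream.
    com.aspose.pdf.Document pdfDoc =
            new com.aspose.pdf.Document(new ByteArrayInputStream(pdfBytes.toByteArray()));
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    pdfDoc.save(baos);
    pdfDoc.close();

    output.setBaos(baos);
    return output;
}

If the descriptor count still grows with this variant, that would point at the conversion itself rather than the temporary PDF file.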
We have tested the scenario using the latest version of Aspose.Words for Java 22.3 on Linux Ubuntu 21.10 and have not found this issue. In our case, the /proc/PID/fd folder is completely cleared after the completion of the algorithm. Also, this issue is not reproducible for Aspose.PDF on our side. Could you provide more details on the testing methodology, the Linux distribution used, and other specifics to consider?
Have you tried running the code/project on a webserver JVM? We have our implementation running via a Java web service on IBM's web server. The server is running Red Hat Linux; I am not sure of the OS version, though.
For us the descriptors remain open until we restart the JVM hosting the EAR web service.
Can you please try this on a JVM at your end and verify? Meanwhile, I will confirm the OS details with my team and update here as soon as possible.
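To make the comparison easier, a probe like the following could be logged before and after the conversion call. This is only a sketch and assumes a JVM that exposes the com.sun.management extension and a Linux /proc filesystem; it may not be available on every IBM JVM:

import java.io.IOException;
import java.lang.management.ManagementFactory;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdProbe
{
    // Number of file descriptors currently open in this JVM, or -1 if the bean is unavailable.
    public static long openFdCount()
    {
        java.lang.management.OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        return (os instanceof UnixOperatingSystemMXBean)
                ? ((UnixOperatingSystemMXBean) os).getOpenFileDescriptorCount()
                : -1;
    }

    // Lists each open descriptor and the file it points to, via /proc/self/fd (Linux only).
    public static void dumpOpenFds() throws IOException
    {
        try (DirectoryStream<Path> fds = Files.newDirectoryStream(Paths.get("/proc/self/fd")))
        {
            for (Path fd : fds)
            {
                System.out.println(fd.getFileName() + " -> " + Files.readSymbolicLink(fd));
            }
        }
    }
}

Logging openFdCount() before and after a batch of conversions should show whether the count keeps climbing on the Red Hat web server JVM but not in your Ubuntu test.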
Thank you for your reply. We will investigate this scenario according to your recommendations using a web server and will share updated information with you in this forum thread.
Please expedite a fix for this as soon as possible. It has been nearly three months since the issue was reported, and open descriptors really choke the system once they cross a certain threshold against the ulimit setting.
We have set a high priority for this issue and will try to make the necessary corrections as soon as possible. Please accept our apologies for the inconvenience.
Unfortunately, we don’t have any updates on this issue at the moment. We will expedite the process of fixing this issue. Please accept our apologies for the inconvenience.