I am impressed with your quick and detailed response. Thank you.
As you may infer, efficient processing of large documents is very important for us. My team has come up with a document and some representative operations. I was wondering if it would be possible to obtain some metrics around this document. I do realize that I am asking for a lot in terms of a pre-sales support question.
Here are the details of the document:
The document is attached. It has the following items:
a) 150 tables
b) 150 paragraphs with text
c) 150 lists with 3 list items each
Here are representative operations for the purposes of benchmarking performance:
1) Find every table and write it to standard out
2) Find every paragraph and write it to standard out
3) Find every list item (not just each list, but every individual list item) and write it to standard out (see the sketch just after this list)
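For concreteness, here is a minimal sketch of those three operations written against the plain DOM API that ships with Java 1.4. The element names "table", "p", and "li" are assumptions on my part, since the real schema of the attached document will differ, and calls into your library would replace the DOM lookups:

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class TraversalBenchmark {

    public static void main(String[] args) throws Exception {
        // Parse the attached document (path given on the command line).
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(args[0]);

        // Serializer used to print each matched node to standard out.
        Transformer serializer = TransformerFactory.newInstance().newTransformer();
        serializer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");

        dump(doc.getElementsByTagName("table"), serializer); // 1) every table
        dump(doc.getElementsByTagName("p"), serializer);     // 2) every paragraph
        dump(doc.getElementsByTagName("li"), serializer);    // 3) every list item
    }

    // Writes each node in the list to standard out, one per line.
    private static void dump(NodeList nodes, Transformer serializer)
            throws Exception {
        for (int i = 0; i < nodes.getLength(); i++) {
            serializer.transform(new DOMSource(nodes.item(i)),
                    new StreamResult(System.out));
            System.out.println();
        }
    }
}

(No generics or enhanced for loops, to stay within the 1.4 baseline.)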
We would love to know the following metrics:
i) What were the peak and average memory numbers?
ii) What was the peak and average CPU utilization?
iii) How much time did the entire operation take?
Of course ii) and iii) will be hardware dependent, so they would have to be qualified based on the hardware. My baseline software environment is Sun Java 1.4 on Windows.
When processing large documents, i) is of the utmost importance so that the application does not run out of memory. ii) comes next, and iii) is further down the list.
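To be clear about how we would capture these: metric iii) and a rough version of i) can be measured from inside the JVM with nothing beyond the standard 1.4 APIs, along the lines of the outline below; metric ii) would have to come from an external tool such as the Windows Performance Monitor, since the 1.4 runtime does not expose CPU usage. The traversal calls here are placeholders:

public class MetricsHarness {

    public static void main(String[] args) throws Exception {
        Runtime rt = Runtime.getRuntime();
        long start = System.currentTimeMillis();

        // ... run the three traversal operations here ...

        // A single sample only approximates the peak; a background
        // thread sampling every few milliseconds would do better.
        long usedHeap = rt.totalMemory() - rt.freeMemory();
        long elapsed = System.currentTimeMillis() - start;

        System.out.println("elapsed ms:      " + elapsed);
        System.out.println("used heap bytes: " + usedHeap);
    }
}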
Now, if you were to compare the performance of your Java library against simple XML processing with a DOM parser, you would have some benchmarks that would interest a lot of people.
It would definitely interest me.
Good luck on your end of June release.