I am filling a table with a custom function (we cannot use mail merge because of old templates with form fields, and the inserted rows are of different types: the first row may hold item information in 5 cells, the second a description, …).
I store my different row types in a Hashtable, "allRows". One row contains one or more cells, each of which can contain text and one or more form fields. I populate my table like this:
Row rowTmp = (Row)allRows[(string)element["ROWTYPE"]]; // not important -> fast, gets the original row from the hashtable
Row newRow = (Row)rowTmp.Clone(true); // SLOW!
newRow = updateDataInRow(newRow, element);
tableTemp.InsertAfter(newRow, (Node)tablesStartNodes[tableKey]); // tablesStartNodes ... where I insert
Can I speed up the Clone function with some workaround? The problem is that the time complexity is not linear. When I create a table with 500 or 800 rows it works fine (two seconds or a bit more), but when I create a document with 2,000 or 3,000 elements, parsing becomes painfully slow (a minute or more) and the Clone function takes an ever larger share of the CPU time.
Are you sure that it is the Clone function that slows things down? A similar problem was discussed here:
and we came to the conclusion that it was indexed access that became slow as the number of rows grew. It is also significantly slower on the first indexed access after the number of rows has changed. InsertAfter also works more slowly than the AppendChild or Add functions.
So the final recommendation was to build the table first in one loop and fill in the data in another.
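A minimal sketch of that two-loop approach, reusing the objects from the snippets above (`allRows`, `tableTemp`, `updateDataInRow`); the `elements` collection stands in for whatever data source the original loop iterates over, so treat it as a placeholder:

```csharp
// Loop 1: clone and append all rows while the table is still cheap to modify.
// AppendChild avoids the reference-node lookup that InsertAfter performs,
// and we keep direct references to the new rows so that loop 2 never has to
// use indexed access into the (by then large) row collection.
List<Row> newRows = new List<Row>();
foreach (var element in elements) // 'elements' is a placeholder for the real data source
{
    Row rowTmp = (Row)allRows[(string)element["ROWTYPE"]];
    Row newRow = (Row)rowTmp.Clone(true);
    tableTemp.AppendChild(newRow);
    newRows.Add(newRow);
}

// Loop 2: fill in the data using the references collected above.
int i = 0;
foreach (var element in elements)
{
    newRows[i] = updateDataInRow(newRows[i], element);
    i++;
}
```

If the rows must end up at a specific position rather than at the end of the table, the same idea still applies: do all the structural inserts first, then do all the data filling in a second pass over saved row references.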
Check if it helps. If not, we will look into the matter further. We could even optimize it specifically for tasks like yours in a future version; you would need to provide a complete use case (test project) to start that work.
Yes, preparing the rows first did the trick (I don't know why, but it did)!