[XML-SIG] Optimising/strategies for DOM/XSL lookup/parsing

Tony McDonald tony.mcdonald@ncl.ac.uk
Thu, 18 Mar 1999 22:33:00 +0000

Thanks to the help from this list, I've managed to use Dieter's XSL parser
and the DOM routines to search for and extract element 'chunks' from my XML
documents. Many thanks for that!

One thing that I've noticed is that the initial DOM 'parsing' is slow
relative to the XSL pattern matching. On my iMac 266, DOM parsing a 76k XML
file took 4-5 seconds (utils.FileReader), whilst the XSL pattern matching
took 1.8 seconds (tp = Parser(pattern), topics =
tp.select(reader.document)). By the way, is there any way of telling how
much memory a DOM tree is occupying?
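There is no exact answer, but a rough lower bound can be had by walking the tree and summing per-node sizes. A minimal sketch using the standard minidom (the node classes of the XSL package may differ, so treat this as an estimate only):

```python
import sys
from xml.dom import minidom

def dom_size(node, seen=None):
    """Roughly estimate the memory a DOM subtree occupies, in bytes.

    Walks the node and its children, summing sys.getsizeof for each
    node object. This undercounts strings and internal dicts, so
    treat the result as a lower bound, not an exact figure.
    """
    if seen is None:
        seen = set()
    if id(node) in seen:
        return 0
    seen.add(id(node))
    total = sys.getsizeof(node)
    for child in node.childNodes:
        total += dom_size(child, seen)
    return total

doc = minidom.parseString("<topics><topic>DOM</topic><topic>XSL</topic></topics>")
print(dom_size(doc))
```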

In practice I expect large numbers of XSL queries and very few DOM
creations. However, something like 140 documents need to be 'available' for
XSL querying and subsequent transformation into HTML/RTF, and there will be
times when an XSL query will definitely run across all 140 documents.

Would one strategy be to load all 140 documents into memory at startup, do
the DOM parsing up front, and then, when an XSL query comes along, 'route'
it to the appropriate DOM tree (now in memory)?
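That routing can be as simple as a dictionary keyed by document name. A minimal sketch using the standard minidom (substitute the reader and XSL parser classes actually in use; `DomCache` and the filenames are illustrative, not from the original post):

```python
from xml.dom import minidom

class DomCache:
    """Parse each XML document once and keep the DOM tree in memory.

    Later queries are 'routed' to the cached tree by name, so the
    expensive parse happens only at startup (or on first use).
    """
    def __init__(self):
        self._trees = {}

    def load(self, name, source):
        # parseString here for the sketch; with real files you would
        # use minidom.parse(path) instead
        self._trees[name] = minidom.parseString(source)

    def get(self, name):
        return self._trees[name]

    def all_trees(self):
        # For the "query across all documents" case: hand back every
        # cached tree so one pattern can be run against each in turn.
        return self._trees.values()

cache = DomCache()
cache.load("doc1.xml", "<topics><topic>caching</topic></topics>")
tree = cache.get("doc1.xml")
print(tree.documentElement.tagName)  # → topics
```

Whether 140 parsed trees fit comfortably in memory is exactly the sizing question above, so it is worth measuring one representative document before committing to this.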

If that isn't feasible, is it possible to 'save' a DOM tree to an external
file and re-read it once a relevant XSL query is ready to be acted upon?
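Pickling the tree is one way to sketch this, though whether a given DOM implementation's node objects pickle cleanly, and whether unpickling actually beats reparsing, would need measuring; very deep trees can also hit the recursion limit. A sketch with the standard minidom (the filename is illustrative):

```python
import pickle
from xml.dom import minidom

def save_tree(doc, path):
    """Serialize a parsed DOM tree to disk so it can later be
    reloaded without reparsing the XML source."""
    with open(path, "wb") as f:
        pickle.dump(doc, f)

def load_tree(path):
    with open(path, "rb") as f:
        return pickle.load(f)

doc = minidom.parseString("<topics><topic>persistence</topic></topics>")
save_tree(doc, "doc1.pickle")
restored = load_tree("doc1.pickle")
print(restored.documentElement.tagName)  # → topics
```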

Now, I'm not going to be serving my XML docs from my iMac!...but I'd still
like to limit the DOM parsing as much as I can.

Am I being naive and missing something obvious here?

Any thoughts would be appreciated,