Hello Stefan,

Thursday, May 11, 2006, 5:37:11 PM, you wrote:
Admittedly, the largest was only about 1M. Otherwise, the benchmarks would take too long to run, especially on ET. I changed bench.py to use longer strings now; that should not make a difference in most tests, but it should give us better numbers for tree copying and serialization. You can also now pass the options -l and -L (large or LARGE trees).
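For reference, a serialization timing in the spirit of these tostring benchmarks boils down to something like the sketch below (the element count, text length and number of passes are arbitrary guesses here, not the values bench.py actually uses):

    # Rough sketch of a tostring timing; sizes are arbitrary, not bench.py's.
    import time
    from lxml import etree

    def build_tree(children=10000, repeat=10):
        root = etree.Element("root")
        text = "some reasonably long text content " * repeat
        for _ in range(children):
            etree.SubElement(root, "child").text = text
        return root

    def time_tostring(tree, encoding, passes=3):
        # Report the best of several passes, like the "best of (...)" column.
        best = None
        for _ in range(passes):
            start = time.time()
            etree.tostring(tree, encoding=encoding)
            elapsed = time.time() - start
            if best is None or elapsed < best:
                best = elapsed
        return best

    root = build_tree()
    for encoding in ("UTF-8", "UTF-16"):
        print("tostring %-6s: %10.1f msec/pass"
              % (encoding, 1000 * time_tostring(root, encoding)))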
Anyway, it can't be related to Python. Python just gets a char* and a size, and can then happily allocate its final buffer and memcpy the data into it. No appending at all.
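Just to illustrate that pattern (this is not lxml's actual code, only the same one-shot idea shown at the Python level): given a pointer and a length, ctypes.string_at() allocates the final string object once and copies the bytes into it in a single step:

    # Illustration only, not lxml's code path: a char* plus a size becomes a
    # Python string in one allocation and one copy; no appending involved.
    import ctypes

    buf = ctypes.create_string_buffer(b"<root>some serialized XML</root>")
    size = len(buf.value)

    # One call: allocate `size` bytes for the final string and memcpy into it.
    data = ctypes.string_at(ctypes.addressof(buf), size)
    print(data)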
Maybe it's libxml2 then, but I really wouldn't know why...
If you want, you can run the modified bench script again, and if you have enough RAM, you can pass the -L option to see if that makes a difference.

Sure, anything I can help with. I've run the tests with "-L"; they took quite a while to run and even crashed at some point. Since I've been in a hurry lately, I did not have time to see why, but the results are attached. Note that some test results were much slower than ET and cET. For example:
lxe: tostring_utf16 (SA T3 )  80626.3903 msec/pass, best of ( 81157.4800 80631.1504 80626.3903 )
cET: tostring_utf16 (SA T3 )   3305.6618 msec/pass, best of (  3305.6618  3332.7984  3310.7507 )
ET : tostring_utf16 (SA T3 )   3413.7482 msec/pass, best of (  3418.9271  3413.7482  3415.6650 )
lxe: tostring_utf8  (UA T3 )  37834.8396 msec/pass, best of ( 37834.8396 37970.8700 37908.7146 )
cET: tostring_utf8  (UA T3 )   2880.5753 msec/pass, best of (  2880.5753  2886.5763  2885.0215 )
ET : tostring_utf8  (UA T3 )   2981.1059 msec/pass, best of (  3000.1362  2981.1059  2988.5129 )

The server is at your disposal if you want to use it.

-- 
Best regards,
 Steve                          mailto:howe@carcass.dhs.org