On Mon, 06 May 2013, Sebastian Berg wrote:
> > if you care to tune it up/extend, then I could fire it up again on that
> > box (which doesn't do anything else ATM AFAIK). Since the majority of the
> > time is spent actually building it (did it with ccache though), it would
> > be neat if you came up with more benchmarks to run which you think could
> > be interesting/important.
> I think this is pretty cool! It would probably be a while until there are
> many tests, but if you or someone else could set such a thing up, it could
> slowly grow as larger code changes are made?
That is the idea, but it would be nice to gather such simple benchmark tests. If you could hint at the NumPy functionality you think is especially worth benchmarking (I know -- there are a lot of things which could be benchmarked), that would be a nice starting point: just list the functionality/functions you consider of primary interest, and whether each is worth testing across different types or just as a gross estimate (e.g. for a selection of types in a loop).

As for myself -- I guess I will add fancy indexing and slicing tests.

Adding them is quite easy: have a look at
https://github.com/yarikoptic/numpy-vbench/blob/master/vb_reduce.py
which is actually a bit more cumbersome because it runs the benchmarks for different types. This one is more obvious:
https://github.com/yarikoptic/numpy-vbench/blob/master/vb_io.py

-- 
Yaroslav O. Halchenko, Ph.D.
http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
Senior Research Associate, Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834                       Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik
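P.S. For anyone who wants a feel for what such a micro-benchmark measures before wiring it into the vbench files above, here is a rough standalone sketch (plain stdlib timeit, not the vbench API) timing fancy indexing and slicing across a couple of dtypes, in the spirit of the per-type loop in vb_reduce.py; the array sizes and dtype selection are just illustrative choices, not anything the suite prescribes:

```python
import timeit
import numpy as np  # the operations being benchmarked

def bench(stmt, setup, repeat=3, number=100):
    """Return the best observed time per single run, in microseconds."""
    times = timeit.repeat(stmt, setup=setup, repeat=repeat, number=number)
    return min(times) / number * 1e6

# Loop over a selection of dtypes, as vb_reduce.py does for its benchmarks
for dtype in ('int32', 'float64'):
    setup = ("import numpy as np; "
             "a = np.arange(100000, dtype=np.%s); "
             "idx = np.random.randint(0, 100000, 10000)" % dtype)
    fancy = bench("a[idx]", setup)    # fancy (integer-array) indexing
    sliced = bench("a[::10]", setup)  # basic slicing (creates a view)
    print("%-8s fancy: %8.2f us  slice: %8.2f us" % (dtype, fancy, sliced))
```

A real vbench benchmark wraps the same stmt/setup pair in a Benchmark object so results get tracked per-commit, but the measurement idea is the same.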