28 May 2009, 9:10 a.m.
I don't know anything about PIL and its implementation, but I would not be surprised if the cost is mostly due to accessing items that are not contiguous in memory and to bounds checking (to keep track of where you are in the sub-image). Conditionals inside loops often kill performance, and the actual computation (one addition per item for a naive average implementation) is negligible in this case.
This would definitely be the case for sub-images. However, coming back to my original question: how do you explain that although PIL is extremely fast on large images (2000x2000), numpy is much faster on very small images (I tested with 10x10 image files)?
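For reference, here is a minimal timing sketch of that comparison. It is a hypothetical reconstruction, since the original test code isn't shown: it assumes the averaging was done with PIL's ImageStat.Stat versus numpy's ndarray.mean, on in-memory images rather than files loaded from disk.

    import timeit

    import numpy as np
    from PIL import Image, ImageStat

    # Hypothetical benchmark: time a mean-pixel computation with PIL's
    # ImageStat versus numpy, once for a tiny image and once for a large one.
    for size in (10, 2000):
        img = Image.new("L", (size, size), 128)  # uniform grayscale test image
        arr = np.asarray(img)                    # numpy view of the same pixels

        t_pil = timeit.timeit(lambda: ImageStat.Stat(img).mean, number=100)
        t_np = timeit.timeit(lambda: arr.mean(), number=100)
        print("%dx%d: PIL %.4fs, numpy %.4fs (100 runs)" % (size, size, t_pil, t_np))

One plausible explanation for the pattern is that fixed per-call overhead (Python-level dispatch, object creation) dominates on a 10x10 image, while per-pixel throughput dominates on a 2000x2000 one.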