Just had a query about transform.rescale (and by extension, warp), which is giving inconsistent results between different platforms. I wondered if this was a bug or was unavoidable.
Version info: numpy==1.13.0 scikit-image==0.13.1 scipy==0.19.1
Platforms: Scientific linux 7.4 and Windows 10
So, I have a raster I'm rescaling from 5 m resolution to 10 m:

In : import numpy as np
In : input_array = np.array([[ 30.7213501 ,  30.73872986,  30.77840255,  30.79360602],
...:                         [ 30.40055123,  30.39305344,  30.40674613,  30.39675925],
...:                         [ 30.70790547,  30.67351216,  30.7357262 ,  30.70840534],
...:                         [ 30.3960635 ,  30.33706349,  30.38129123,  30.34338268]])
In : from skimage.transform import rescale
In : rescale(input_array, 0.5, mode='symmetric', preserve_range=True, order=0)
Out: array([[ 30.39305344,  30.77840255],
            [ 30.33706349,  30.70840534]])
When I try the same thing on a different Linux server (same distro and version), the output is different:

Out: array([[ 30.39305344,  30.40674613],
            [ 30.33706349,  30.70840534]])
Over on Windows, I get a different answer again:

Out: array([[ 30.39305344,  30.79360602],
            [ 30.33706349,  30.34338268]])
It looks like, when downsampling with nearest-neighbour interpolation, one pixel from each 2x2 block of the input is chosen for the output raster, but which of the four is chosen varies between platforms. I stepped through the code and the divergence happens inside the _warp_fast function in _warps_cy, which is compiled Cython, so I couldn't tell exactly why each value was chosen.
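For what it's worth, my reading of the coordinate mapping (an assumption on my part, not something I've confirmed in the Cython source) is that with a 0.5 scale each output pixel centre maps back to a half-integer input coordinate, i.e. exactly halfway between two input pixels in each axis. "Nearest" is then a genuine tie, and the winner can depend on floating-point rounding that differs between compilers and platforms:

```python
import numpy as np

input_array = np.array([[30.7213501 , 30.73872986, 30.77840255, 30.79360602],
                        [30.40055123, 30.39305344, 30.40674613, 30.39675925],
                        [30.70790547, 30.67351216, 30.7357262 , 30.70840534],
                        [30.3960635 , 30.33706349, 30.38129123, 30.34338268]])

# Pixel-centre inverse mapping for a 0.5 downscale (my assumption about
# what warp does internally): output index i maps back to input
# coordinate (i + 0.5) / 0.5 - 0.5 = 2*i + 0.5 -- a half-integer, so
# the two neighbouring input pixels are equidistant and nearest-
# neighbour rounding has to break a tie.
out_idx = np.arange(2)
in_coords = (out_idx + 0.5) / 0.5 - 0.5  # half-integer coordinates

# A deterministic workaround: pick the block element explicitly with
# strided slicing instead of relying on the tie-break (the values will
# of course differ from rescale's output on some platforms).
deterministic = input_array[::2, ::2]
```

This would also explain why the outputs only ever disagree within each 2x2 block: the four candidates are all equally "nearest".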
Is this expected behaviour, or should the same value be chosen on every platform when downsampling? It's causing problems with our unit tests, as we currently need a different expected value for each platform we run them on.
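In case it helps, one way we've considered making the unit test platform-independent (just a sketch of an idea, not an official recommendation) is to assert only that each output pixel is one of the four candidates in its 2x2 input block, rather than pinning an exact value:

```python
import numpy as np

input_array = np.array([[30.7213501 , 30.73872986, 30.77840255, 30.79360602],
                        [30.40055123, 30.39305344, 30.40674613, 30.39675925],
                        [30.70790547, 30.67351216, 30.7357262 , 30.70840534],
                        [30.3960635 , 30.33706349, 30.38129123, 30.34338268]])

# one of the platform-specific rescale outputs from above
out = np.array([[30.39305344, 30.77840255],
                [30.33706349, 30.70840534]])

# view the 4x4 input as a 2x2 grid of 2x2 blocks: blocks[i, j] holds
# the four input pixels that map to output pixel (i, j)
blocks = input_array.reshape(2, 2, 2, 2).swapaxes(1, 2)

# assert only that each output pixel was drawn from its own block,
# regardless of which of the four candidates the platform picked
assert all(out[i, j] in blocks[i, j] for i in range(2) for j in range(2))
```

This passes for all three of the outputs above, so a single test could cover every platform.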
Jon Morris Software Developer
T +44 (0) 1756 799919 | www.jbarisk.com