On Sat, Jul 24, 2021, at 17:58, Juan Nunez-Iglesias wrote:
I'm very glad to hear from you, Josh 😊, but I'm 100% convinced that removing the automatic rescaling is the right path forward. Stéfan, "floats between [0, 1]" is easy enough to explain, except when it isn't (signed filters), or when we automatically rescale int32s in [0, 255] to floats in [0, ~2**(-23)], or uint16s in [0, 4095] to floats in [0, ~2**(-4)], etc.
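[For context, the dtype-range rescaling Juan describes can be sketched in plain NumPy. This is a hypothetical stand-in (`as_float` is not scikit-image's actual implementation): it assumes integers are divided by the largest magnitude representable in their dtype.]

```python
import numpy as np

def as_float(arr):
    # Sketch of dtype-range rescaling: divide by the dtype's
    # largest-magnitude value, so the full dtype range maps into [-1, 1].
    info = np.iinfo(arr.dtype)
    return arr.astype(np.float64) / max(abs(info.min), info.max)

# 12-bit data stored in a uint16 only reaches ~4095/65535 ≈ 0.0625 (2**-4):
img12 = np.array([0, 4095], dtype=np.uint16)
print(as_float(img12).max())

# 8-bit data stored in an int32 collapses to floats near 2**-23:
img8 = np.array([0, 255], dtype=np.int32)
print(as_float(img8).max())
```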

That's why I proposed not automatically scaling integer arrays, but erroring instead. 
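[A minimal sketch of that proposal, assuming a hypothetical `check_float` guard rather than any existing scikit-image function: integer input raises instead of being silently rescaled, pushing the conversion choice back to the user.]

```python
import numpy as np

def check_float(image):
    # Hypothetical guard: reject integer arrays instead of rescaling them.
    image = np.asarray(image)
    if np.issubdtype(image.dtype, np.integer):
        raise ValueError(
            "Integer images are not rescaled automatically; convert "
            "explicitly, e.g. image / np.iinfo(image.dtype).max"
        )
    return image.astype(np.float64, copy=False)
```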

I also don't understand what you mean by "except when it isn't (signed filters)".

Can you motivate more carefully why our current approach is problematic and insufficient in some cases?