Hi. I noticed that when multiplying two matrices of type Float32, the result is Float64:

In [103]: a = NA.ones((2,2), NA.Float32)
In [104]: b = NA.ones((2,2), NA.Float32)
In [105]: c = NA.matrixmultiply(a, b)
In [106]: c.type()
Out[106]: Float64

Since the matrices I'm going to multiply in practice are quite big, I'd like to do the operation in Float32. Otherwise this is what I get:

Traceback (most recent call last):
  File "/home/basso/work/python/port/apps/pcaheads.py", line 141, in ?
    pc = NA.array(NA.matrixmultiply(cent, c), NA.Float32)
  File "/home/basso/usr//lib/python/numarray/numarraycore.py", line 1150, in dot
    return ufunc.innerproduct(array1, _gen.swapaxes(array2, 1, 2))
  File "/home/basso/usr//lib/python/numarray/ufunc.py", line 2047, in innerproduct
    r = a.__class__(shape=adots+bdots, type=rtype)
MemoryError

Any suggestion (apart from doing the operation one column at a time)?

thanks
On Thu, 2004-06-24 at 06:14, Curzio Basso wrote:
Hi.
I noticed that when multiplying two matrices of type Float32, the result is Float64:
In [103]: a = NA.ones((2,2), NA.Float32)
In [104]: b = NA.ones((2,2), NA.Float32)
In [105]: c = NA.matrixmultiply(a, b)
In [106]: c.type()
Out[106]: Float64
Since the matrices I'm going to multiply in practice are quite big, I'd like to do the operation in Float32. Otherwise this is what I get:
Traceback (most recent call last):
  File "/home/basso/work/python/port/apps/pcaheads.py", line 141, in ?
    pc = NA.array(NA.matrixmultiply(cent, c), NA.Float32)
  File "/home/basso/usr//lib/python/numarray/numarraycore.py", line 1150, in dot
    return ufunc.innerproduct(array1, _gen.swapaxes(array2, 1, 2))
  File "/home/basso/usr//lib/python/numarray/ufunc.py", line 2047, in innerproduct
    r = a.__class__(shape=adots+bdots, type=rtype)
MemoryError
Any suggestion (apart from doing the operation one column at a time)?
I modified dot() and innerproduct() this morning to return Float32 and Complex32 for like inputs. This is in CVS now. numarray 1.0 is dragging out, but will nevertheless be released relatively soon.

I'm curious about what your array dimensions are. When I implemented matrixmultiply for numarray, I was operating under the assumption that no one would be multiplying truly huge arrays, because it's an O(N^3) algorithm.

Regards,
Todd
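Until a fixed release is available, the MemoryError itself can be sidestepped by multiplying a block of rows at a time, so the library's temporary only ever covers a small slice of the result. A minimal sketch, written with modern NumPy names as a stand-in for the numarray API (the function name `blocked_matmul_f32` and the block size are illustrative assumptions, not part of either library):

```python
import numpy as np  # stand-in for numarray in this sketch

def blocked_matmul_f32(a, b, block=256):
    """Multiply two Float32 matrices one row block at a time,
    so the extra memory used per step stays small."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.empty((m, n), dtype=np.float32)
    for i in range(0, m, block):
        # each block's product is small, no matter how large m is
        out[i:i + block] = np.dot(a[i:i + block], b)
    return out
```

This is just a tunable generalization of the "one column at a time" workaround mentioned above, trading one big temporary for many small ones.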
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/numpy-discussion

Todd Miller <jmiller@stsci.edu>
On 24 Jun 2004, Todd Miller wrote:
On Thu, 2004-06-24 at 06:14, Curzio Basso wrote:
I noticed that when multiplying two matrices of type Float32, the result is Float64:
I modified dot() and innerproduct() this morning to return Float32 and Complex32 for like inputs.
I wonder whether it would be worth providing an option to accumulate the sums using Float64 and to convert to Float32 before storing them in an array. I suspect that one reason this returned Float64 is that it is very easy to run into precision/roundoff problems in single-precision matrix multiplies. You could avoid that by using doubles for the sum while still returning the result as a single.

Rick
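The roundoff problem Rick describes is easy to demonstrate with a small sequential sum, which is exactly what each entry of a matrix product is. A sketch using modern NumPy scalar types (the specific element count and value are illustrative assumptions):

```python
import numpy as np

# 100,000 copies of float32(0.1); the exact sum of the *stored*
# values is 100000 * float64(float32(0.1))
x = np.full(100_000, 0.1, dtype=np.float32)
exact = 100_000 * np.float64(np.float32(0.1))

s32 = np.float32(0.0)   # running sum kept in single precision
s64 = np.float64(0.0)   # running sum kept in double precision
for v in x:             # sequential adds, like an inner-product loop
    s32 = np.float32(s32 + v)
    s64 += np.float64(v)

# the single-precision accumulator drifts far more from the exact value
err32 = abs(np.float64(s32) - exact)
err64 = abs(s64 - exact)
```

The inputs and the final result are single precision in both cases; only the accumulator differs, which is the option being proposed.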
Rick White wrote:
On 24 Jun 2004, Todd Miller wrote:
On Thu, 2004-06-24 at 06:14, Curzio Basso wrote:
I noticed that when multiplying two matrices of type Float32, the result is Float64:
I modified dot() and innerproduct() this morning to return Float32 and Complex32 for like inputs.
I wonder whether it would be worth providing an option to accumulate the sums using Float64 and to convert to Float32 before storing them in an array. I suspect that one reason this returned Float64 is that it is very easy to run into precision/roundoff problems in single-precision matrix multiplies. You could avoid that by using doubles for the sum while still returning the result as a single.

Rick
I definitely agree. I'm pretty certain the reason it was done with double-precision floats is the sensitivity to roundoff issues with matrix operations. I think Rick is right, though, that only the intermediate calculations need to be done in double precision, and that doesn't require the whole output array to be kept that way.

Perry
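Perry's point, that the sums can be double precision without the whole output array ever existing in double precision, can be sketched as a row-blocked multiply. This uses modern NumPy names as a stand-in; the function name, block size, and the choice to promote `b` once are all illustrative assumptions:

```python
import numpy as np

def matmul_f32_f64sums(a, b, block=128):
    """Float32 in, Float32 out, with every sum formed in Float64.
    Only one row block of the product is ever held in double precision."""
    m, n = a.shape[0], b.shape[1]
    out = np.empty((m, n), dtype=np.float32)
    b64 = b.astype(np.float64)  # promote the shared operand once
    for i in range(0, m, block):
        # double-precision product for this row block only,
        # rounded to single precision on assignment into out
        out[i:i + block] = np.dot(a[i:i + block].astype(np.float64), b64)
    return out
```

The double-precision temporaries here are one copy of `b` plus one row block of the result, rather than the full m-by-n product.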
On Thu, 2004-06-24 at 11:08, Perry Greenfield wrote:
Rick White wrote:
On 24 Jun 2004, Todd Miller wrote:
On Thu, 2004-06-24 at 06:14, Curzio Basso wrote:
I noticed that when multiplying two matrices of type Float32, the result is Float64:
I modified dot() and innerproduct() this morning to return Float32 and Complex32 for like inputs.
I wonder whether it would be worth providing an option to accumulate the sums using Float64 and to convert to Float32 before storing them in an array. I suspect that one reason this returned Float64 is that it is very easy to run into precision/roundoff problems in single-precision matrix multiplies. You could avoid that by using doubles for the sum while still returning the result as a single.

Rick
I definitely agree. I'm pretty certain the reason it was done with double precision floats is the sensitivity to roundoff issues with matrix operations. I think Rick is right though that only intermediate calculations need to be done in double precision and that doesn't require the whole output array to be kept that way.
Perry
OK. I implemented intermediate sums using Float64 and Complex64, but single-precision inputs will still result in single-precision outputs.

Todd
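The resulting contract, that like-typed single-precision inputs give a single-precision product, is the same behavior modern NumPy exposes today, so it can be checked with a two-line example (using np.dot in place of numarray's matrixmultiply):

```python
import numpy as np

a = np.ones((2, 2), dtype=np.float32)
b = np.ones((2, 2), dtype=np.float32)
c = np.dot(a, b)
# like inputs, like output: no silent promotion to double
# c.dtype is float32, and each entry is 1*1 + 1*1 = 2
```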
On Thu, 2004-06-24 at 10:30, Rick White wrote:
On 24 Jun 2004, Todd Miller wrote:
On Thu, 2004-06-24 at 06:14, Curzio Basso wrote:
I noticed that when multiplying two matrices of type Float32, the result is Float64:
I modified dot() and innerproduct() this morning to return Float32 and Complex32 for like inputs.
I wonder whether it would be worth providing an option to accumulate the sums using Float64 and to convert to Float32 before storing them in an array. I suspect that one reason this returned Float64 is that it is very easy to run into precision/roundoff problems in single-precision matrix multiplies. You could avoid that by using doubles for the sum while still returning the result as a single.

Rick
OK. I implemented intermediate sums using Float64 and Complex64, but single-precision inputs will still result in single-precision outputs.

Todd
participants (4)

- Curzio Basso
- Perry Greenfield
- Rick White
- Todd Miller