Dot + add operation
Is it possible to add a method that performs a dot product and adds the result to an existing matrix in a single operation? Something like C = dot_add(A, B, C), equivalent to C += A @ B. This behavior is natively provided by the BLAS *gemm primitive.

The goal is to reduce peak memory consumption. During the computation of C += A @ B, the maximum allocated memory is twice the size of C. Using *gemm to add the result directly, the maximum memory consumption is less than 1.5x the size of C. This difference is significant for large matrices.

Is anyone interested in this?
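For context, a minimal sketch of the status quo this proposal targets (array names and sizes are illustrative): with plain NumPy, the product A @ B is first materialized as a temporary array the same size as C and only then added in place, which is where the 2x peak comes from.

    import numpy as np

    n = 4096
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)
    C = np.zeros((n, n))

    # A @ B allocates a temporary array the same size as C, which is then
    # added into C in place -- peak memory is roughly 2x the size of C
    # (on top of A and B themselves).
    C += A @ B

    # The proposed fused call would instead look like:
    # C = dot_add(A, B, C)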
On Wed, Mar 31, 2021 at 2:35 AM Guillaume Bethouart <guillaume.bethouart@eshard.com> wrote:
Hi Guillaume,

Such fused operations cannot easily be done with NumPy alone, and it does not make sense to add separate APIs for that purpose, because there are so many combinations of function calls that one might want to fuse. Instead, Numba, Pythran, or numexpr can add this to some extent for NumPy code. E.g. search for "loop fusion" in the Numba docs.

Cheers,
Ralf
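For illustration, a minimal sketch of the kind of loop fusion the Numba docs describe, assuming a simple elementwise update (as noted later in the thread, this does not extend to fusing the matrix product itself):

    import numpy as np
    from numba import njit, prange

    @njit(parallel=True)
    def fused_update(c, a, b):
        # Elementwise multiply-accumulate written as one explicit loop:
        # no temporary array the size of c is ever allocated.
        for i in prange(c.shape[0]):
            c[i] += a[i] * b[i]

    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)
    c = np.zeros_like(a)
    fused_update(c, a, b)  # c is updated in place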
Or just use SciPy's get_blas_funcs <https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.blas.get_b...> to access *gemm, which directly exposes this function: https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.blas.dgemm...

Kevin
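A minimal sketch of that approach (matrix sizes and the Fortran ordering are illustrative; gemm accumulates into C's own buffer only when C is Fortran-ordered and of the matching dtype):

    import numpy as np
    from scipy.linalg import get_blas_funcs

    n = 2048
    A = np.asfortranarray(np.random.rand(n, n))
    B = np.asfortranarray(np.random.rand(n, n))
    C = np.asfortranarray(np.random.rand(n, n))

    # gemm computes C <- alpha * A @ B + beta * C; with beta=1.0 and
    # overwrite_c=True the result is accumulated into C's existing buffer,
    # so no temporary the size of C is allocated.
    gemm = get_blas_funcs("gemm", (A, B, C))
    C = gemm(alpha=1.0, a=A, b=B, beta=1.0, c=C, overwrite_c=True)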
Thanks for the quick reply. I was not aware that this kind of "fused" function does not fit the NumPy API; I understand the point.

FYI, Numba is not able to simplify this kind of computation (C += A @ B), nor is numexpr, which does not support the dot product. I did not test Pythran.

Thus, the only solution is to use the BLAS functions through SciPy, as recalled by Kevin. I'll play a bit with transposition and alignment issues ...

Regards,
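A hedged sketch of the transposition point alluded to above, for the default C-ordered NumPy arrays: their transposes are Fortran-ordered views, so gemm can still accumulate in place via C.T = B.T @ A.T + C.T (float64 arrays assumed, so dgemm is the matching variant).

    import numpy as np
    from scipy.linalg.blas import dgemm

    n = 1024
    A = np.random.rand(n, n)  # C-ordered, float64
    B = np.random.rand(n, n)
    C = np.random.rand(n, n)

    # B.T, A.T and C.T are Fortran-ordered views, so dgemm can write the
    # result straight into C's buffer: C.T <- B.T @ A.T + C.T, which is
    # the same as C <- A @ B + C, with no temporary the size of C.
    dgemm(alpha=1.0, a=B.T, b=A.T, beta=1.0, c=C.T, overwrite_c=True)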
participants (3)
- Guillaume Bethouart
- Kevin Sheppard
- Ralf Gommers