[Numpy-discussion] [OT] Starving CPUs article featured in IEEE's ComputingNow portal

David Warde-Farley dwf at cs.toronto.edu
Fri Mar 19 18:18:17 EDT 2010


On 19-Mar-10, at 1:13 PM, Anne Archibald wrote:

> I'm not knocking numpy; it does (almost) the best it can. (I'm not
> sure of the optimality of the order in which ufuncs are executed; I
> think some optimizations there are possible.) But a language designed
> from scratch for vector calculations could certainly compile
> expressions into a form that would save a lot of memory accesses,
> particularly if an optimizer combined many lines of code. I've
> actually thought about whether such a thing could be done in python; I
> think the way to do it would be to build expression objects from
> variable objects, then have a single "apply" function that fed values
> in to all the variables.
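
The scheme Anne describes — expression objects built from variable objects, evaluated later by a single "apply" that feeds in values — can be sketched in a few lines of plain Python. All class and method names here are illustrative, not from any real library:

```python
# Minimal sketch of deferred expression evaluation: operators build a
# tree instead of computing, and apply() walks the tree with an
# environment mapping variable names to values.

class Expr:
    def __add__(self, other):
        return Op('+', self, other)
    def __mul__(self, other):
        return Op('*', self, other)

class Var(Expr):
    def __init__(self, name):
        self.name = name
    def apply(self, env):
        return env[self.name]

class Op(Expr):
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def apply(self, env):
        a = self.left.apply(env)
        b = self.right.apply(env)
        return a + b if self.op == '+' else a * b

x, y = Var('x'), Var('y')
expr = x * y + x                       # builds a tree; nothing computed yet
print(expr.apply({'x': 3, 'y': 4}))    # 3*4 + 3 = 15
```

Because the whole expression is visible as a tree before evaluation, an optimizer could fuse operations or reorder them to cut memory traffic — which is exactly the opening a from-scratch vector language (or Theano, below) exploits.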

Hey Anne,

Some folks across town from you at U de M have built just such a
thing. :)

http://deeplearning.net/software/theano/

It does all that, plus automatic differentiation, detection and  
correction of numerical instabilities, etc.

Probably the most amazing thing about it is that with recent versions,
you basically flip a switch and it will run on an available
CUDA-capable Nvidia GPU instead of the CPU. I'll admit, when James
Bergstra initially told me about this plan to make it possible to
transparently switch to running stuff on the GPU, I thought it was so
ambitious that it would never happen. Then it did...
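
(For the curious: the "switch" is Theano's device-selection flag. Per Theano's documentation of this era, an unmodified script can be pointed at the GPU via an environment variable — exact flag values may differ across versions:

```shell
# Same script, CPU vs. GPU, selected entirely by THEANO_FLAGS.
THEANO_FLAGS=device=cpu python my_script.py
THEANO_FLAGS=device=gpu,floatX=float32 python my_script.py
```

`my_script.py` here is just a placeholder for any Theano program.)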

David


