[Numpy-discussion] Looking for people interested in helping with Python compiler to LLVM

Dag Sverre Seljebotn d.s.seljebotn at astro.uio.no
Tue Mar 20 15:44:17 EDT 2012


Sorry, forgot to CC the list on this. Lines starting with a single greater-than are Francesc's; my replies are the unquoted lines.
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.

Dag Sverre Seljebotn <d.s.seljebotn at astro.uio.no> wrote:



Francesc Alted <francesc at continuum.io> wrote:

>On Mar 20, 2012, at 12:49 PM, mark florisson wrote:
>>> Cython and Numba certainly overlap. However, Cython requires:
>>> 
>>> 1) learning another language
>>> 2) creating an extension module --- loading bit-code files and
>>> dynamically executing (even on a different machine from the one
>>> that initially created them) can be a powerful alternative for
>>> run-time compilation and distribution of code.
>>> 
>>> These aren't show-stoppers, obviously. But I think some users
>>> would prefer an even simpler approach to getting fast code than
>>> Cython (which currently doesn't do enough type inference and
>>> requires building a dlopen extension module).
>> 
>> Dag and I have been discussing this at PyCon, and here is my take
>> on it (at this moment :)).
>> 
>> Definitely, if you can avoid Cython then that is easier and more
>> desirable in many ways. So perhaps we can create a third project
>> called X (I'm not very creative, maybe ArrayExprOpt), that takes an
>> abstract syntax tree in a rather simple form, performs code
>> optimizations such as rewriting loops with array accesses into
>> vector expressions, fusing vector expressions and loops, etc., and
>> spits out a transformed AST containing these optimizations. If
>> runtime information is given, such as actual shape and stride
>> information, the transformations could figure out there and then
>> whether to do things like collapsing, axis swapping, blocking (as
>> in, introducing more axes or loops to retain discontiguous blocks
>> in the cache), blocked memory copies to contiguous chunks, etc. The
>> AST could then also say whether the final expressions are
>> vectorizable. Part of this functionality is already in numpy's
>> nditer, except that this would be implicit and do more (and
>> hopefully with minimal overhead).
>> 
>> So numba, Cython and maybe numexpr could use the functionality,
>> simply by building the AST from Python and converting back (if
>> necessary) to their own ASTs. As such, the AST optimizer would be
>> only part of any (runtime) compiler's pipeline, and it should be
>> flexible enough to retain any information (metadata regarding
>> actual types, control flow information, etc.) provided by the
>> original AST. It would not do control flow analysis, type inference
>> or promotion, etc., but only deal with abstract types like
>> integers, reals and arrays (C, Fortran, or partly contiguous or
>> strided). It would not deal with objects, but would allow inserting
>> nodes like UnreorderableNode and SideEffectNode wrapping parts of
>> the original AST. In short, it should be as easy as possible to
>> convert from an original AST to this project's AST and back again
>> afterwards.
>
>I think this is a very interesting project, and certainly projects
>like numba can benefit from it. So, in order for us to have an idea
>of what you are after, can we assume that your project (call it X)
>would be a kind of compiler optimizer, and that the produced,
>optimized code could then be fed into numba for optimized LLVM code
>generation (which, in turn, can run on top of CPUs or GPUs or a
>combination of them)? Is that correct?

I think so. Another way of thinking about it is that it is a reimplementation of the logic found in the (good, but closed-source) Fortran 90 compilers, packaged as a reusable component for inclusion in various compilers.

Various C++ metaprogramming libraries (like Blitz++) are similar too.
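
To make the "reusable component" idea a bit more concrete, here is a rough, purely hypothetical sketch of the kind of interchange AST Mark describes above; every name in it is made up for illustration and none of it is an existing API in Cython, numba or numexpr:

# Hypothetical interchange nodes: a frontend (Cython, numba, numexpr, ...)
# would build these from its own AST, hand them to the optimizer, and
# convert the rewritten tree back for its own code generation.
from collections import namedtuple

ArrayType = namedtuple("ArrayType", ["dtype", "ndim", "contig"])        # contig: "C", "F" or "strided"
ArrayRef  = namedtuple("ArrayRef",  ["name", "type", "shape", "strides"])  # shape/strides: optional runtime info
BinOp     = namedtuple("BinOp",     ["op", "lhs", "rhs"])
Transpose = namedtuple("Transpose", ["operand"])
SideEffectNode = namedtuple("SideEffectNode", ["original_ast"])         # opaque wrapper, never reordered

# "a + a.T", with runtime shape/stride information attached so the
# optimizer can choose an iteration order and a blocking strategy:
a = ArrayRef("a", ArrayType("float64", 2, "C"), shape=(1000, 1000), strides=(8000, 8))
expr = BinOp("+", a, Transpose(a))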

>
>Given that my interpretation above is correct, it is a bit more
>difficult for me to see how your X project could be of benefit for
>numexpr. In fact, I actually see this the other way round: once the
>optimizer has discovered the vectorizable parts, it could go one step
>further and generate code that uses numexpr automatically (call this
>vectorization through numexpr). Is this what you mean, or am I
>missing something?

No. I think in some ways this is a competitor to numexpr -- you would gut out the middle of numexpr and keep the frontend and backend, but use this to optimize iteration order and blocking strategies.
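
(For reference, by "frontend" I just mean the part of numexpr that parses an expression string into its internal program, and by "the middle" the blocked evaluation loop that then runs that program over the operands. A plain sketch of current numexpr usage, nothing new:)

import numpy as np
import numexpr as ne

a = np.random.rand(1000, 1000)
b = np.random.rand(1000, 1000)

# The frontend turns the string into numexpr's internal program; the
# evaluation engine then runs it over the operands in cache-sized blocks.
result = ne.evaluate("2*a + 3*b")
assert np.allclose(result, 2*a + 3*b)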

I think the goal is for higher performance than what I understand numexpr can provide (in some cases, not all!). For instance, can numexpr deal well with

a + a.T

where a is a C-contiguous array? Any numpy-like iteration order will not work well here; one needs to use higher-dimensional (e.g. 2D) blocking, not 1D blocking.

(If numexpr can already do this then great; the task might then reduce to refactoring numexpr so that Cython and numba can use the same logic.)
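
To illustrate what I mean by 2D blocking, here is a rough pure-NumPy sketch (the function name and block size are made up, and a real implementation would emit the blocked loops as generated code rather than go through Python-level slicing):

import numpy as np

def transpose_sum_blocked(a, block=64):
    # Compute a + a.T tile by tile: the (i, j) tile of a and the matching
    # (j, i) tile (i.e. the (i, j) tile of a.T) are small enough to stay
    # in cache at the same time, which a single flat loop cannot ensure.
    n = a.shape[0]
    out = np.empty_like(a)
    for i in range(0, n, block):
        for j in range(0, n, block):
            out[i:i+block, j:j+block] = (a[i:i+block, j:j+block]
                                         + a[j:j+block, i:i+block].T)
    return out

a = np.random.rand(1000, 1000)
assert np.allclose(transpose_sum_blocked(a), a + a.T)

A single flat loop over a walks a.T with a stride of a full row, so one of the two operands keeps falling out of cache; the tiles above keep both operands resident at once.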

Dag

>
>> As the project matures many optimizations may be added that deal with
>> all sorts of loop restructuring and ways to efficiently utilize the
>> cache as well as enable vectorization and possibly parallelism.
>> Perhaps it could even generate a different AST depending on whether
>> execution targets the CPU or the GPU (with optionally available
>> information such as cache sizes, GPU shared/local memory sizes, etc).
>> 
>> Seeing that this would be part of my master's dissertation, my
>> supervisor would require me to write the code, so at least until
>> August I think I would have to write (at least the bulk of) this.
>> Otherwise I can also make other parts of my dissertation's project
>> more prominent to make up for it. Anyway, my question is: is there
>> interest from at least the numba and numexpr projects? (If code can
>> be transformed into vector operations, it makes sense to use numexpr
>> for that; I'm not sure what numba's interest in that is.)
>
>I'm definitely interested in the numexpr part. It is just that I'm
>still struggling to see the big picture on this. But the general idea
>is really appealing.
>
>Thanks,
>
>-- Francesc Alted

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.


