[Numpy-discussion] automatic differentiation with PyAutoDiff

Olivier Grisel olivier.grisel at ensta.org
Thu Jun 14 10:42:53 EDT 2012


2012/6/14 James Bergstra <bergstrj at iro.umontreal.ca>:
> On Thu, Jun 14, 2012 at 4:00 AM, Olivier Grisel
> <olivier.grisel at ensta.org> wrote:
>> 2012/6/13 James Bergstra <bergstrj at iro.umontreal.ca>:
>>> Further to the recent discussion on lazy evaluation & numba, I moved
>>> what I was doing into a new project:
>>>
>>> PyAutoDiff:
>>> https://github.com/jaberg/pyautodiff
>>>
>>> It currently works by executing CPython bytecode with a numpy-aware
>>> engine that builds a symbolic expression graph with Theano... so you
>>> can do for example:
>>>
>>>>>> import autodiff, numpy as np
>>>>>> autodiff.fmin_l_bfgs_b(lambda x: (x + 1) ** 2, [np.zeros(())])
>>>
>>> ... and you'll see `[array(-1.0)]` printed out.
>>>
>>> In the future, I think it should be able to export the
>>> gradient-computing function as bytecode, which could then be optimized
>>> by e.g. numba or a theano bytecode front-end. For now it just compiles
>>> and runs the Theano graph that it built.
>>>
>>> It's still pretty rough (you'll see if you look at the code!) but I'm
>>> excited about it.
>>
>> Very interesting. Would it be possible to use bytecode introspection
>> to recover the computation and display a symbolic representation of
>> an arbitrary python + numpy expression?
>>
>> E.g. something along the lines of:
>>
>>>>> g = autodiff.gradient(lambda x: (x + 1) ** 2, [np.zeros(())])
>>>>> print g
>> g(x) = 2 * x + 2
>>>>> g(np.arange(3))
>> array([ 2.,  4.,  6.])
>>
>> --
>> Olivier
>> http://twitter.com/ogrisel - http://github.com/ogrisel
>
> So... almost?
>
> I just hacked up this gradient function to see what Theano could
> print out, and the first thing that happened (after my own mistakes
> were sorted out) was an error: the lambda expression was defined to
> work on a 0-d array, but then you evaluated g on a vector. Was that
> part of the test? If so, I'm not sure it's a good idea; I'm assuming
> it was a cut-and-paste oversight and moving on....

Indeed, my bad. I wrote that email in a hurry while waiting in line to
board a plane, still on the airport wifi...

> I settled on the following (from
> https://github.com/jaberg/pyautodiff/blob/master/autodiff/tests/test_gradient.py):
> ```
> import numpy as np
> from autodiff import Gradient
>
> def test_basic():
>    g = Gradient(lambda x: ((x + 1) ** 2).sum(), [np.zeros(3)])
>    print g
>    print g(np.arange(3))
> ```
>
> The output is ... well... ugly but correct:
> Elemwise{Composite{[mul(i0, add(i1, i2))]}}(TensorConstant{(1,) of 2.0}, TensorConstant{(1,) of 1.0}, <TensorType(float64, vector)>)
> [array([ 2.,  4.,  6.])]

Indeed, it's a bit hard for a human to parse :) Decoded by hand,
though, that Composite is just 2 * (1 + x), which is the expected
gradient of ((x + 1) ** 2).sum().

> So with some effort on pretty-printing I'm confident this could
> work, at least for simple examples; non-trivial examples are always
> a challenge. One option might be to convert the internal symbolic
> graph to sympy?

Indeed, that would be great, as sympy already has excellent math
expression rendering.
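
For instance, sympy can already do the differentiation and
pretty-printing for the toy example above; converting the Theano graph
into sympy expressions would be the missing piece. A rough sketch of
the target output, using plain sympy rather than autodiff:

```
import sympy

x = sympy.Symbol('x')
# symbolic gradient of the test function f(x) = (x + 1) ** 2
grad = sympy.diff((x + 1) ** 2, x)
print sympy.pretty(grad)  # prints "2*x + 2" (typically with a unicode dot)
```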

An alternative would be to output MathML or something similar that
could be rendered by the MathJax module of the IPython notebook.
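
Again just a sketch, assuming the expression has been converted to
sympy first (`Math` is IPython's generic display helper, nothing
specific to autodiff):

```
import sympy
from IPython.display import Math

x = sympy.Symbol('x')
grad = sympy.diff((x + 1) ** 2, x)
# sympy.latex(grad) gives '2 x + 2'; Math() hands the LaTeX string to
# MathJax for rendering in the notebook
Math(sympy.latex(grad))
```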

-- 
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel


