[SciPy-Dev] Regarding taking up project ideas and GSoC 2015
Ralf Gommers
ralf.gommers at gmail.com
Wed Mar 25 15:19:05 EDT 2015
On Wed, Mar 25, 2015 at 6:59 PM, Maniteja Nandana <
maniteja.modesty067 at gmail.com> wrote:
> Hi everyone,
>
> I wanted to get some feedback on the application format and whether the
> mentioning of methods, API and other packages is necessary in the
> application or would it be preferable to provide a link to the Wiki page
> which contains that information.
>
Your proposal is already a lot more detailed than other proposals, so I
suggest at least not making it any longer. Moving some of the background
content to the wiki (or keeping it there) and linking to it from your
proposal would be even better.
> I would also update the timeline as early as possible, after I refine the
> ideas. It would also be great to have any other feedback.
>
Your timeline is still empty; it's important to fill that in asap. It's
easier to comment on a draft and improve it than to suggest something from
scratch. There are a number of things that have been suggested that you
could put in (and a few I just thought of):
- write set of univariate test functions with known first and higher order
derivatives
- same exercise for multivariate test functions
- define desired broadcasting behavior and implement
- refactor numdifftools.core.Derivative
- finalize API in a document
- integrate module into Scipy
- replace usages of numpy.diff with new scipy.diff functionality within
Scipy
- (bonus points for at the end):
- write a tutorial section about scipy.diff
- write a nice set of benchmarks
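The first list item can be made concrete with a small sketch (hypothetical; the class name and interface are illustrative only, not part of the proposal):

```python
import math

# Hypothetical sketch of a univariate test function that carries its own
# analytical derivatives, usable for validating numerical differentiators.
class ExpTestFunction:
    """f(x) = exp(2*x); every derivative is known in closed form."""

    def __call__(self, x):
        return math.exp(2 * x)

    def derivative(self, x, n=1):
        # The n-th derivative of exp(2*x) is 2**n * exp(2*x).
        return 2 ** n * math.exp(2 * x)
```

A numerical scheme's output at x can then be checked against `derivative(x, n)` with tight tolerances.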
Cheers,
Ralf
> The link to my proposal is :
>
>
> http://www.google-melange.com/gsoc/proposal/review/org/google/gsoc2015/inspiremaniteja/5629499534213120
>
>
> https://github.com/maniteja123/GSoC/wiki/Proposal:-add-finite-difference-numerical-derivatives-as-%60%60scipy.diff%60%60
>
> Cheers,
> Maniteja
>
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>
>
> On Mon, Mar 23, 2015 at 6:42 AM, Maniteja Nandana <
> maniteja.modesty067 at gmail.com> wrote:
>
>> Hi everyone,
>> I was thinking it would be nice to put forward my ideas regarding the
>> implementation of the package.
>>
>> Thanks to Per Brodtkorb for the feedback.
>>
>> On Thu, Mar 19, 2015 at 7:29 PM, <Per.Brodtkorb at ffi.no> wrote:
>>
>>> Hi,
>>>
>>>
>>>
>>> For your information, I have reimplemented the approx._fprime and
>>> approx._hess code found in statsmodels and added the epsilon
>>> extrapolation method of Wynn. You can see the result here:
>>>
>>>
>>> https://github.com/pbrod/numdifftools/blob/master/numdifftools/nd_cstep.py
>>>
>>>
>>>
>> This is wonderful. The main aim now is to find a way to determine whether
>> the function is analytic, which is necessary for the complex-step method to
>> work. Though differentiability is one of the main requirements for
>> analyticity, it would be really great to hear any other suggestions.
>>
>> I have also compared the accuracy and runtimes for the different
>>> alternatives here:
>>>
>>>
>>> https://github.com/pbrod/numdifftools/blob/master/numdifftools/run_benchmark.py
>>>
>>>
>>>
>> Thanks for the information. This will help me better understand the
>> pros and cons of the various methods.
>>
>>>
>>>
>>> Personally I like the class interface better than the functional one
>>> because you can pass the resulting object as function to other
>>> methods/functions and these functions/methods do not need to know what it
>>> does behind the scenes or what options are used. This simple use case is
>>> exemplified here:
>>>
>>>
>>>
>>> >>> g = lambda x: 1./x
>>>
>>> >>> dg = Derivative(g, **options)
>>>
>>> >>> my_plot(dg)
>>>
>>> >>> my_plot(g)
>>>
>>>
>>>
>>> In order to do this with a functional interface one could wrap it like
>>> this:
>>>
>>>
>>>
>>> >>> dg2 = lambda x: fprime(g, x, **options)
>>>
>>> >>> my_plot(dg2)
>>>
>>>
>>>
>>> If you like the one-liner that the function gives, you could call the
>>> Derivative class like this:
>>>
>>>
>>>
>>> >>> Derivative(g, **options)(x)
>>>
>>>
>>>
>>> This is very similar to the functional way:
>>>
>>> >>> fprime(g, x, **options)
>>>
>>
>> This is a really sound example of using classes. I agree that classes
>> are better than functions with many arguments, and the object would also
>> be reusable for other evaluations.
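The pattern under discussion can be shown end to end with a toy central-difference Derivative (a sketch only; the real numdifftools class does far more):

```python
class Derivative:
    """Toy callable wrapper: second-order central difference of f."""

    def __init__(self, f, h=1e-6):
        self.f = f
        self.h = h

    def __call__(self, x):
        # (f(x+h) - f(x-h)) / (2h) approximates f'(x) to O(h**2).
        return (self.f(x + self.h) - self.f(x - self.h)) / (2 * self.h)

g = lambda x: 1.0 / x
dg = Derivative(g)   # dg is itself a callable of x
# dg can now be handed to any routine expecting a plain function,
# exactly the my_plot(dg) use case above; dg(2.0) is close to -0.25.
```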
>>
>>>
>>>
>>> Another argument for having it as a class is that a function will be
>>> large and
>>>
>>> “large functions are where classes go to hide
>>> <http://mikeebert.tumblr.com/post/25998669005/large-functions-are-where-classes-go-to-hide>”.
>>> This is a quote of Uncle Bob’s that we hear frequently in the third and
>>> fourth Clean Coders <http://www.cleancoders.com/> episodes. He states
>>> that when a function starts to get big it's most likely doing too much: a
>>> function should do one thing only and do that one thing well. Those extra
>>> responsibilities that we try to cram into a long function (aka method) can
>>> be extracted out into separate classes or functions.
>>>
>>>
>>>
>>> The implementation in
>>> https://github.com/pbrod/numdifftools/blob/master/numdifftools/nd_cstep.py
>>> is an attempt to do this.
>>>
>>>
>>>
>>> For the use case where n>=1 and the Richardson/Romberg extrapolation
>>> method, I propose to factor this out in a separate class e.g. :
>>>
>>> >>> class NDerivative(object):
>>>
>>> ...     def __init__(self, f, n=1, method='central', order=2,
>>> ...                  **options):
>>>
>>>
>>>
>>> It is very difficult to guarantee a certain accuracy for derivatives
>>> from finite differences. In order to get error-estimates for the
>>> derivatives one must do several function evaluations. In my experience
>>> with numdifftools it is very difficult to know exactly which step-size is
>>> best. Setting it too large or too small are equally bad and difficult to
>>> know in advance. Usually there is a very limited window of useful
>>> step-sizes which can be used for extrapolating the evaluated differences to
>>> a better final result. The best step-size can often be found around
>>> (10*eps)**(1./s)*maximum(log1p(abs(x)), 0.1) where s depends on the method
>>> and derivative order. Thus one cannot improve the results indefinitely by
>>> adding more terms. With finite differences you can hope the chosen sampling
>>> scheme gives you reasonable values and error-estimates, but many times, you
>>> just have to accept what you get.
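The step-size rule quoted above can be written out directly; here s=3 is used as a plausible value for a central first derivative, but that choice is an assumption:

```python
import math

def default_step(x, s=3):
    """Heuristic best step size, (10*eps)**(1/s) * max(log1p(|x|), 0.1),
    as quoted in the text; s depends on the method and derivative order."""
    eps = 2.220446049250313e-16  # IEEE double-precision machine epsilon
    return (10 * eps) ** (1.0 / s) * max(math.log1p(abs(x)), 0.1)
```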
>>>
>>>
>>>
>>> Regarding the proposed API I wonder how useful the input arguments
>>> epsabs, epsrel will be?
>>>
>> I was considering letting the user control the absolute and relative
>> errors of the derivative, but now it seems better to let the methods
>> take care of it.
>>
>>> I also wonder how one can compute the outputs abserr_round,
>>> abserr_truncate accurately?
>>>
>> This idea came from the implementation in this
>> <https://github.com/ampl/gsl/blob/master/deriv/deriv.c#L59> function. I
>> am not sure how accurate the errors would be, but I suppose this is
>> possible to implement.
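A simplified sketch of such error estimates for a central difference is below; the half-step comparison for the truncation term and the cancellation bound for the round-off term are loose illustrations of the idea, not the actual GSL algorithm:

```python
import math

def central_diff_with_errors(f, x, h):
    """Central difference plus rough round-off / truncation estimates."""
    eps = 2.220446049250313e-16  # IEEE double-precision machine epsilon
    r = (f(x + h) - f(x - h)) / (2 * h)
    # Truncation error ~ O(h**2): estimate it by comparing against a
    # half-step evaluation, whose truncation error is ~4x smaller.
    r_half = (f(x + h / 2) - f(x - h / 2)) / h
    abserr_truncate = abs(r_half - r)
    # Round-off error from cancellation in the numerator, ~ eps*|f|/h.
    abserr_round = eps * (abs(f(x + h)) + abs(f(x - h))) / (2 * h)
    return r, abserr_round, abserr_truncate
```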
>>
>>>
>>>
>>>
>>>
>>> Best regards
>>>
>>> *Per A. Brodtkorb*
>>>
>>>
>> Regarding the API, after some discussion, the class implementation would
>> be something like:
>>
>> class Derivative:
>>     def __init__(self, f, h=None, method='central', full_output=False)
>>     def __call__(self, x, *args, **kwds)
>>
>> class Gradient:
>>     def __init__(self, f, h=None, method='central', full_output=False)
>>     def __call__(self, x, *args, **kwds)
>>
>> class Jacobian:
>>     def __init__(self, f, h=None, method='central', full_output=False)
>>     def __call__(self, x, *args, **kwds)
>>
>> class Hessian:
>>     def __init__(self, f, h=None, method='central', full_output=False)
>>     def __call__(self, x, *args, **kwds)
>>
>> class NDerivative:
>>     def __init__(self, f, n=1, h=None, method='central',
>>                  full_output=False, **options)
>>     def __call__(self, x, *args, **kwds)
>>
>> where options could be
>>
>> options = dict(order=2, romberg_terms=2)
>>
>>
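To make the signatures above concrete, a minimal runnable version of the Derivative case might look like this (a toy central-difference sketch; the h=None branch uses the heuristic step rule from the discussion, with s=3 assumed):

```python
import math

class Derivative:
    """Sketch of the proposed API: callable object for f'(x)."""

    def __init__(self, f, h=None, method='central', full_output=False):
        self.f = f
        self.h = h
        self.method = method
        self.full_output = full_output

    def __call__(self, x, *args, **kwds):
        eps = 2.220446049250313e-16  # IEEE double machine epsilon
        if self.h is None:
            # Heuristic step rule from the discussion, s=3 assumed.
            h = (10 * eps) ** (1.0 / 3) * max(math.log1p(abs(x)), 0.1)
        else:
            h = self.h
        df = (self.f(x + h, *args, **kwds)
              - self.f(x - h, *args, **kwds)) / (2 * h)
        if self.full_output:
            return df, {'success': True, 'nfev': 2}
        return df
```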
>> I would like to hear opinions on this implementation, where the main
>> issues are:
>>
>> 1. Whether the h=None default should mean the best step size, found
>> around (10*eps)**(1./s)*maximum(log1p(abs(x)), 0.1) where s depends on
>> the method and derivative order, or a StepGenerator based on the
>> epsilon algorithm of Wynn.
>> 2. Whether *args and **kwds should go in __init__ or __call__. Per's
>> preference was __call__, since that makes these objects compatible with
>> scipy.optimize.minimize(fun, x0, args=(), method=None, jac=None,
>> hess=None, ...), where args are passed both to the function and to
>> jac/hess if they are supplied.
>> 3. Are the input arguments for __init__ sufficient?
>> 4. What should we compute and return for full_output=True? I was
>> thinking of the following fields:
>>
>>    x : ndarray
>>        solution array
>>    success : bool
>>        flag indicating whether the derivative was calculated successfully
>>    message : str
>>        describes the cause of the error, if one occurred
>>    nfev : int
>>        number of function evaluations
>>    abserr_round : float
>>        absolute value of the round-off error, if applicable
>>    abserr_truncate : float
>>        absolute value of the truncation error, if applicable
>>
>> It would be great to have any other opinions and suggestions on this.
>>
>> Cheers,
>>
>> Maniteja.
>>