[SciPy-User] Minimizing Monte Carlo simulation function

Andrea Gavana andrea.gavana at gmail.com
Wed Sep 24 17:39:03 EDT 2014


On 24 September 2014 19:53, Matt Newville <newville at cars.uchicago.edu>
wrote:

>
>
> On Wed, Sep 24, 2014 at 10:19 AM, Sturla Molden <sturla.molden at gmail.com>
> wrote:
>
>> Matt Newville <newville at cars.uchicago.edu> wrote:
>>
>> > I think no one would disagree that scipy.optimize needs more (and
>> better)
>> > optimizers, but deprecating those that work poorly (anneal) when others
>> > that work better (basinhopping) are available seems sensible to me.
>>
>> Which optimizer works better depends on the problem.
>>
>
> Andrea tested 202 example problems from the literature, with 100 random
> starting values for each.   For the overwhelming majority (around 85%) of
> the problems, simulated annealing never found the correct solution.  That
> alone might justify being deprecated (at least in the sense of
> disapproval).   Perhaps there are flaws in anneal() that could be
> improved.   I don't know.   But it seems that it is not going to work for
> many cases.
>
> OTOH, for about 5% of the problems, anneal found the correct solution from
> more than 50% of the starting values, and for about 5% of the problems,
> anneal found the correct solution more frequently than basin hopping.  For
> 3% of the problems, simulated annealing out-performed both basin hopping
> and AMPGO.   With basin hopping included, I think it is perfectly
> reasonable to recommend basin hopping over simulated annealing.  Ideally,
> AMPGO and other routines could be added.
>
>> Simulated Annealing is a well-known algorithm and it makes sense to keep
>> it, at least for reference.
>>
>
> I think it's reasonable to expect a "fitness for purpose".  Knowing that
> it gets the correct solution in fewer than 10% of the test problems can't
> inspire great confidence in anyone.  If anneal is un-deprecated
> (re-approved?), I would suggest that its miserable track record be
> documented in the top level of scipy.optimize, where the unsuspecting user
> might otherwise see it listed as one of the few Global Optimizers in scipy,
> and be led to the mistaken belief that its results are reliable.
>
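For reference, the textbook algorithm being debated above fits in a few lines. This is a minimal illustrative sketch of classic simulated annealing with a Metropolis acceptance rule and geometric cooling — the test function, cooling rate, and step size are my own assumptions, not the defaults of scipy.optimize.anneal:

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, steps=5000,
                        step_size=0.5, seed=0):
    """Minimize a 1-D function f via Metropolis acceptance and a
    geometric cooling schedule (illustrative parameters only)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    t = t0
    for _ in range(steps):
        # Propose a random neighbour of the current point.
        x_new = x + rng.uniform(-step_size, step_size)
        f_new = f(x_new)
        # Always accept downhill moves; accept uphill moves with
        # probability exp(-delta / t), which shrinks as t cools.
        if f_new < fx or rng.random() < math.exp((fx - f_new) / t):
            x, fx = x_new, f_new
            if fx < best_fx:
                best_x, best_fx = x, fx
        t *= cooling  # geometric cooling schedule
    return best_x, best_fx

# A multimodal 1-D test function (global minimum 0 at x = 0).
f = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
xb, fb = simulated_annealing(f, x0=4.0)
```

Whether a run like this escapes the many local minima depends entirely on the cooling schedule and step size, which is exactly why results vary so much from problem to problem.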


I tend to agree with Matt. Either SA itself is a weak algorithm or the SciPy
implementation of it is; either way the practical result is the same. The
clearest demonstration is the benchmark I have set up: compared to ASA, the
SciPy implementation simply disappears. It sits alongside some other gems I
found in OpenOpt, at the very bottom of the efficiency ranking of the
optimization algorithms.
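The multi-start methodology behind that benchmark is easy to reproduce: run an optimizer from many random starting points and count how often it reaches the known global minimum. Here is a hedged sketch with a naive pattern search standing in for the real optimizers — the function, bounds, tolerance, and start count are illustrative assumptions, not the actual benchmark suite:

```python
import math
import random

def pattern_search(f, x0, step=0.5, iters=100):
    """Naive local optimizer: accept downhill moves, halve the step
    when stuck (a stand-in for the algorithms being compared)."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        moved = False
        for dx in (-step, step):
            if f(x + dx) < fx:
                x, fx = x + dx, f(x + dx)
                moved = True
        if not moved:
            step *= 0.5  # refine the search once no move improves
    return x

def success_rate(optimizer, f, f_global, bounds, n_starts=100,
                 tol=1e-2, seed=42):
    """Fraction of random starts from which `optimizer` reaches the
    known global minimum value `f_global` within `tol`."""
    rng = random.Random(seed)
    hits = sum(
        1
        for _ in range(n_starts)
        if abs(f(optimizer(f, rng.uniform(*bounds))) - f_global) < tol
    )
    return hits / n_starts

# Multimodal test function: global minimum 0 at x = 0, with local
# minima near every other integer.
f = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
rate = success_rate(pattern_search, f, f_global=0.0, bounds=(-5.0, 5.0))
```

Swapping `pattern_search` for any global optimizer gives the per-problem success percentages quoted earlier in the thread.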

Maybe the SciPy implementation could be a bridge to ASA? ASA is a very,
very good algorithm and it could bring huge value to scipy.optimize. I
haven't checked the license restrictions for ASA, but then I couldn't care
less: as long as it is not commercial, I can use it anyway. "GPL" doesn't
really apply here, as to me it only means "Gas Propano Liquefatto" (Italian
for "liquefied propane gas"), the kind of gas you put in specially adapted
cars to make them zip around.

I have uploaded the latest set of results on my benchmarks, including the
"Shuffled Complex Evolution" algorithm, which seems to be an extremely good
global optimization approach, here:

http://infinity77.net/global_optimization/

Comments and criticisms are, as usual, more than welcome.


Andrea.

"Imagination Is The Only Weapon In The War Against Reality."
http://www.infinity77.net

# ------------------------------------------------------------- #
def ask_mailing_list_support(email):

    if mention_platform_and_version() and include_sample_app():
        send_message(email)
    else:
        install_malware()
        erase_hard_drives()
# ------------------------------------------------------------- #
