[SciPy-Dev] updating = 'plus'/'worst' keywords added to differential_evolution
Andrew Nelson
andyfaff at gmail.com
Tue Apr 6 23:38:45 EDT 2021
I've been investigating an article by Tanabe that examines two approaches for
reducing the number of function evaluations required by differential
evolution. I've been evaluating those approaches, and wanted to float
possible modifications with the list first.
Tanabe, Ryoji. "Revisiting Population Models in Differential Evolution on a
Limited Budget of Evaluations." In International Conference on Parallel
Problem Solving from Nature, pp. 257-272. Springer, Cham, 2020.
A link to the article is at
https://login.easychair.org/publications/preprint_download/MtB7.
With ``'worst'`` (algorithm 4), only the ``floor(M * alpha)`` worst solution
vectors are updated in each generation. From the article:
"Ali [1] demonstrated that a better individual is rarely replaced with its
trial vector. In other words, the better the individual xi (i ∈ {1, ...,
μ}) is, the more difficult it is to generate ui such that f(ui) ≤ f(xi). In
contrast, a worse individual is frequently replaced with its trial vector.
Based on this observation, Ali proposed the worst improvement model that
allows only the λ worst individuals to generate their λ trial vectors. The
number of function evaluations could possibly be reduced by not generating
the remaining μ − λ trial vectors that are unlikely to outperform their μ −
λ parent individuals."
With ``'plus'`` (algorithm 3), ``floor(M * alpha)`` randomly selected
population members are used to create a trial population. The ``M`` best
solution vectors from the union of the existing and trial populations then go
forward into the next iteration. From the article:
"The elitist (μ + λ) model is general in the field of evolutionary
algorithms, including genetic algorithm and ES. For each iteration, a set
of λ trial vectors Q = {u1 , ..., uλ } are generated (lines 3–5 in
Algorithm 3). For each u, the target vector is randomly selected from the
population. Then, the best μ individuals in P ∪ Q survive to the next
iteration (line 6 in Algorithm 3). Unlike other evolutionary algorithms,
the (μ + λ) model has not received much attention in the DE community. Only
a few previous studies (e.g., [37,39,49]) considered the (μ + λ) model. As
pointed out in [37], the synchronous model may discard a trial vector that
performs worse than its parent but performs better than other individuals
in the population. The (μ + λ) model addresses such an issue."
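And a corresponding sketch of one generation of the elitist (μ + λ) 'plus'
model, under the same illustrative assumptions (population scaled to [0, 1],
plain DE/rand/1/bin trial generation); again, none of the names here are
SciPy API.

import numpy as np

def plus_generation(func, pop, energies, alpha=0.5, mutation=0.7,
                    recombination=0.9, rng=None):
    # pop: population of shape (M, n), scaled to lie in [0, 1]
    # energies: objective values for pop, shape (M,)
    rng = np.random.default_rng() if rng is None else rng
    M, n = pop.shape
    n_trials = int(np.floor(M * alpha))
    trials = np.empty((n_trials, n))
    for k in range(n_trials):
        # the target vector for each trial is chosen at random
        i = rng.integers(M)
        r0, r1, r2 = pop[rng.choice(np.delete(np.arange(M), i), size=3,
                                    replace=False)]
        mutant = np.clip(r0 + mutation * (r1 - r2), 0, 1)
        cross = rng.random(n) < recombination
        cross[rng.integers(n)] = True
        trials[k] = np.where(cross, mutant, pop[i])
    trial_energies = np.array([func(t) for t in trials])
    # the best M members of the union P ∪ Q survive to the next generation
    union = np.vstack([pop, trials])
    union_energies = np.concatenate([energies, trial_energies])
    best = np.argsort(union_energies)[:M]
    return union[best], union_energies[best]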
Both ``'worst'`` and ``'plus'`` can reduce the total number of function
evaluations required, but at the increased risk of not finding a global
minimum.
_____________________________________
Dr. Andrew Nelson
_____________________________________