Runtux Blog (Posts about genetic algorithms), https://blog.runtux.com/en
Contents © 2024 <a href="mailto:rsc@runtux.com">Ralf Schlatterbeck</a>, Thu, 09 May 2024 11:29:19 GMT

- Optimizing Floating-Point Problems with Evolutionary Algorithms (https://blog.runtux.com/posts/2024/01/07/), Ralf Schlatterbeck

<p>[This is an extended abstract for a talk I'm planning]</p>
<p>Evolutionary Algorithms (EA) are a superset of genetic algorithms and
related optimization algorithms. Genetic algorithms usually work with
bits and small integer numbers.</p>
<p>There are other EAs that work directly with floating-point numbers,
among them Differential Evolution (DE) <a class="brackets" href="https://blog.runtux.com/posts/2024/01/07/#footnote-1" id="footnote-reference-1" role="doc-noteref"><span class="fn-bracket">[</span>1<span class="fn-bracket">]</span></a> <a class="brackets" href="https://blog.runtux.com/posts/2024/01/07/#footnote-2" id="footnote-reference-2" role="doc-noteref"><span class="fn-bracket">[</span>2<span class="fn-bracket">]</span></a> <a class="brackets" href="https://blog.runtux.com/posts/2024/01/07/#footnote-3" id="footnote-reference-3" role="doc-noteref"><span class="fn-bracket">[</span>3<span class="fn-bracket">]</span></a>.</p>
<p>The talk gives an introduction to optimization of floating-point
problems with DE. It uses examples from electrical engineering as well
as from optimization of actuation waveforms of inkjet printers.
Piezo-electric inkjet printers use an actuation waveform to jet drops
out of a nozzle. This waveform (among other parameters like the jetted
fluid) determines the quality of the jetted drops.</p>
<p>For the software I'm using the Python bindings <a class="reference external" href="https://github.com/schlatterbeck/pgapy">PGApy</a> <a class="brackets" href="https://blog.runtux.com/posts/2024/01/07/#footnote-4" id="footnote-reference-4" role="doc-noteref"><span class="fn-bracket">[</span>4<span class="fn-bracket">]</span></a> for the
Parallel Genetic Algorithm Package PGAPack <a class="brackets" href="https://blog.runtux.com/posts/2024/01/07/#footnote-5" id="footnote-reference-5" role="doc-noteref"><span class="fn-bracket">[</span>5<span class="fn-bracket">]</span></a>, which was originally
developed at Argonne National Laboratory. I have been maintaining both
packages for some years now. Among other features, Differential Evolution (DE)
and strategies for multi-objective optimization (using NSGA-II <a class="brackets" href="https://blog.runtux.com/posts/2024/01/07/#footnote-6" id="footnote-reference-6" role="doc-noteref"><span class="fn-bracket">[</span>6<span class="fn-bracket">]</span></a>)
were newly implemented.</p>
<aside class="footnote-list brackets">
<aside class="footnote brackets" id="footnote-1" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2024/01/07/#footnote-reference-1">1</a><span class="fn-bracket">]</span></span>
<p>Rainer Storn and Kenneth Price. Differential evolution -- a simple
and efficient adaptive scheme for global optimization over
continuous spaces. Technical Report TR-95-012, International
Computer Science Institute (ICSI), March 1995.</p>
</aside>
<aside class="footnote brackets" id="footnote-2" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2024/01/07/#footnote-reference-2">2</a><span class="fn-bracket">]</span></span>
<p>Rainer Storn and Kenneth Price. Differential evolution -- a simple
and efficient heuristic for global optimization over continuous
spaces. Journal of Global Optimization, 11(4):341–359, December 1997.</p>
</aside>
<aside class="footnote brackets" id="footnote-3" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2024/01/07/#footnote-reference-3">3</a><span class="fn-bracket">]</span></span>
<p>Kenneth V. Price, Rainer M. Storn, and Jouni A. Lampinen.
Differential Evolution: A Practical Approach to Global Optimization.
Springer, Berlin, Heidelberg, 2005.</p>
</aside>
<aside class="footnote brackets" id="footnote-4" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2024/01/07/#footnote-reference-4">4</a><span class="fn-bracket">]</span></span>
<p><a class="reference external" href="https://github.com/schlatterbeck/pgapy">PGAPy</a>: Python Wrapper for PGAPack Parallel Genetic Algorithm Library</p>
</aside>
<aside class="footnote brackets" id="footnote-5" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2024/01/07/#footnote-reference-5">5</a><span class="fn-bracket">]</span></span>
<p><a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a>, a general-purpose, data-structure-neutral, parallel genetic
algorithm library</p>
</aside>
<aside class="footnote brackets" id="footnote-6" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2024/01/07/#footnote-reference-6">6</a><span class="fn-bracket">]</span></span>
<p>Kalyanmoy Deb, Samir Agrawal, Amrit Pratap, and T. Meyarivan.
A fast elitist non-dominated sorting genetic algorithm for
multi-objective optimization: NSGA-II. In Marc Schoenauer,
Kalyanmoy Deb, Günther Rudolph, Xin Yao, Evelyne Lutton, Juan
Julian Merelo, and Hans-Paul Schwefel, editors, Parallel Problem
Solving from Nature – PPSN VI, volume 1917 of Lecture Notes in
Computer Science, pages 849–858. Springer, Paris, France, September
2000.</p>
</aside>
</aside>
Tags: documentation, english, genetic algorithms, howto, open source, optimization (https://blog.runtux.com/posts/2024/01/07/, Sun, 07 Jan 2024 17:00:00 GMT)
- Differential Evolution Illustrated (https://blog.runtux.com/posts/2023/04/07/), Ralf Schlatterbeck

<p>This post again applies to <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a> and <a class="reference external" href="https://github.com/schlatterbeck/pgapy">PGApy</a> or other Genetic
Algorithm (GA) implementations that support Differential Evolution (DE).</p>
<p>Differential Evolution (DE) <a class="brackets" href="https://blog.runtux.com/posts/2023/04/07/#footnote-1" id="footnote-reference-1" role="doc-noteref"><span class="fn-bracket">[</span>1<span class="fn-bracket">]</span></a>, <a class="brackets" href="https://blog.runtux.com/posts/2023/04/07/#footnote-2" id="footnote-reference-2" role="doc-noteref"><span class="fn-bracket">[</span>2<span class="fn-bracket">]</span></a>, <a class="brackets" href="https://blog.runtux.com/posts/2023/04/07/#footnote-3" id="footnote-reference-3" role="doc-noteref"><span class="fn-bracket">[</span>3<span class="fn-bracket">]</span></a> is a population-based
optimizer similar to other evolutionary algorithms. It is quite
powerful and implemented in <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a> (and usable in the Python wrapper
<a class="reference external" href="https://github.com/schlatterbeck/pgapy">PGAPy</a>). To illustrate some points about this algorithm I've constructed
a simple parcours in <a class="reference external" href="https://github.com/openscad/openscad">OpenSCAD</a>.</p>
<p> </p>
<img alt="/images/parcours.gif" src="https://blog.runtux.com/images/parcours.gif">
<p> </p>
<p>To test the optimizer with this parcours, the gene in the genetic
algorithm (DE calls this the parameters) consists of the X and Y
coordinates giving a position on the parcours. In the following we also
call these coordinates a <em>vector</em>, as is common in the DE literature. We
initialize the population in the region of the start of the ramp but
allow the population to leave the initialization range.</p>
<p>Note that the corners of the parcours are flat areas with the same
height. Areas like this are generally difficult for optimization
algorithms because the algorithm cannot know in which direction better
values (if available at all) can be found. This is also the reason why
we do not put the initialization range inside the flat area at the
bottom of the parcours. Note that normally one would initialize the
population over the whole area to be searched. In our case we want to
demonstrate that the algorithm is well capable of finding a solution far
from the initialization range.</p>
<p>The DE algorithm is quite simple (see, e.g., <a class="brackets" href="https://blog.runtux.com/posts/2023/04/07/#footnote-3" id="footnote-reference-4" role="doc-noteref"><span class="fn-bracket">[</span>3<span class="fn-bracket">]</span></a> p. 37-47): Each
member <span class="math">\(\mathbf x_{i,g}\)</span> of the population of the current
generation <span class="math">\(g\)</span>, where <span class="math">\(i\)</span> is the index of the population
member, is crossed over with a mutant vector <span class="math">\(\mathbf v_{i,g}\)</span>. The mutant
vector is generated by selecting three population members at random
(without replacement) from the population and combining them as follows.</p>
<div class="math">
\begin{equation*}
\mathbf v_{i,g} = \mathbf x_{r_0,g}
+ F \cdot (\mathbf x_{r_1,g} - \mathbf x_{r_2,g})
\end{equation*}
</div>
<p>As can be seen, the weighted difference of two random members of the
population is added to a third population member. The indices
<span class="math">\(r_0, r_1,\)</span> and <span class="math">\(r_2\)</span> are pairwise distinct and also different from
<span class="math">\(i\)</span>. The weight <span class="math">\(F\)</span> is a configuration parameter that is
typically <span class="math">\(0.3 \le F < 1\)</span>. This algorithm is the <em>classic</em> DE,
also often termed <code class="docutils literal"><span class="pre">de-rand-1-bin</span></code>. A variant uses the index of the <em>best</em>
individual instead of <span class="math">\(r_0\)</span> and is called <code class="docutils literal"><span class="pre">de-best-1-bin</span></code>. If more
than a single difference is added, we get further variants, e.g.
<code class="docutils literal"><span class="pre">de-rand-2-bin</span></code>. The last component of the name, <code class="docutils literal">bin</code>, refers to
uniform crossover <a class="brackets" href="https://blog.runtux.com/posts/2023/04/07/#footnote-4" id="footnote-reference-5" role="doc-noteref"><span class="fn-bracket">[</span>4<span class="fn-bracket">]</span></a>, a binomial distribution: parameters from
the mutant vector are selected with a certain probability <span class="math">\(Cr\)</span>.
Note that DE makes sure that at least one parameter from the mutant
vector is selected (otherwise the original vector <span class="math">\(\mathbf
x_{i,g}\)</span> would be unchanged).</p>
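<p>As an illustration, one generation of the mutation and crossover steps described above can be sketched in a few lines of plain Python. This is my own minimal sketch, not the PGAPack implementation; all names are made up, and selecting the better of trial and original vector is left to the caller's evaluation step.</p>

```python
import random

def de_rand_1_bin(pop, f, cr, rng=random):
    """One generation of classic DE (de-rand-1-bin), sketch only.

    pop: list of vectors (lists of floats); f: weight F; cr: crossover
    probability Cr.  Returns the list of trial vectors.
    """
    trials = []
    n = len(pop)
    dim = len(pop[0])
    for i in range(n):
        # three distinct members, all different from i (no replacement)
        r0, r1, r2 = rng.sample([j for j in range(n) if j != i], 3)
        # mutant vector: weighted difference added to a third member
        v = [pop[r0][d] + f * (pop[r1][d] - pop[r2][d]) for d in range(dim)]
        # uniform (binomial) crossover; index jrand guarantees that at
        # least one parameter is taken from the mutant vector
        jrand = rng.randrange(dim)
        trial = [v[d] if (rng.random() < cr or d == jrand) else pop[i][d]
                 for d in range(dim)]
        trials.append(trial)
    return trials
```

<p>In a full optimizer each trial vector would then be evaluated and kept only if it is at least as good as <span class="math">\(\mathbf x_{i,g}\)</span>.</p>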
<p>More details about DE and the usage with <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a> can be found in the
section <a class="reference external" href="https://pgapack.readthedocs.io/en/latest/part1.html#sec-differential-evolution">Differential Evolution</a> and in the <a class="reference external" href="https://pgapack.readthedocs.io/en/latest/part2.html#sec-mutation">Mutation</a> sections of the
<a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a> documentation.</p>
<p>To use the <a class="reference external" href="https://github.com/openscad/openscad">OpenSCAD</a> model for optimization, we convert it to a
greyscale heightmap using an <a class="reference external" href="https://fenrus75.github.io/FenrusCNCtools/javascript/stl2png.html">STL to PNG converter</a> (STL is a <a class="reference external" href="https://en.wikipedia.org/wiki/STL_(file_format)">3D
format</a> that can be generated by <a class="reference external" href="https://github.com/openscad/openscad">OpenSCAD</a>). The evaluation
function then simply returns the greyscale value at the given location
(after rounding the gene X and Y values to an integer). Values outside
the picture are returned as 0 (black). The resulting optimization run is
shown below in an animated image. The population is quite small: only 6
individuals. We clearly see that when the
differences between individuals get large (on the straight ridges of
the parcours), the optimization proceeds in larger steps. We also see
that the flat corners can take quite some time to escape from; the
algorithm slows down there. Finally, in the last corner, the cone
is climbed and the individuals converge almost to a single point. The
algorithm is stopped when the first individual returns an evaluation of
the largest greyscale value (representing white). Note that I cheated a
little: many optimization runs take longer (in particular, the
optimization can get stuck for many generations in the flat parts of the
ridge), and I selected a run that produced a good animation. So the
selected run is not the average but one of the shorter runs.</p>
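<p>The evaluation lookup itself is simple enough to sketch. Assuming the PNG heightmap has already been loaded into a 2D array of greyscale values (e.g. with Pillow's <code class="docutils literal">Image.open(...).convert('L')</code>; the loading is not shown), the evaluation function amounts to rounding the gene and indexing, with 0 (black) outside the picture:</p>

```python
def evaluate(heightmap, x, y):
    """Greyscale value at the rounded gene position; 0 (black) outside.

    heightmap is a 2D sequence of greyscale rows as could be obtained by
    loading the PNG; this sketch shows only the lookup, not the loading.
    """
    xi, yi = round(x), round(y)
    if 0 <= yi < len(heightmap) and 0 <= xi < len(heightmap[0]):
        return heightmap[yi][xi]
    return 0
```

<p>Larger greyscale values mean a higher position on the parcours, so this is a maximization problem.</p>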
<p> </p>
<img alt="/images/animated.gif" src="https://blog.runtux.com/images/animated.gif">
<p> </p>
<aside class="footnote-list brackets">
<aside class="footnote brackets" id="footnote-1" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2023/04/07/#footnote-reference-1">1</a><span class="fn-bracket">]</span></span>
<p>Rainer Storn and Kenneth Price. Differential evolution – a simple
and efficient heuristic for global optimization over continuous
spaces. <em>Journal of Global Optimization</em>, 11(4):341–359, December 1997.</p>
</aside>
<aside class="footnote brackets" id="footnote-2" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2023/04/07/#footnote-reference-2">2</a><span class="fn-bracket">]</span></span>
<p>Rainer Storn and Kenneth Price. Differential evolution – a simple
and efficient adaptive scheme for global optimization over
continuous spaces. Technical Report TR-95-012, International
Computer Science Institute (ICSI), March 1995.
<a class="reference external" href="ftp://ftp.icsi.berkeley.edu/pub/techreports/1995/tr-95-012.pdf">Available from the International Computer Science Institute</a></p>
</aside>
<aside class="footnote brackets" id="footnote-3" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span>3<span class="fn-bracket">]</span></span>
<span class="backrefs">(<a role="doc-backlink" href="https://blog.runtux.com/posts/2023/04/07/#footnote-reference-3">1</a>,<a role="doc-backlink" href="https://blog.runtux.com/posts/2023/04/07/#footnote-reference-4">2</a>)</span>
<p>Kenneth V. Price, Rainer M. Storn, and Jouni A. Lampinen.
<em>Differential Evolution: A Practical Approach to Global Optimization.</em>
Springer, Berlin, Heidelberg, 2005.</p>
</aside>
<aside class="footnote brackets" id="footnote-4" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2023/04/07/#footnote-reference-5">4</a><span class="fn-bracket">]</span></span>
<p>Gilbert Syswerda. Uniform crossover in genetic algorithms. In J.
David Schaffer, editor, Proceedings of the Third International
Conference on Genetic Algorithms (ICGA), pages 2–9. Morgan
Kaufmann, Fairfax, Virginia, June 1989.</p>
</aside>
</aside>
Tags: documentation, english, genetic algorithms, howto, open source, optimization (https://blog.runtux.com/posts/2023/04/07/, Fri, 07 Apr 2023 20:00:00 GMT)
- Notes on Premature Convergence in Genetic Algorithms (https://blog.runtux.com/posts/2023/01/06/), Ralf Schlatterbeck

<p>This post again applies to <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a> and <a class="reference external" href="https://github.com/schlatterbeck/pgapy">PGApy</a> or other Genetic
Algorithm (GA) implementations.</p>
<p>When optimizing solutions with GA, it sometimes happens that a
sub-optimal solution is found because the whole population converges
early to a part of the search space where no better solutions are found
anymore. This effect is called <a class="reference external" href="https://en.wikipedia.org/wiki/Premature_convergence">premature convergence</a>.</p>
<p>It is usually hard to detect <a class="reference external" href="https://en.wikipedia.org/wiki/Premature_convergence">premature convergence</a>; a good measure is
the mean <a class="reference external" href="https://en.wikipedia.org/wiki/Hamming_distance">hamming distance</a> between individuals. In <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a>, reporting of
the <a class="reference external" href="https://en.wikipedia.org/wiki/Hamming_distance">hamming distance</a> can be enabled with the reporting option
<code class="docutils literal">PGA_REPORT_HAMMING</code>, set with the function <code class="docutils literal">PGASetPrintOptions</code> in
<a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a> and with the <code class="docutils literal">print_options</code> constructor parameter in <a class="reference external" href="https://github.com/schlatterbeck/pgapy">PGApy</a>.
Unfortunately this is only implemented for the binary datatype.</p>
<p>One reason for the effect of <a class="reference external" href="https://en.wikipedia.org/wiki/Premature_convergence">premature convergence</a> is the use of
<a class="reference external" href="https://en.wikipedia.org/wiki/Fitness_proportionate_selection">Fitness Proportionate Selection</a> as detailed in an earlier post in
this blog <a class="brackets" href="https://blog.runtux.com/posts/2023/01/06/#footnote-1" id="footnote-reference-1" role="doc-noteref"><span class="fn-bracket">[</span>1<span class="fn-bracket">]</span></a>. If during the early stages of the search an individual
is discovered that is much better than anything found so far, the chance
is high that this individual takes over the whole population when
<a class="reference external" href="https://en.wikipedia.org/wiki/Fitness_proportionate_selection">Fitness Proportionate Selection</a> is in use, preventing any further
progress of the algorithm. The reason is that an individual that is
much better than all others gets an unreasonably high proportion of the
roulette wheel when selecting individuals for the next generation, so
all other genetic material has only a slim chance of making it into the
next generation.</p>
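<p>A tiny numeric sketch (with made-up fitness values) shows how fast the wheel is taken over by one outlier:</p>

```python
def selection_probabilities(fitness):
    """Fitness proportionate selection: each individual's share of the
    roulette wheel is its fitness divided by the total fitness."""
    total = sum(fitness)
    return [f / total for f in fitness]

# hypothetical population: one individual far better than the other nine
fitness = [100.0] + [1.0] * 9
probs = selection_probabilities(fitness)
# the outlier claims over 90% of the wheel (100/109), so nearly every
# selection for the next generation picks it
```

<p>With every draw favoring the outlier this strongly, its copies quickly crowd out all other genetic material.</p>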
<p>Another reason for premature convergence can be a small population size.
Goldberg et al. <a class="brackets" href="https://blog.runtux.com/posts/2023/01/06/#footnote-9" id="footnote-reference-2" role="doc-noteref"><span class="fn-bracket">[</span>9<span class="fn-bracket">]</span></a> give estimates of the population size for a classic
GA with a small alphabet (the number of different allele values, e.g. 0
and 1 for a binary GA) of cardinality <span class="math">\(\chi\)</span> and a problem-specific
building block size that overcomes deception, <span class="math">\(k \ll n\)</span>, where
<span class="math">\(k\)</span> is the building block size (a measure for the difficulty of
the problem) and <span class="math">\(n\)</span> is the string (gene) length. They show
that the population size should be <span class="math">\(O(m\chi^k)\)</span> with <span class="math">\(m=n/k\)</span>,
so that for a given difficulty <span class="math">\(k\)</span> the population
size is proportional to the string length <span class="math">\(n\)</span>. This result,
however, does not readily translate to problems with a large alphabet,
e.g. floating-point representations like the <code class="docutils literal">real</code> data type in
<a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a>. For floating-point representations, difficulty usually
translates to how multimodal a problem is, i.e., how many <em>peaks</em> (in
case of a maximization problem) or <em>valleys</em> (in case of a minimization
problem) the objective function has.</p>
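<p>As a quick sketch of the growth term in this sizing rule (only the <span class="math">\(m\chi^k\)</span> part; the constants hidden in the <span class="math">\(O()\)</span> are problem dependent, and the function name is my own):</p>

```python
def population_size_estimate(n, k, chi=2):
    """Goldberg-style O(m * chi**k) growth term with m = n / k.

    n: string (gene) length, k: building block size, chi: alphabet
    cardinality (2 for a binary GA).  Returns only the growth term,
    not a recommended population size.
    """
    m = n / k
    return m * chi ** k
```

<p>For a binary GA with <span class="math">\(n=60\)</span> and <span class="math">\(k=3\)</span> this gives <span class="math">\(20 \cdot 2^3 = 160\)</span>; doubling <span class="math">\(n\)</span> doubles the estimate, showing the proportionality to string length.</p>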
<p>Now if with an adequate population size <em>and</em> an appropriate selection
scheme premature convergence still occurs, there are some mechanisms
that can be used.</p>
<section id="prevention-of-duplicates">
<h2>Prevention of Duplicates</h2>
<p><a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a> implements a mechanism for preventing duplicate gene strings
(individuals). In previous implementations the computation effort was
quadratic in the population size <span class="math">\(N\)</span>, i.e. the effort was
<span class="math">\(O(N^2)\)</span> (it compared each new individual with <em>all</em> current
members of the population, once for each new individual). In the latest
versions it uses hashing for detecting duplicates, reducing the overhead
to <span class="math">\(O(N)\)</span>, a small constant overhead for each new individual.</p>
<p>For user-defined data types this means that the user has to define a
hash function for the data type in addition to the string comparison
function. For the built-in data types (binary, integer, real) this is
automatically available.</p>
<p>Prevention of duplicates works quite well for binary and integer data
types, especially if the genetic operators have a high probability of
producing duplicates. It does not work so well for the real data type,
because new individuals are rarely exactly identical to existing
individuals even when they are very close to them.</p>
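<p>The hashing idea can be sketched as follows. This is a simplified stand-in, not PGAPack's actual code; the <code class="docutils literal">key</code> function plays the role of the user-defined hash function:</p>

```python
def make_unique(new_individuals, population, key=tuple):
    """Reject duplicates against the current population via hashing.

    Instead of comparing each new individual with all N population
    members (O(N) per check, O(N^2) total), keep a set of hashable
    keys so each membership test is a small constant overhead.
    'key' turns a gene (e.g. a list of alleles) into something
    hashable; tuple works for binary, integer and real genes.
    """
    seen = set(key(ind) for ind in population)
    accepted = []
    for ind in new_individuals:
        k = key(ind)
        if k not in seen:
            seen.add(k)
            accepted.append(ind)
    return accepted
```

<p>For a user-defined data type the <code class="docutils literal">key</code> would be replaced by the user-supplied hash function mentioned above.</p>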
</section>
<section id="restricted-tournament-replacement">
<h2>Restricted Tournament Replacement</h2>
<p>An algorithm originally called <em>restricted tournament selection</em> by
Harik <a class="brackets" href="https://blog.runtux.com/posts/2023/01/06/#footnote-2" id="footnote-reference-3" role="doc-noteref"><span class="fn-bracket">[</span>2<span class="fn-bracket">]</span></a>, <a class="brackets" href="https://blog.runtux.com/posts/2023/01/06/#footnote-3" id="footnote-reference-4" role="doc-noteref"><span class="fn-bracket">[</span>3<span class="fn-bracket">]</span></a> and later adopted under the name of <em>restricted
tournament replacement</em> (RTR) by Pelikan <a class="brackets" href="https://blog.runtux.com/posts/2023/01/06/#footnote-4" id="footnote-reference-5" role="doc-noteref"><span class="fn-bracket">[</span>4<span class="fn-bracket">]</span></a> uses the old and new
population for deciding if a candidate individual will be allowed into
the new population. It works by randomly selecting a number of members
from the old population (called the selection window), choosing the
individual which is most similar to the candidate, and allowing the
candidate into the new population only if it is better than this most
similar individual.</p>
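<p>A minimal sketch of the replacement decision (my own simplified version for a minimization problem, not PGAPack's implementation; names are made up):</p>

```python
import random

def rtr_accept(candidate, cand_eval, old_pop, old_evals, window,
               distance, rng=random):
    """Restricted tournament replacement, decision step only.

    Randomly sample 'window' members of the old population, find the
    one most similar to the candidate under 'distance', and accept the
    candidate only if it is better (here: lower evaluation).  Returns
    the index of the member to replace, or None if rejected.
    """
    indices = rng.sample(range(len(old_pop)), window)
    nearest = min(indices, key=lambda j: distance(candidate, old_pop[j]))
    if cand_eval < old_evals[nearest]:
        return nearest
    return None

def manhattan(a, b):
    # allele-by-allele sum of distances (the default similarity metric)
    return sum(abs(x - y) for x, y in zip(a, b))
```

<p>Because a candidate only ever displaces its most similar sampled rival, distinct niches of the search space can coexist in the population.</p>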
<p>The default for the number of individuals randomly selected from the old
population (the window size) is <span class="math">\(\min (n,
\frac{N}{20})\)</span> <a class="brackets" href="https://blog.runtux.com/posts/2023/01/06/#footnote-4" id="footnote-reference-6" role="doc-noteref"><span class="fn-bracket">[</span>4<span class="fn-bracket">]</span></a> where <span class="math">\(n\)</span> is the string (gene) length and <span class="math">\(N\)</span>
is the population size. It can be set by the user with the
<code class="docutils literal">PGASetRTRWindowSize</code> function for <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a> and with the
<code class="docutils literal">rtr_window_size</code> constructor parameter of <a class="reference external" href="https://github.com/schlatterbeck/pgapy">PGApy</a>.</p>
<p>The RTR algorithm needs a similarity metric for deciding how
close an individual is to another. By default this uses a <a class="reference external" href="https://en.wikipedia.org/wiki/Taxicab_geometry">manhattan
distance</a> (equivalent to the <a class="reference external" href="https://en.wikipedia.org/wiki/Hamming_distance">hamming distance</a> for binary genes), i.e. an
allele-by-allele sum of distances, but it can be set to a Euclidean
distance or a user-defined metric with the user-function mechanism of
<a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a>. A comparison using a Euclidean distance metric for RTR is
in the <a class="reference external" href="https://github.com/schlatterbeck/pgapy/blob/master/examples/magic_square.py">magic square example</a> in <a class="reference external" href="https://github.com/schlatterbeck/pgapy">PGApy</a>, where the Euclidean
distance can be turned on with a command-line option.</p>
<p>Restricted tournament replacement works well not only for binary and
integer genes but also for real genes. It can be combined with different
evolutionary algorithm settings.</p>
<p>The effect of RTR on a problem that tends to suffer from <a class="reference external" href="https://en.wikipedia.org/wiki/Premature_convergence">premature
convergence</a> can be seen in the test program <a class="reference external" href="https://github.com/schlatterbeck/pgapack/blob/master/examples/mgh/testprog.f"><code class="docutils literal">examples/mgh/testprog.f</code></a>,
which implements several test functions from an old benchmark of
nonlinear test problems <a class="brackets" href="https://blog.runtux.com/posts/2023/01/06/#footnote-5" id="footnote-reference-7" role="doc-noteref"><span class="fn-bracket">[</span>5<span class="fn-bracket">]</span></a>. The test function that exhibits premature
convergence is what the authors call a "Gaussian function", described as
example (9) in the paper <a class="brackets" href="https://blog.runtux.com/posts/2023/01/06/#footnote-5" id="footnote-reference-8" role="doc-noteref"><span class="fn-bracket">[</span>5<span class="fn-bracket">]</span></a> and implemented as function 3 in
<a class="reference external" href="https://github.com/schlatterbeck/pgapack/blob/master/examples/mgh/objfcn77.f"><code class="docutils literal">examples/mgh/objfcn77.f</code></a>. This function is given as</p>
<div class="math">
\begin{equation*}
f_i(x) = x_1 \cdot e^{-\frac{x_2(t_i - x_3)^2}{2}} - y_i
\end{equation*}
</div>
<p>with</p>
<div class="math">
\begin{equation*}
t_i = (8 - i) / 2
\end{equation*}
</div>
<p>with tabulated values for <span class="math">\(y_i\)</span> given in the paper or in the
implementation in <a class="reference external" href="https://github.com/schlatterbeck/pgapack/blob/master/examples/mgh/objfcn77.f"><code class="docutils literal">examples/mgh/objfcn77.f</code></a>. The minimization problem
from these equations is</p>
<div class="math">
\begin{equation*}
f (x) = \sum_{i=1}^{m} f_i^2(x)
\end{equation*}
</div>
<p>with <span class="math">\(m=15\)</span> for this test problem. The authors <a class="brackets" href="https://blog.runtux.com/posts/2023/01/06/#footnote-5" id="footnote-reference-9" role="doc-noteref"><span class="fn-bracket">[</span>5<span class="fn-bracket">]</span></a> give the vector
<span class="math">\(x_0 = (0.4, 1, 0)\)</span> for the minimum
<span class="math">\(f(x_0) = 1.12793 \cdot 10^{-8}\)</span>
they found. The original Fortran implementation in
<a class="reference external" href="https://github.com/schlatterbeck/pgapack/blob/master/examples/mgh/testprog.f"><code class="docutils literal">examples/mgh/testprog.f</code></a> uses a population size of 10000 with default
settings for the <code class="docutils literal">real</code> data type of <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a>. The large population
size is chosen because otherwise the problem exhibits <a class="reference external" href="https://en.wikipedia.org/wiki/Premature_convergence">premature
convergence</a>. It finds a solution in 100 generations
<span class="math">\(x_0=(0.3983801, 0.9978369, 0.009918243)\)</span> with an evaluation value
of <span class="math">\(f(x_0)=2.849966\cdot 10^{-5}\)</span>. The number of function
evaluations needed was 105459. This is a little less than
<span class="math">\(10000 + 100 \cdot 1000\)</span>, i.e. evaluation of the initial
generation plus evaluation of 10% of the population in each generation:
the probability of crossover and mutation is not 100%, so it happens
that neither operator is performed on some individuals, which are then
not re-evaluated.</p>
<p>I've implemented the same problem with Differential Evolution <a class="brackets" href="https://blog.runtux.com/posts/2023/01/06/#footnote-6" id="footnote-reference-10" role="doc-noteref"><span class="fn-bracket">[</span>6<span class="fn-bracket">]</span></a>,
<a class="brackets" href="https://blog.runtux.com/posts/2023/01/06/#footnote-7" id="footnote-reference-11" role="doc-noteref"><span class="fn-bracket">[</span>7<span class="fn-bracket">]</span></a>, <a class="brackets" href="https://blog.runtux.com/posts/2023/01/06/#footnote-8" id="footnote-reference-12" role="doc-noteref"><span class="fn-bracket">[</span>8<span class="fn-bracket">]</span></a> in <a class="reference external" href="https://github.com/schlatterbeck/pgapack/blob/master/examples/mgh/testprogde.c"><code class="docutils literal">examples/mgh/testprogde.c</code></a> (the driver program is
implemented in C because I really do not speak Fortran, but it uses the
same functions from the Fortran implementation, linking the Fortran and
C code into a common executable). This uses a population size of 2000
(large for most problems when using Differential Evolution, again for
the reason of premature convergence) and finds the solution
<span class="math">\(x_0=(0.3992372, 0.9990791, -0.0007697923)\)</span> with an evaluation
value of <span class="math">\(f(x_0)=7.393123\cdot 10^{-7}\)</span> in only 30 generations.
This amounts to 62000 function evaluations (Differential Evolution
creates all individuals for the new generation and decides afterwards
which to keep).</p>
<p>When using RTR with this problem in <a class="reference external" href="https://github.com/schlatterbeck/pgapack/blob/master/examples/mgh/testprogdertr.c"><code class="docutils literal">examples/mgh/testprogdertr.c</code></a>, the
population size can be reduced to 250 and even after 100 generations the
search has not converged to a suboptimal solution. After 100 generations
we find <span class="math">\(x_0=(0.398975, 1.000074, -6.719886 \cdot 10^{-5})\)</span> and
<span class="math">\(f(x_0)=1.339766\cdot 10^{-8}\)</span> (but also with some changed settings of
the Differential Evolution algorithm). This amounts to 25250 function
evaluations.</p>
</section>
<section id="restart">
<h2>Restart</h2>
<p>A last resort when the above mechanisms do not work is to regularly
restart the GA whenever the population has converged too much. The
restart mechanism implemented in <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a> uses the best individual from
the population to re-seed a new population with variations created by
mutation from this best individual. Restarts can be enabled by setting
<code class="docutils literal">PGASetRestartFlag</code> in <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a> or using the <code class="docutils literal">restart</code> constructor
parameter in <a class="reference external" href="https://github.com/schlatterbeck/pgapy">PGApy</a>. The frequency (default is every 50 generations) of
restarts can be set with <code class="docutils literal">PGASetRestartFrequencyValue</code> in <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a> and
the <code class="docutils literal">restart_frequency</code> constructor parameter in <a class="reference external" href="https://github.com/schlatterbeck/pgapy">PGApy</a>.</p>
<aside class="footnote-list brackets">
<aside class="footnote brackets" id="footnote-1" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2023/01/06/#footnote-reference-1">1</a><span class="fn-bracket">]</span></span>
<p>Ralf Schlatterbeck. <a class="reference external" href="https://blog.runtux.com/posts/2022/12/03/">Proportional Selection (Roulette-Wheel) in
Genetic Algorithms</a>. Blog post, Open Source Consulting, December
2022.</p>
</aside>
<aside class="footnote brackets" id="footnote-2" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2023/01/06/#footnote-reference-3">2</a><span class="fn-bracket">]</span></span>
<p>Georges Harik. Finding multiple solutions in problems of bounded
difficulty. IlliGAL Report 94002, Illinois Genetic Algorithm Lab,
May 1994.</p>
</aside>
<aside class="footnote brackets" id="footnote-3" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2023/01/06/#footnote-reference-4">3</a><span class="fn-bracket">]</span></span>
<p>Georges R. Harik. Finding multimodal solutions using restricted
tournament selection. In Larry J. Eshelman, editor, <em>Proceedings
of the International Conference on Genetic Algorithms (ICGA)</em>,
pages 24–31. Morgan Kaufmann, July 1995.</p>
</aside>
<aside class="footnote brackets" id="footnote-4" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span>4<span class="fn-bracket">]</span></span>
<span class="backrefs">(<a role="doc-backlink" href="https://blog.runtux.com/posts/2023/01/06/#footnote-reference-5">1</a>,<a role="doc-backlink" href="https://blog.runtux.com/posts/2023/01/06/#footnote-reference-6">2</a>)</span>
<p>Martin Pelikan. <em>Hierarchical Bayesian Optimization Algorithm:
Toward a New Generation of Evolutionary Algorithms</em>, volume 170 of
<em>Studies in Fuzziness and Soft Computing</em>. Springer, 2005.</p>
</aside>
<aside class="footnote brackets" id="footnote-5" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span>5<span class="fn-bracket">]</span></span>
<span class="backrefs">(<a role="doc-backlink" href="https://blog.runtux.com/posts/2023/01/06/#footnote-reference-7">1</a>,<a role="doc-backlink" href="https://blog.runtux.com/posts/2023/01/06/#footnote-reference-8">2</a>,<a role="doc-backlink" href="https://blog.runtux.com/posts/2023/01/06/#footnote-reference-9">3</a>)</span>
<p>Jorge J. Moré, Burton S. Garbow, and Kenneth E. Hillstrom.
<em>Testing unconstrained optimization software.</em> ACM Transactions
on Mathematical Software, 7(1):17–41, March 1981.</p>
</aside>
<aside class="footnote brackets" id="footnote-6" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2023/01/06/#footnote-reference-10">6</a><span class="fn-bracket">]</span></span>
<p>Rainer Storn and Kenneth Price. Differential evolution – a simple
and efficient heuristic for global optimization over continuous
spaces. <em>Journal of Global Optimization</em>, 11(4):341–359, December 1997.</p>
</aside>
<aside class="footnote brackets" id="footnote-7" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2023/01/06/#footnote-reference-11">7</a><span class="fn-bracket">]</span></span>
<p>Rainer Storn and Kenneth Price. Differential evolution – a simple
and efficient adaptive scheme for global optimization over
continuous spaces. Technical Report TR-95-012, International
Computer Science Institute (ICSI), March 1995.
<a class="reference external" href="ftp://ftp.icsi.berkeley.edu/pub/techreports/1995/tr-95-012.pdf">Available from the International Computer Science Institute</a></p>
</aside>
<aside class="footnote brackets" id="footnote-8" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2023/01/06/#footnote-reference-12">8</a><span class="fn-bracket">]</span></span>
<p>Kenneth V. Price, Rainer M. Storn, and Jouni A. Lampinen.
<em>Differential Evolution: A Practical Approach to Global Optimization.</em>
Springer, Berlin, Heidelberg, 2005.</p>
</aside>
<aside class="footnote brackets" id="footnote-9" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2023/01/06/#footnote-reference-2">9</a><span class="fn-bracket">]</span></span>
<p>David E. Goldberg, Kalyanmoy Deb, and James H. Clark. Genetic
algorithms, noise, and the sizing of populations. <em>Complex
Systems</em>, 6(4):333–362, 1992. <a class="reference external" href="https://wpmedia.wolfram.com/uploads/sites/13/2018/02/06-4-3.pdf">Available from Complex Systems</a></p>
</aside>
</aside>
</section><p>Tags: documentation, english, genetic algorithms, howto, open source, optimization. Posted at https://blog.runtux.com/posts/2023/01/06/ on Fri, 06 Jan 2023 17:45:00 GMT.</p>
- Proportional Selection (Roulette-Wheel) in Genetic Algorithms (https://blog.runtux.com/posts/2022/12/03/) by Ralf Schlatterbeck<p>Some of you know that I'm maintaining <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGApack</a>, a genetic algorithm
package, and a corresponding Python wrapper, <a class="reference external" href="https://github.com/schlatterbeck/pgapy">PGApy</a>. Recently the
question came up why <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGApack</a>, when using <a class="reference external" href="https://en.wikipedia.org/wiki/Fitness_proportionate_selection">Fitness Proportionate Selection</a>
(also known as roulette-wheel or proportional selection), errors out
because it <a class="reference external" href="https://github.com/schlatterbeck/pgapack/issues/7">cannot normalize the fitness value</a>.</p>
<p>In <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGApack</a>, the user supplies a real-valued evaluation function (which
is called for each individual) and specifies if the result of that
evaluation should be minimized or maximized (defining the optimization
direction).</p>
<p>When using <a class="reference external" href="https://en.wikipedia.org/wiki/Fitness_proportionate_selection">Fitness proportionate selection</a>, this evaluation value must be
mapped to a positive value, assigning each evaluation a share on the
roulette-wheel. If the optimization direction is <em>minimization</em>, small
values are better and need a larger part of the roulette-wheel so that
they have a higher probability of being selected. So we need to remap
the raw evaluation values to a nonnegative <em>fitness</em> that grows as
solutions get better. For a <em>minimization</em> problem, we compute the
maximum (worst) evaluation; the fitness is then the difference between
this maximum and the individual's evaluation (after scaling the maximum
a little so that no fitness value is exactly zero):</p>
<div class="math">
\begin{equation*}
F = E_{max} - E
\end{equation*}
</div>
<p>where <span class="math">\(F\)</span> is the fitness of the current individual,
<span class="math">\(E_{max}\)</span> is the maximum of all evaluations in this generation,
and <span class="math">\(E\)</span> is the evaluation of that individual.</p>
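<p>A small Python sketch of this remapping for minimization; the exact scaling PGApack applies to the maximum may differ, the factor 1.01 here is only an illustrative choice:</p>

```python
def fitness_minimize(evals, scale=1.01):
    """Map raw evaluations of a minimization problem to nonnegative
    fitness values: F = E_max - E, with E_max scaled slightly away
    from the worst value so that even the worst individual gets a
    fitness > 0.  The scaling factor is illustrative, not PGApack's.
    """
    emax = max(evals)
    # Scale away from zero; a negative or zero maximum needs different handling.
    emax = emax * scale if emax > 0 else emax * (2 - scale) if emax < 0 else 1e-9
    return [emax - e for e in evals]
```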
<p>Now when evaluation values differ by several orders of magnitude, it can
happen that the difference in that formula ends up being <span class="math">\(E_{max}\)</span>
for many (different) evaluation values. I'm calling this an <em>overflow</em>
in the error message, which is probably not the best name for it.</p>
<p>That overflow happens when <span class="math">\(E_{max}\)</span> is large compared to the current
evaluation value <span class="math">\(E\)</span> so that the difference ends up being
<span class="math">\(E_{max}\)</span> (i.e. the subtraction of <span class="math">\(E\)</span> had no effect).
In the code we check for this condition:</p>
<pre class="literal-block">if (cmax == cmax - evalue && evalue != 0)</pre>
<p>This condition triggers when subtracting the evaluation <span class="math">\(E\)</span> from
the current <span class="math">\(E_{max}\)</span> does not change <span class="math">\(E_{max}\)</span> even though
<span class="math">\(E\)</span> is not zero. So <span class="math">\(E\)</span> is so small compared to
<span class="math">\(E_{max}\)</span> that the double data type cannot represent the
difference. This happens whenever the <em>unit in the last place</em> (called
<em>ulp</em> by Goldberg, not the same Goldberg who wrote the Genetic
Algorithms book) of the <a class="reference external" href="https://en.wikipedia.org/wiki/Significand">significand</a> (also called mantissa) is
larger than the value that should be subtracted <a class="brackets" href="https://blog.runtux.com/posts/2022/12/03/#footnote-1" id="footnote-reference-1" role="doc-noteref"><span class="fn-bracket">[</span>1<span class="fn-bracket">]</span></a>.</p>
<p>In our example <span class="math">\(E_{max} = 1.077688 * 10^{22}\)</span> and the
evaluation where this failed was <span class="math">\(E = 10000\)</span>. The IEEE 754 double
precision floating-point format has 53 bits of <a class="reference external" href="https://en.wikipedia.org/wiki/Significand">significand</a>, which can
represent integers exactly up to <span class="math">\(2^{53} = 9007199254740992\)</span> or about
<span class="math">\(9 * 10^{15}\)</span>. The ulp of <span class="math">\(E_{max}\)</span> is therefore about
<span class="math">\(2 * 10^6\)</span>, so the number 10000 is well below one
<em>ulp</em>. We can try in python (which uses double for floating-point
values):</p>
<pre class="literal-block">>>> 1.077688e22 - 10000 == 1.077688e22
True</pre>
<p>Why do we make this check in the program? Letting the search continue
with such an overflow (or whatever we want to call it) would map many
different evaluation values to the same fitness, so the genetic
algorithm could not distinguish these individuals.</p>
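<p>The check and the numbers above can be reproduced directly in Python; <code class="docutils literal">math.ulp</code> (available since Python 3.9) returns the spacing between adjacent doubles at a given value:</p>

```python
import math

def fitness_overflow(cmax, evalue):
    """True when subtracting evalue from cmax has no effect even
    though evalue is nonzero (mirrors the check in the C code)."""
    return cmax == cmax - evalue and evalue != 0

cmax = 1.077688e22
# Adjacent doubles near cmax are 2**21 = 2097152 apart, so subtracting
# 10000 (far less than half an ulp) rounds back to cmax itself.
print(math.ulp(cmax))                  # 2097152.0
print(fitness_overflow(cmax, 10000))   # True
print(fitness_overflow(cmax, 2e6))     # False (2e6 is close to one ulp)
```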
<p>So what can we do about it when that error happens?</p>
<p>The short answer: Use a different selection scheme. There is a reason
why the default in <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGApack</a> is <em>not</em> proportional selection.</p>
<p><a class="reference external" href="https://en.wikipedia.org/wiki/Fitness_proportionate_selection">Fitness proportionate selection</a> (aka roulette-wheel selection) has other
problems, too. It has too much selection pressure in the beginning and
too little at the end (also mentioned in the Wikipedia article, but beware:
I've written parts of it).</p>
<p>Blickle and Thiele <a class="brackets" href="https://blog.runtux.com/posts/2022/12/03/#footnote-2" id="footnote-reference-2" role="doc-noteref"><span class="fn-bracket">[</span>2<span class="fn-bracket">]</span></a> did a mathematical analysis of different
selection schemes and showed that proportional selection is typically not a
good idea (it was historically the first selection scheme and is
described in Goldberg's (the other Goldberg) book <a class="brackets" href="https://blog.runtux.com/posts/2022/12/03/#footnote-3" id="footnote-reference-3" role="doc-noteref"><span class="fn-bracket">[</span>3<span class="fn-bracket">]</span></a>, which is probably
the reason it is still being used). Note that there is an earlier report
from Blickle and Thiele <a class="brackets" href="https://blog.runtux.com/posts/2022/12/03/#footnote-4" id="footnote-reference-4" role="doc-noteref"><span class="fn-bracket">[</span>4<span class="fn-bracket">]</span></a> that is more frank about the use of
proportional selection: "All the undesired properties together led us to
the conclusion that proportional selection is a very unsuited selection
scheme. Informally one can say that the only advantage of proportional
selection is that it is so difficult to prove the disadvantages" (<a class="brackets" href="https://blog.runtux.com/posts/2022/12/03/#footnote-4" id="footnote-reference-5" role="doc-noteref"><span class="fn-bracket">[</span>4<span class="fn-bracket">]</span></a>,
p. 42); they were not as outspoken in the final paper :-)</p>
<p>We're seeing this in the example above: we have very large differences
between good and bad evaluations (in fact so large that the fitness
cannot be computed, see above). So when using proportional selection, the
very good individuals will be selected with too high a probability,
resulting in <a class="reference external" href="https://en.wikipedia.org/wiki/Premature_convergence">premature convergence</a>.</p>
<p>That all said: if you're doing optimization with genes of type 'real'
(represented by double values in <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGApack</a>),
you may want to try Differential Evolution <a class="brackets" href="https://blog.runtux.com/posts/2022/12/03/#footnote-5" id="footnote-reference-6" role="doc-noteref"><span class="fn-bracket">[</span>5<span class="fn-bracket">]</span></a>, <a class="brackets" href="https://blog.runtux.com/posts/2022/12/03/#footnote-6" id="footnote-reference-7" role="doc-noteref"><span class="fn-bracket">[</span>6<span class="fn-bracket">]</span></a>, <a class="brackets" href="https://blog.runtux.com/posts/2022/12/03/#footnote-7" id="footnote-reference-8" role="doc-noteref"><span class="fn-bracket">[</span>7<span class="fn-bracket">]</span></a>. At least in
my experiments with antenna optimization <a class="brackets" href="https://blog.runtux.com/posts/2022/12/03/#footnote-8" id="footnote-reference-9" role="doc-noteref"><span class="fn-bracket">[</span>8<span class="fn-bracket">]</span></a> the results outperform the
standard genetic algorithm, but this is reported by several
practitioners <a class="brackets" href="https://blog.runtux.com/posts/2022/12/03/#footnote-7" id="footnote-reference-10" role="doc-noteref"><span class="fn-bracket">[</span>7<span class="fn-bracket">]</span></a>. Examples of how to use this are in
<code class="docutils literal">examples/deb/optimize.c</code> or <code class="docutils literal">examples/mgh/testprogde.c</code> in
<a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGApack</a>.</p>
<p>The <code class="docutils literal">PGASetDECrossoverProb</code> setting is critical: for problems where
the dimensions cannot be optimized separately, the value should be set
close to 1 but not equal to 1.</p>
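<p>The overall scheme can be sketched in a few lines of pure Python. This is a compact illustration of classic DE/rand/1/bin, not PGApack's implementation; here <code class="docutils literal">cr</code> plays the role that <code class="docutils literal">PGASetDECrossoverProb</code> sets:</p>

```python
import random

def de_rand_1_bin(pop, fitness, f=0.8, cr=0.9):
    """One generation of DE/rand/1/bin for a minimization problem.

    For every target vector, build a mutant from three other random
    individuals, mix it with the target gene by gene with probability
    cr, and keep whichever of trial/target evaluates better.
    """
    n = len(pop[0])
    new_pop = []
    for i, target in enumerate(pop):
        # Three distinct individuals, all different from the target.
        r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
        mutant = [pop[r1][k] + f * (pop[r2][k] - pop[r3][k]) for k in range(n)]
        jrand = random.randrange(n)  # ensure at least one gene from the mutant
        trial = [mutant[k] if (random.random() < cr or k == jrand) else target[k]
                 for k in range(n)]
        # Selection: keep the trial only if it is no worse than the target.
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop
```

<p>For non-separable problems <code class="docutils literal">cr</code> should be close to, but below, 1, as discussed above.</p>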
<aside class="footnote-list brackets">
<aside class="footnote brackets" id="footnote-1" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2022/12/03/#footnote-reference-1">1</a><span class="fn-bracket">]</span></span>
<p>David Goldberg. What every computer scientist should know
about floating-point arithmetic. ACM Computing Surveys,
23(1):5–48, 1991.
<a class="reference external" href="https://dl.acm.org/doi/pdf/10.1145/103162.103163">ACM makes this available for free</a></p>
</aside>
<aside class="footnote brackets" id="footnote-2" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2022/12/03/#footnote-reference-2">2</a><span class="fn-bracket">]</span></span>
<p>Tobias Blickle and Lothar Thiele. A comparison of selection
schemes used in evolutionary algorithms. Evolutionary Computation,
4(4):361–394, 1996. <a class="reference external" href="https://dl.acm.org/doi/pdf/10.1162/evco.1996.4.4.361">Available from ACM</a></p>
</aside>
<aside class="footnote brackets" id="footnote-3" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2022/12/03/#footnote-reference-3">3</a><span class="fn-bracket">]</span></span>
<p>David E. Goldberg. Genetic Algorithms in Search, Optimization &
Machine Learning. Addison Wesley, October 1989.</p>
</aside>
<aside class="footnote brackets" id="footnote-4" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span>4<span class="fn-bracket">]</span></span>
<span class="backrefs">(<a role="doc-backlink" href="https://blog.runtux.com/posts/2022/12/03/#footnote-reference-4">1</a>,<a role="doc-backlink" href="https://blog.runtux.com/posts/2022/12/03/#footnote-reference-5">2</a>)</span>
<p>Tobias Blickle and Lothar Thiele. A comparison of selection
schemes used in genetic algorithms. TIK-Report 11 Version 2,
Computer Engineering and Communications Lab (TIK), December 1995.
<a class="reference external" href="https://tik-old.ee.ethz.ch/file/6c0e384dceb283cd4301339a895b72b8/TIK-Report11.pdf">Available from ETH</a></p>
</aside>
<aside class="footnote brackets" id="footnote-5" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2022/12/03/#footnote-reference-6">5</a><span class="fn-bracket">]</span></span>
<p>Rainer Storn and Kenneth Price. Differential evolution – a simple
and efficient heuristic for global optimization over continuous
spaces. Journal of Global Optimization, 11(4):341–359, December 1997.</p>
</aside>
<aside class="footnote brackets" id="footnote-6" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2022/12/03/#footnote-reference-7">6</a><span class="fn-bracket">]</span></span>
<p>Rainer Storn and Kenneth Price. Differential evolution – a simple
and efficient adaptive scheme for global optimization over
continuous spaces. Technical Report TR-95-012, International
Computer Science Institute (ICSI), March 1995.
<a class="reference external" href="ftp://ftp.icsi.berkeley.edu/pub/techreports/1995/tr-95-012.pdf">Available from the International Computer Science Institute</a></p>
</aside>
<aside class="footnote brackets" id="footnote-7" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span>7<span class="fn-bracket">]</span></span>
<span class="backrefs">(<a role="doc-backlink" href="https://blog.runtux.com/posts/2022/12/03/#footnote-reference-8">1</a>,<a role="doc-backlink" href="https://blog.runtux.com/posts/2022/12/03/#footnote-reference-10">2</a>)</span>
<p>Kenneth V. Price, Rainer M. Storn, and Jouni A. Lampinen.
Differential Evolution: A Practical Approach to Global Optimization.
Springer, Berlin, Heidelberg, 2005.</p>
</aside>
<aside class="footnote brackets" id="footnote-8" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2022/12/03/#footnote-reference-9">8</a><span class="fn-bracket">]</span></span>
<p>Ralf Schlatterbeck. <a class="reference external" href="https://blog.runtux.com/posts/2021/12/27/">Multi-objective antenna optimization</a>. Blog post,
Open Source Consulting, December 2021.</p>
</aside>
</aside><p>Tags: documentation, english, genetic algorithms, howto, open source, optimization. Posted at https://blog.runtux.com/posts/2022/12/03/ on Sat, 03 Dec 2022 15:28:00 GMT.</p>
- Epsilon-Constrained Optimization (https://blog.runtux.com/posts/2022/08/29/) by Ralf Schlatterbeck<p>[Update 2022-10-18: Replace epsilon with delta in description of example
problem 7 in pgapack]</p>
<p>Many optimization problems involve constraints that valid solutions must
satisfy. A <a class="reference internal" href="https://blog.runtux.com/posts/2022/08/29/#constrained-optimization-problem">constrained optimization problem</a>
is typically written as a nonlinear programming problem <a class="brackets" href="https://blog.runtux.com/posts/2022/08/29/#footnote-1" id="footnote-reference-1" role="doc-noteref"><span class="fn-bracket">[</span>1<span class="fn-bracket">]</span></a>.</p>
<div class="math" id="constrained-optimization-problem">
\begin{align*}
\hbox{Minimize} \; & f_i(\vec{x}), & i &= 1, \ldots, I \\
\hbox{Subject to} \; & g_j(\vec{x}) \le 0, & j &= 1, \ldots, J \\
& h_k(\vec{x}) = 0, & k &= 1, \ldots, K \\
& x_m^l \le x_m \le x_m^u, & m &= 1, \ldots, M \\
\end{align*}
</div>
<p>In this problem, there are <span class="math">\(M\)</span> variables (the vector
<span class="math">\(\vec{x}\)</span> is of size <span class="math">\(M\)</span>), <span class="math">\(J\)</span> inequality constraints,
and <span class="math">\(K\)</span> equality constraints; each variable <span class="math">\(x_m\)</span> must lie in the
range <span class="math">\([x_m^l, x_m^u]\)</span> (often called a box constraint). The
functions <span class="math">\(f_i\)</span> are called the objective functions. If there is more
than one objective function, the problem is said to be a multi-objective
optimization problem, as described previously in this blog <a class="brackets" href="https://blog.runtux.com/posts/2022/08/29/#footnote-2" id="footnote-reference-2" role="doc-noteref"><span class="fn-bracket">[</span>2<span class="fn-bracket">]</span></a>.
In the following I will use some terms that were introduced in that
earlier blog post without further explanation.</p>
<p>The objective functions are not necessarily minimized (as in the
formula) but can also be maximized if the best solutions to a problem
require maximization. Note that the inequality constraints are often
written with a <span class="math">\(\ge\)</span> sign, but the formula can easily be changed
(e.g. by multiplying by -1) to use a <span class="math">\(\le\)</span> sign.</p>
<p>Since it is very hard to fulfil equality constraints exactly, especially
if they involve nonlinear functions of the input variables, equality
constraints are often converted to inequality constraints using a
δ‑neighborhood:</p>
<div class="math">
\begin{equation*}
-\delta \le h_k(\vec{x}) \le \delta
\end{equation*}
</div>
<p>Where δ is chosen according to the requirements of the problem
for which a solution is sought.</p>
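<p>In code this relaxation is just a clipped absolute value. A minimal sketch; the default δ of 1e-3 is only an example, δ must be chosen per problem:</p>

```python
def equality_violation(h_value, delta=1e-3):
    """Violation of an equality constraint h(x) = 0 relaxed to
    -delta <= h(x) <= delta.  Zero means the relaxed constraint is
    satisfied; the positive excess otherwise."""
    return max(0.0, abs(h_value) - delta)
```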
<p>One very successful method of solving constrained optimization problems
is to consider a lexicographic ordering of constraints and objective
function values. Candidate solutions to the optimization problem are
first sorted by violated constraints (typically the sum over all
violated constraints) and then by the objective function value(s) <a class="brackets" href="https://blog.runtux.com/posts/2022/08/29/#footnote-1" id="footnote-reference-3" role="doc-noteref"><span class="fn-bracket">[</span>1<span class="fn-bracket">]</span></a>.
When comparing two individuals during selection in the genetic algorithm
there are three cases: If both individuals violate constraints, the
individual with the lower constraint violation wins. If one violates
constraints and the other does not, the one not violating constraints
wins. In the last case, where neither individual violates constraints,
the normal comparison is used (which depends on the algorithm and on
whether we're minimizing or maximizing).
This method, originally proposed by Deb <a class="brackets" href="https://blog.runtux.com/posts/2022/08/29/#footnote-1" id="footnote-reference-4" role="doc-noteref"><span class="fn-bracket">[</span>1<span class="fn-bracket">]</span></a>, is implemented in the
genetic algorithm package I'm maintaining, <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a>, and my Python
wrapper <a class="reference external" href="https://github.com/schlatterbeck/pgapy">PGAPy</a> for it.</p>
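<p>The three cases can be sketched in a few lines of Python. This is a sketch of Deb's rule for a minimization problem, not PGAPack's actual code:</p>

```python
def deb_compare(viol_a, eval_a, viol_b, eval_b):
    """Deb's lexicographic comparison; returns True if A is preferred.

    viol_* is the sum of constraint violations (0 if feasible),
    eval_* the raw objective value of a minimization problem.
    """
    if viol_a > 0 or viol_b > 0:
        # At least one individual is infeasible: less violation wins.
        return viol_a < viol_b
    # Both feasible: the normal comparison (here: minimization).
    return eval_a < eval_b
```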
<p>With this algorithm for handling constraints, the constraints are
optimized first before the algorithm "looks" at the objective
function(s) at all. It often happens that the algorithm ends up
searching in a region of the input space where no good solutions exist
(but no constraints are violated). Hard problems often contain equality
constraints (converted to inequality constraints as indicated earlier)
or other "hard" constraints. In my previous blog post <a class="brackets" href="https://blog.runtux.com/posts/2022/08/29/#footnote-2" id="footnote-reference-5" role="doc-noteref"><span class="fn-bracket">[</span>2<span class="fn-bracket">]</span></a> on
antenna optimization I wrote: "the optimization algorithm has a hard
time finding the director solutions at all. Only in one of a handful
experiments I was able to obtain the pareto front plotted above".</p>
<p>In that experiment I was running 50 searches and only 5 of them did
<em>not</em> get stuck in a local optimum. A similar thing happens for the
problem (7) in Deb's paper <a class="brackets" href="https://blog.runtux.com/posts/2022/08/29/#footnote-1" id="footnote-reference-6" role="doc-noteref"><span class="fn-bracket">[</span>1<span class="fn-bracket">]</span></a> which has equality constraints. I've
implemented this as <a class="reference external" href="https://github.com/schlatterbeck/pgapack/blob/master/examples/deb/deb7.c">example problem 7</a> in <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a>. It only finds a
solution near the (known) optimum when <span class="math">\(\delta \ge 10^{-2}\)</span> for
all equality constraints (I didn't experiment with different random
seeds for the optimizer, maybe a better solution would be possible with
a different random seed). In the paper <a class="brackets" href="https://blog.runtux.com/posts/2022/08/29/#footnote-1" id="footnote-reference-7" role="doc-noteref"><span class="fn-bracket">[</span>1<span class="fn-bracket">]</span></a>, Deb uses
<span class="math">\(\delta = 10^{-3}\)</span> for the same reason.</p>
<p>One method for handling this problem is appealing because it is so
easy to understand and implement:
Takahama and Sakai first experimented with a method for relaxing
constraints during the early stages of optimization, with a formulation
they called an α‑constrained genetic algorithm <a class="brackets" href="https://blog.runtux.com/posts/2022/08/29/#footnote-3" id="footnote-reference-8" role="doc-noteref"><span class="fn-bracket">[</span>3<span class="fn-bracket">]</span></a>. They
later simplified the formulation and called the resulting algorithm
ε constrained optimization. It can be applied to different
optimization algorithms, not just genetic algorithms and variants <a class="brackets" href="https://blog.runtux.com/posts/2022/08/29/#footnote-4" id="footnote-reference-9" role="doc-noteref"><span class="fn-bracket">[</span>4<span class="fn-bracket">]</span></a>.
Of special interest is the application of the method to differential
evolution <a class="brackets" href="https://blog.runtux.com/posts/2022/08/29/#footnote-5" id="footnote-reference-10" role="doc-noteref"><span class="fn-bracket">[</span>5<span class="fn-bracket">]</span></a>, <a class="brackets" href="https://blog.runtux.com/posts/2022/08/29/#footnote-6" id="footnote-reference-11" role="doc-noteref"><span class="fn-bracket">[</span>6<span class="fn-bracket">]</span></a> but of course it can also be applied to other forms
of genetic algorithms.</p>
<p>Note that the ε in the name of the algorithm <em>can</em> be used for
the δ used when converting an equality constraint to inequality
constraints but is not limited to this case.</p>
<p>During the run of the optimizer in each generation a new value for
ε is computed. The comparison of individuals outlined above is
modified, so that an individual is handled like it was not violating any
constraints if the constraint violation is below ε. So if both
individuals have constraint violations larger than ε, the one
with lower violation wins. If one violation is below ε and the
other above, the individual with the violation below ε wins. And
finally if the constraint violations of both individuals are below
ε, the normal comparison takes place.</p>
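<p>The modified comparison can be sketched like this; setting eps = 0 recovers the plain lexicographic rule from above (again a sketch, not PGAPack's implementation):</p>

```python
def eps_compare(viol_a, eval_a, viol_b, eval_b, eps):
    """Epsilon-level comparison (after Takahama and Sakai), minimization.

    Individuals whose constraint violation is at most eps are treated
    as if they were feasible; returns True if A is preferred over B.
    """
    ok_a = viol_a <= eps
    ok_b = viol_b <= eps
    if ok_a and ok_b:
        return eval_a < eval_b   # both "feasible enough": compare objectives
    if ok_a != ok_b:
        return ok_a              # the one below the eps level wins
    return viol_a < viol_b       # both above eps: less violation wins
```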
<p>The last case is the key to the success of this algorithm: Even though
the search proceeds into a direction where the constraint violations are
minimized, <em>at the same time</em> good solutions in terms of the objective
function are found.</p>
<p>The algorithm begins by initializing <span class="math">\(\varepsilon_0\)</span> with the constraint
violation of the individual with index <span class="math">\(\theta\)</span> from the initial
population sorted by constraint violation. Here
<span class="math">\(\theta\)</span> is a parameter of the algorithm between 1 and the
population size; a good value uses the individual at about 20%
of the population size, which is also the default in <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a>.
In each generation <span class="math">\(t\)</span>, <span class="math">\(\varepsilon_t\)</span> is computed by</p>
<div class="math">
\begin{equation*}
\varepsilon_t = \varepsilon_0 \left(1-\frac{t}{T_c}\right)^{cp}
\end{equation*}
</div>
<p>up to generation <span class="math">\(T_c\)</span>. After that generation, ε is set to
0. The exponent <span class="math">\(cp\)</span> is between 2 and 10. The 2010 paper <a class="brackets" href="https://blog.runtux.com/posts/2022/08/29/#footnote-6" id="footnote-reference-12" role="doc-noteref"><span class="fn-bracket">[</span>6<span class="fn-bracket">]</span></a>
recommends to set <span class="math">\(cp = 0.3 cp + 0.7 cp_\min\)</span> at
generation <span class="math">\(T_\lambda = 0.95 T_c\)</span> where <span class="math">\(cp_\min\)</span> is
the fixed value 3. The initial value of <span class="math">\(cp\)</span> is chosen so that
<span class="math">\(\varepsilon_\lambda=10^{-5}\)</span> at generation <span class="math">\(T_\lambda\)</span> unless
it is smaller than <span class="math">\(cp_\min\)</span> in which case it is set to
<span class="math">\(cp_\min\)</span>. <a class="reference external" href="https://github.com/schlatterbeck/pgapack">PGAPack</a> implements this
schedule for <span class="math">\(cp\)</span> by default but allows to change <span class="math">\(cp\)</span>
at start and during run of the optimizer, so it's possible to easily
implement a different schedule for <span class="math">\(cp\)</span> – the default works
quite nicely, though.</p>
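<p>A sketch of the schedule in Python, stateless for simplicity; the default parameter values here are illustrative, not PGAPack's:</p>

```python
def epsilon_schedule(eps0, t, t_c, cp=5.0, cp_min=3.0, t_frac=0.95):
    """eps_t = eps0 * (1 - t/Tc)**cp up to generation Tc, 0 afterwards.

    Applies the 2010 refinement: from generation T_lambda = 0.95 * Tc
    on, the exponent is blended once via cp = 0.3*cp + 0.7*cp_min.
    """
    if t >= t_c:
        return 0.0
    if t >= t_frac * t_c:
        cp = 0.3 * cp + 0.7 * cp_min
    return eps0 * (1.0 - t / t_c) ** cp
```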
<p>With the ε constraint method, example 7 from Deb <a class="brackets" href="https://blog.runtux.com/posts/2022/08/29/#footnote-1" id="footnote-reference-13" role="doc-noteref"><span class="fn-bracket">[</span>1<span class="fn-bracket">]</span></a> can be
optimized with a precision of <span class="math">\(10^{-6}\)</span> in my experiments; see the
<code class="docutils literal">epsilon_generation</code> parameter in the <a class="reference external" href="https://github.com/schlatterbeck/pgapack/blob/master/examples/deb/optimize.c">optimizer example</a>.</p>
<p>The <a class="reference external" href="https://github.com/schlatterbeck/antenna-optimizer">antenna-optimizer</a> with an ε‑generation of 50 (that's
the <span class="math">\(T_c\)</span> parameter of the algorithm) gets stuck in the local
optimum in only <em>one</em> of 50 cases; all other cases find good results:</p>
<iframe class="iframe-embed" width="640" height="480" scrolling="no" frameborder="no" src="https://blog.runtux.com/content/2022/plot-50.html">
</iframe><p>In that picture all the solutions that are dominated by solutions from
another run are drawn in black. It can be seen that the data from run
number 16 did not contribute any non-dominated solutions (on the right
side in the legend the number 16 is missing). You can turn off the
display of the dominated solutions by clicking on the black dot in the
legend.</p>
<p>When I increase the ε‑generation to 60, the run with random
seed 16 also finds a solution:</p>
<iframe class="iframe-embed" width="640" height="480" scrolling="no" frameborder="no" src="https://blog.runtux.com/content/2022/plot-50+60.html">
</iframe><p>We also see that the solutions are quite good for all runs:
the black "shadow" of the dominated solutions is quite near to the
real pareto front, so a single run of the algorithm is enough to find
a good set of solutions.</p>
<aside class="footnote-list brackets">
<aside class="footnote brackets" id="footnote-1" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span>1<span class="fn-bracket">]</span></span>
<span class="backrefs">(<a role="doc-backlink" href="https://blog.runtux.com/posts/2022/08/29/#footnote-reference-1">1</a>,<a role="doc-backlink" href="https://blog.runtux.com/posts/2022/08/29/#footnote-reference-3">2</a>,<a role="doc-backlink" href="https://blog.runtux.com/posts/2022/08/29/#footnote-reference-4">3</a>,<a role="doc-backlink" href="https://blog.runtux.com/posts/2022/08/29/#footnote-reference-6">4</a>,<a role="doc-backlink" href="https://blog.runtux.com/posts/2022/08/29/#footnote-reference-7">5</a>,<a role="doc-backlink" href="https://blog.runtux.com/posts/2022/08/29/#footnote-reference-13">6</a>)</span>
<p>Kalyanmoy Deb. An efficient constraint handling method for
genetic algorithms. Computer Methods in Applied Mechanics and
Engineering, 186(2–4):311–338, June 2000.</p>
</aside>
<aside class="footnote brackets" id="footnote-2" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span>2<span class="fn-bracket">]</span></span>
<span class="backrefs">(<a role="doc-backlink" href="https://blog.runtux.com/posts/2022/08/29/#footnote-reference-2">1</a>,<a role="doc-backlink" href="https://blog.runtux.com/posts/2022/08/29/#footnote-reference-5">2</a>)</span>
<p>Ralf Schlatterbeck. <a class="reference external" href="https://blog.runtux.com/posts/2021/12/27/">Multi-objective antenna optimization</a>.
Blog post, Open Source Consulting, December 2021.</p>
</aside>
<aside class="footnote brackets" id="footnote-3" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2022/08/29/#footnote-reference-8">3</a><span class="fn-bracket">]</span></span>
<p>Tetsuyuki Takahama and Setsuko Sakai. Constrained optimization
by α constrained genetic algorithm (αGA). Systems
and Computers in Japan, 35(5):11–22, May 2004.</p>
</aside>
<aside class="footnote brackets" id="footnote-4" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2022/08/29/#footnote-reference-9">4</a><span class="fn-bracket">]</span></span>
<p>Tetsuyuki Takahama and Setsuko Sakai. Constrained optimization
by ε constrained particle swarm optimizer with ε‑level control. In Abraham et al., editors, Soft Computing as
Transdisciplinary Science and Technology, volume 29 of Advances
in Soft Computing, pages 1019–1029. Springer, 2005.</p>
</aside>
<aside class="footnote brackets" id="footnote-5" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2022/08/29/#footnote-reference-10">5</a><span class="fn-bracket">]</span></span>
<p>Tetsuyuki Takahama and Setsuko Sakai. Constrained optimization by
the ε constrained differential evolution with gradient-based
mutation and feasible elites. In IEEE International Conference on
Evolutionary Computation (CEC). Vancouver, BC, Canada, July 2006.</p>
</aside>
<aside class="footnote brackets" id="footnote-6" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span>6<span class="fn-bracket">]</span></span>
<span class="backrefs">(<a role="doc-backlink" href="https://blog.runtux.com/posts/2022/08/29/#footnote-reference-11">1</a>,<a role="doc-backlink" href="https://blog.runtux.com/posts/2022/08/29/#footnote-reference-12">2</a>)</span>
<p>Tetsuyuki Takahama and Setsuko Sakai. Constrained optimization by
the ε constrained differential evolution with an archive and
gradient-based mutation. In IEEE Congress on Evolutionary Computation
(CEC), Barcelona, Spain, July 2010.</p>
</aside>
</aside>
<p>Tags: documentation, english, genetic algorithms, hamradio, hardware,
open source, optimization. https://blog.runtux.com/posts/2022/08/29/
Mon, 29 Aug 2022 16:00:00 GMT</p>
- Multi-Objective Antenna Optimization (https://blog.runtux.com/posts/2021/12/27/) Ralf Schlatterbeck<p>For quite some time I have been optimizing antennas using genetic
algorithms. I use the <a class="reference external" href="https://github.com/schlatterbeck/pgapack">pgapack</a> parallel genetic algorithm package, originally
written by David Levine at Argonne National Laboratory, which I now maintain.
For even longer than I have been maintaining <a class="reference external" href="https://github.com/schlatterbeck/pgapack">pgapack</a> I have been developing a <a class="reference external" href="https://python.org">Python</a> wrapper for
<a class="reference external" href="https://github.com/schlatterbeck/pgapack">pgapack</a> called <a class="reference external" href="https://github.com/schlatterbeck/pgapy">pgapy</a>.</p>
<p>For the antenna simulation part I use Tim Molteno's <a class="reference external" href="https://github.com/tmolteno/python-necpp">PyNEC</a>, a <a class="reference external" href="https://python.org">Python</a>
wrapper for the Numerical Electromagnetics Code (NEC) version 2 as
rewritten in C++ (aka NEC++).</p>
<p>Using these packages I've written a small open source framework for
optimizing antennas called <a class="reference external" href="https://github.com/schlatterbeck/antenna-optimizer">antenna-optimizer</a>. It can use the
traditional genetic algorithm approach with bit-strings as genes as well
as a floating-point representation with operators suited for
floating-point genes.</p>
<p>The <em>parallel</em> in <a class="reference external" href="https://github.com/schlatterbeck/pgapack">pgapack</a> means that the evaluation function of the
genetic algorithm can be parallelized. When optimizing antennas we
simulate each candidate parameter set for an antenna using the antenna
simulation of PyNEC. Antenna simulation is still a CPU-intensive
undertaking (the original NEC code is from the 1980s and was conceived
with punched cards for I/O). So it is good news that with <a class="reference external" href="https://github.com/schlatterbeck/pgapack">pgapack</a> we can run
many simulations in parallel using the message passing interface
(MPI) standard <a class="brackets" href="https://blog.runtux.com/posts/2021/12/27/#footnote-1" id="footnote-reference-1" role="doc-noteref"><span class="fn-bracket">[</span>1<span class="fn-bracket">]</span></a>.</p>
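<p>pgapack distributes the evaluations over MPI ranks internally. As a rough
stand-in illustration of the same idea using only the Python standard library
(the <code class="docutils literal">simulate</code> function here is a hypothetical placeholder for a PyNEC run):</p>

```python
from multiprocessing import Pool

def simulate(params):
    # Hypothetical placeholder for one NEC2/PyNEC antenna simulation;
    # a real evaluation costs seconds to minutes of CPU time.
    return sum(p * p for p in params)

def evaluate_population(population, workers=2):
    # Each candidate is independent of the others, so the whole
    # population can be evaluated concurrently -- pgapack does the
    # same across MPI processes, possibly on several hosts.
    with Pool(workers) as pool:
        return pool.map(simulate, population)
```

Because the per-candidate simulation dominates the runtime, the speedup is
close to linear in the number of workers as long as the population is large
enough to keep them busy.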
<p>For <a class="reference external" href="https://github.com/schlatterbeck/pgapack">pgapack</a> – and also for <a class="reference external" href="https://github.com/schlatterbeck/pgapy">pgapy</a> – I've recently implemented some
classic algorithms that have proven very useful over time:</p>
<ul class="simple">
<li><p>Differential Evolution <a class="brackets" href="https://blog.runtux.com/posts/2021/12/27/#footnote-2" id="footnote-reference-2" role="doc-noteref"><span class="fn-bracket">[</span>2<span class="fn-bracket">]</span></a>, <a class="brackets" href="https://blog.runtux.com/posts/2021/12/27/#footnote-3" id="footnote-reference-3" role="doc-noteref"><span class="fn-bracket">[</span>3<span class="fn-bracket">]</span></a>, <a class="brackets" href="https://blog.runtux.com/posts/2021/12/27/#footnote-4" id="footnote-reference-4" role="doc-noteref"><span class="fn-bracket">[</span>4<span class="fn-bracket">]</span></a> is a highly successful
optimization algorithm for floating-point genes that is very
interesting for electromagnetics problems</p></li>
<li><p>The elitist Nondominated Sorting Genetic Algorithm NSGA-II <a class="brackets" href="https://blog.runtux.com/posts/2021/12/27/#footnote-5" id="footnote-reference-5" role="doc-noteref"><span class="fn-bracket">[</span>5<span class="fn-bracket">]</span></a>
makes it possible to optimize multiple objectives in a single run of the optimizer</p></li>
<li><p>We can have constraints on the optimization using constraint functions
that are minimized. For a solution to be valid, all constraints must
be zero or negative. <a class="brackets" href="https://blog.runtux.com/posts/2021/12/27/#footnote-6" id="footnote-reference-6" role="doc-noteref"><span class="fn-bracket">[</span>6<span class="fn-bracket">]</span></a></p></li>
</ul>
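<p>The Differential Evolution variation step mentioned in the first point can be
sketched as follows (the classic DE/rand/1/bin scheme; parameter names and
defaults here are illustrative, not pgapack's API):</p>

```python
import random

def de_rand_1_bin(pop, i, F=0.85, CR=0.2, dither=0.0):
    """Create a DE trial vector for population member i (DE/rand/1/bin).

    Picks three distinct other members r1, r2, r3, builds the donor
    r1 + F' * (r2 - r3), where F' is uniformly dithered in
    [F - dither, F + dither], then binomially crosses the donor with
    pop[i] at rate CR."""
    dim = len(pop[i])
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    f = F + dither * (2.0 * random.random() - 1.0)
    donor = [pop[r1][d] + f * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
    jrand = random.randrange(dim)  # guarantee at least one donor component
    return [donor[d] if (random.random() < CR or d == jrand) else pop[i][d]
            for d in range(dim)]
```

The <code class="docutils literal">dither</code> parameter is the same mechanism discussed further below:
a small random perturbation of the scale factor for every generated variation.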
<p>Traditionally, genetic algorithms support only a single evaluation
function, also called the <em>objective function</em>. With NSGA-II it is
possible to have several objective functions. We call such an algorithm
a <em>multi-objective</em> optimization algorithm.</p>
<p>For antenna simulation this means that we don't need to combine
different antenna criteria like gain, forward/backward ratio, and
standing wave ratio (VSWR) into a single evaluation function which I was
using in <a class="reference external" href="https://github.com/schlatterbeck/antenna-optimizer">antenna-optimizer</a>, but instead we can specify them
separately and leave the optimization to the genetic search.</p>
<p>With multiple objectives, however, a solution that is better in one
objective is typically worse in another objective and vice-versa.
So we are searching for solutions that no other solution is <em>strictly</em>
better than. A solution is said to <em>dominate</em> another solution when it is
strictly better in at least one objective and not worse in any other
objective. All solutions that are not dominated by any other solution
are said to be
<a class="reference external" href="https://en.wikipedia.org/wiki/Pareto_efficiency">Pareto-optimal</a>, named after the Italian scientist <a class="reference external" href="https://en.wikipedia.org/wiki/Vilfredo_Pareto">Vilfredo
Pareto</a> who first defined the concept of Pareto optimality. All
solutions that fulfill the Pareto optimality criterion are said to lie
on a <em>Pareto front</em>. For two objectives the Pareto front can be shown in
a scatter plot as we will see below.</p>
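<p>For two objectives that are both maximized (such as gain and
forward/backward ratio), dominance and the Pareto front can be sketched
as below. This is a naive O(n²) filter, fine for plotting a few hundred
points; NSGA-II itself uses a faster nondominated sorting procedure:</p>

```python
def dominates(a, b):
    """a dominates b (maximization): at least as good in every
    objective and strictly better in at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

A point never dominates itself (no strictly better component), so the filter
keeps every member of a mutually non-dominated set.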
<p>Since <a class="reference external" href="https://github.com/schlatterbeck/pgapack">pgapack</a> follows a "mix and match" approach to genetic algorithms
we can combine successful strategies for different parts of a genetic
algorithm:</p>
<ul class="simple">
<li><p>We can use Differential Evolution just for the mutation/crossover part
of the genetic algorithm</p></li>
<li><p>We can combine this with the nondominated sorting replacement of
NSGA-II</p></li>
<li><p>We can define some of our objectives as constraints. For our problem
it makes sense to only allow antennas that do not exceed a given
standing-wave ratio. So we do not allow antennas with a VSWR > 1.8.
The necessary constraint function is <span class="math">\(S - 1.8 \le 0\)</span> where
<span class="math">\(S\)</span> is the voltage standing wave ratio (VSWR).</p></li>
</ul>
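<p>The VSWR constraint from the last point can be sketched as a function
whose value must be zero or negative for a feasible antenna. The function
names here are mine, not the antenna-optimizer API; the worst VSWR over all
checked frequencies decides feasibility:</p>

```python
def vswr_constraint(swrs, limit=1.8):
    """Constraint value S - 1.8 <= 0 in the minimized-constraint
    convention; swrs holds the VSWR at each checked frequency."""
    return max(swrs) - limit

def feasible(swrs, limit=1.8):
    # Feasible iff the constraint function is zero or negative.
    return vswr_constraint(swrs, limit) <= 0.0
```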
<p>With this combination we can successfully compute antennas for the 70cm
ham-radio band (430 MHz - 440 MHz). The antenna uses what we call a
folded dipole (the element with the rounded corners) and a straight
element. The measures in the figure represent the lengths optimized by
the genetic algorithm. The two dots in the middle of the folded dipole
element represent the point where the antenna feed-line is connected.</p>
<img alt="/images/2ele.png" src="https://blog.runtux.com/images/2ele.png">
<p>A first example simulates antenna
parameters for the lowest, the highest and the medium frequency. The
gain and forward/backward ratio are computed for the medium frequency
only:</p>
<img alt="/images/twoele-v1.png" src="https://blog.runtux.com/images/twoele-v1.png">
<p>In this graph (a scatter plot) the first objective (the gain) is graphed
against the second objective, the forward/backward ratio. All numbers
are taken from the medium frequency. Each dot represents a simulated
antenna. All antennas have a VSWR lower than 1.8 on the minimum, medium,
and maximum frequency.</p>
<p>After this success I experimented with different settings of the
Differential Evolution parameters. It is well known that Differential
Evolution performs better on decomposable problems with a low
crossover rate, while on non-decomposable problems it performs better with a
high crossover rate. A decomposable problem is one where the different
dimensions can be optimized separately; this was first observed by
Salomon in 1996 <a class="brackets" href="https://blog.runtux.com/posts/2021/12/27/#footnote-7" id="footnote-reference-7" role="doc-noteref"><span class="fn-bracket">[</span>7<span class="fn-bracket">]</span></a>. I had been using a crossover rate of 0.2 and my
hope was that the optimization would be better and faster with a higher
crossover rate. The experiment below uses a crossover rate of 0.9.</p>
<p>In addition I experimented with <em>dither</em>: Differential Evolution
allows the scale factor <span class="math">\(F\)</span>, by which the
difference of two vectors is multiplied, to be varied randomly by a
small amount for each generated variation. In the first implementation I
had set <em>dither</em> to 0; now I used a dither of 0.2. Imagine my surprise
when with these settings I found a completely different Pareto front:</p>
<img alt="/images/twoele-v2.png" src="https://blog.runtux.com/images/twoele-v2.png">
<p>To make it easier to see that the second discovered front completely
dominates the front that was first discovered, I've plotted the two
fronts into a single graph:</p>
<img alt="/images/twoele-v3.png" src="https://blog.runtux.com/images/twoele-v3.png">
<p>Since the second discovered front looks too good to be true (over
the whole frequency range) for a two-element antenna, let's take a look
at what is happening here. First we show the orientation of the antenna and
the computed gain pattern for one of the antennas from the middle of the
lower front:</p>
<img alt="/images/middle-lower-antenna.png" src="https://blog.runtux.com/images/middle-lower-antenna.png" style="width: 45%;">
<img alt="/images/middle-lower-pattern.png" src="https://blog.runtux.com/images/middle-lower-pattern.png" style="width: 45%;">
<p>The antenna has – as already indicated in the Pareto-front graphics
– a gain of about 6.6 dBi and a forward/backward ratio of about
11 dB in the middle of the band at 435 MHz.
The colors on the antenna denote the currents on the antenna structure.
If you want to look at this yourself, here is a link to the <a class="reference external" href="https://blog.runtux.com/content/2021/middle-lower.nec">NEC input
file for antenna 1</a>.</p>
<p>Now let's compare this with one of the antennas of the "orange front",
where we get much better values:</p>
<img alt="/images/middle-higher-antenna.png" src="https://blog.runtux.com/images/middle-higher-antenna.png" style="width: 45%;">
<img alt="/images/middle-higher-pattern.png" src="https://blog.runtux.com/images/middle-higher-pattern.png" style="width: 45%;">
<p>This antenna is in the middle of the Pareto front above and has
a gain of about 6.7 dBi and a forward/backward ratio of about
16 dB in the middle of the band at 435 MHz. Can you spot the
difference from the first antenna? Yes: the maximum gain is in the
opposite direction compared to the first antenna. We say that for the first
antenna the straight element acts as a <em>reflector</em> while for the second
antenna it acts as a <em>director</em>.
If you want to look at this yourself, here is a link to the <a class="reference external" href="https://blog.runtux.com/content/2021/middle-higher.nec">NEC input
file for antenna 2</a>.</p>
<p>Now we look at the frequency plots of gain and forward/backward ratio of
the two antennas; the plot for the first antenna (with the <em>reflector</em>
element) is on the left, while the plot for the antenna with the
<em>director</em> element is on the right.</p>
<img alt="/images/middle-lower-fplot.png" src="https://blog.runtux.com/images/middle-lower-fplot.png" style="width: 45%;">
<img alt="/images/middle-higher-fplot.png" src="https://blog.runtux.com/images/middle-higher-fplot.png" style="width: 45%;">
<p>We see that the forward/backward ratio of the <em>director</em> antenna ranges
from more than 10 dB to more than 25 dB while the <em>reflector</em>
design ranges from 9.3 dB to 11.75 dB. For the minimum gain the
<em>reflector</em> design is slightly better (from 6.35-6.85 dBi vs.
6.3-7.05 dBi). So this needs further experiments. When forcing a
<em>reflector</em> design and changing the evaluation function to return the
<em>minimum</em> gain and F/B ratio over the three (start, middle, end)
frequencies we get:</p>
<img alt="/images/twoele-reflector.png" src="https://blog.runtux.com/images/twoele-reflector.png">
<p>Doing the same for a <em>director</em> design (again with the <em>minimum</em> gain and F/B
ratio over the three frequencies start, middle, end) we get:</p>
<img alt="/images/twoele-director.png" src="https://blog.runtux.com/images/twoele-director.png">
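<p>The change in evaluation used for these last two plots, taking the minimum
gain and minimum F/B ratio over the checked frequencies, can be sketched as
follows (<code class="docutils literal">simulate</code> is a hypothetical stand-in for a PyNEC run, not the
antenna-optimizer API):</p>

```python
def worst_case_objectives(simulate, frequencies):
    """Two objectives for NSGA-II, both maximized: the minimum gain
    and the minimum F/B ratio over the checked frequencies, so a good
    mid-band value cannot hide a bad band edge.  simulate(f) stands
    in for a PyNEC run returning (gain_dbi, fb_db) at frequency f."""
    results = [simulate(f) for f in frequencies]
    return (min(g for g, _ in results), min(fb for _, fb in results))
```

Optimizing the worst case over start, middle and end frequency rewards
designs that hold up over the whole band rather than peaking at 435 MHz.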
<p>With these results, the sweet spot for an antenna to build is probably at
or above 10 dB F/B ratio and a gain of about 6.2 dBi. Going for a few
tenths of a dBi more gain and sacrificing several dB of F/B ratio doesn't
seem sensible. Comparing the director and reflector designs we notice
(contrary to at least <em>my</em> intuition) that the director design has a
better F/B ratio over the whole frequency range. If, however, the antenna
is to be used for relay operation, where the sending frequency (the
relay input) is in the lower half of the frequency range and the relay
output (the receiving frequency) is in the upper half, we will probably
choose a <em>reflector</em> design, because there the gain is higher when sending
and the F/B ratio is higher when receiving (compare the two earlier
gain and F/B ratio plots).</p>
<p>Also note that the optimization algorithm has a hard time finding the
director solutions at all. Only in one of a handful of experiments was I
able to obtain the Pareto front plotted above. The design is more
narrowband than the reflector design and the algorithm often converges
to a local optimum. The larger variation in gain and F/B ratio of
the director design also tells us that it will be harder to build: if
the dimensions are not exactly right, the antenna will probably not
reach the predicted simulation results. The reflector design is a little
more tolerant in this regard.</p>
<aside class="footnote-list brackets">
<aside class="footnote brackets" id="footnote-1" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2021/12/27/#footnote-reference-1">1</a><span class="fn-bracket">]</span></span>
<p>MPI: A message-passing interface standard, version 4.0.
Standard, <a class="reference external" href="https://www.mpi-forum.org/">Message Passing Interface Forum</a>, June 2021.</p>
</aside>
<aside class="footnote brackets" id="footnote-2" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2021/12/27/#footnote-reference-2">2</a><span class="fn-bracket">]</span></span>
<p>Rainer Storn and Kenneth Price. Differential evolution – a simple
and efficient adaptive scheme for global optimization over
continuous spaces. Technical Report TR-95-012, International
Computer Science Institute (ICSI), March 1995.</p>
</aside>
<aside class="footnote brackets" id="footnote-3" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2021/12/27/#footnote-reference-3">3</a><span class="fn-bracket">]</span></span>
<p>Rainer Storn and Kenneth Price. Differential evolution – a simple
and efficient heuristic for global optimization over continuous spaces.
Journal of Global Optimization, 11(4):341–359, December 1997.</p>
</aside>
<aside class="footnote brackets" id="footnote-4" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2021/12/27/#footnote-reference-4">4</a><span class="fn-bracket">]</span></span>
<p>Kenneth V. Price, Rainer M. Storn, and Jouni A. Lampinen.
Differential Evolution: A Practical Approach to Global
Optimization. Springer, Berlin, Heidelberg, 2005.</p>
</aside>
<aside class="footnote brackets" id="footnote-5" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2021/12/27/#footnote-reference-5">5</a><span class="fn-bracket">]</span></span>
<p>Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and T. Meyarivan.
A fast and elitist multiobjective genetic algorithm: NSGA-II.
IEEE Transactions on Evolutionary Computation, 6(2):182–197,
April 2002.</p>
</aside>
<aside class="footnote brackets" id="footnote-6" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2021/12/27/#footnote-reference-6">6</a><span class="fn-bracket">]</span></span>
<p>Kalyanmoy Deb. An efficient constraint handling method for genetic
algorithms. Computer Methods in Applied Mechanics and Engineering,
186(2–4):311–338, June 2000.</p>
</aside>
<aside class="footnote brackets" id="footnote-7" role="doc-footnote">
<span class="label"><span class="fn-bracket">[</span><a role="doc-backlink" href="https://blog.runtux.com/posts/2021/12/27/#footnote-reference-7">7</a><span class="fn-bracket">]</span></span>
<p>Ralf Salomon. Re-evaluating genetic algorithm performance under
coordinate rotation of benchmark functions. a survey of some
theoretical and practical aspects of genetic algorithms.
Biosystems, 39(3):263–278, 1996.</p>
</aside>
</aside>
<p>Tags: documentation, english, genetic algorithms, hamradio, hardware,
howto, open source, optimization. https://blog.runtux.com/posts/2021/12/27/
Mon, 27 Dec 2021 17:05:00 GMT</p>