
Without question, optimization is central to much of human activity. Due to the complexity of many optimization problems, researchers have proposed a plethora of heuristic optimization algorithms. Yet the theoretical basis of black-box search has received relatively little attention compared to the sheer number of heuristic algorithms put forward with little or no theoretical framework to support them.

The No Free Lunch (NFL) theorem states that if one cannot make any prior assumptions about the optimization problem to be solved, no strategy can be expected to perform better than any other. Put another way, a general-purpose universal optimization algorithm is theoretically impossible: the only way one algorithm can outperform another is by being specialized to the specific problem under consideration, at the cost of being outperformed by other algorithms on other sets of functions.
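The classic statement can be checked exhaustively on a toy instance. The sketch below (my own illustration, not the paper's construction) enumerates every function from a three-point search space to {0, 1} and shows that two different fixed search orders achieve identical average best-so-far performance at every budget:

```python
from itertools import product

X = [0, 1, 2]  # search space
Y = [0, 1]     # objective values

# All 2^3 = 8 functions f: X -> Y, represented as tuples with f[x] the value at x.
funcs = list(product(Y, repeat=len(X)))

order_a = [0, 1, 2]  # algorithm A: scan left to right
order_b = [2, 0, 1]  # algorithm B: a different fixed search order

for k in range(1, len(X) + 1):
    # Average (over all functions) of the best value found after k evaluations.
    avg_a = sum(max(f[x] for x in order_a[:k]) for f in funcs) / len(funcs)
    avg_b = sum(max(f[x] for x in order_b[:k]) for f in funcs) / len(funcs)
    assert avg_a == avg_b  # NFL: equal on average over ALL functions
```

Averaged over the full set of functions, neither ordering has an edge; any advantage on some functions is exactly cancelled on others.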

I extended the theory of NFL theorems (including the Sharpened and Focused variants) to benchmarks and to stochastic algorithms. Among the most interesting results: there exists a set of permutations acting on any benchmark under which the average performance of stochastic algorithms is constant, and the performance of any stochastic algorithm on a benchmark can be matched by infinitely many other algorithms on a different benchmark.
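The flavor of the permutation result can be seen in a tiny deterministic sketch (again my own toy example, not the construction from the paper): take one function, form its orbit under all permutations of the search space, and use that orbit as the benchmark. Because the orbit is closed under permutation, any two fixed search orders have the same average performance on it:

```python
from itertools import permutations

X = range(4)
f = (0, 1, 1, 0)  # an arbitrary example function on X

# Benchmark = orbit of f under the permutation action on X (closed by construction).
orbit = {tuple(f[p[x]] for x in X) for p in permutations(X)}

order_a = [0, 1, 2, 3]
order_b = [3, 1, 0, 2]

for k in range(1, len(list(X)) + 1):
    avg_a = sum(max(g[x] for x in order_a[:k]) for g in orbit) / len(orbit)
    avg_b = sum(max(g[x] for x in order_b[:k]) for g in orbit) / len(orbit)
    assert avg_a == avg_b  # average performance over the orbit is order-independent
```

Closure under permutation is what forces the averages to agree here; a benchmark that is not closed in this way can favor one search order over another.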

This work is currently in press at the journal Evolutionary Computation. Download the preprint PDF.

About Me

Edgar A. Duéñez-Guzmán is now a Software Engineer. Previously he was a Postdoctoral Fellow in the Department of Biology at KU Leuven, working with Tom Wenseleers on social evolution in microbes,
and a Research Associate in the Department of Organismic and Evolutionary Biology at Harvard University, working with David Haig on social evolution and imprinting.
Learn more...

Contact Info

E-mail: eaduenez {at} gmail {dot} com