# GECCO Workshop on Black-Box Optimization Benchmarking (BBOB 2022) {#bbob2022page}
Welcome to the web page of the 11th GECCO Workshop on Black-Box
Optimization Benchmarking (BBOB 2022), which took place during GECCO 2022.
> **WORKSHOP ON BLACK-BOX OPTIMIZATION BENCHMARKING (BBOB 2022)**
>
> | held as part of the
> |
> | **2022 Genetic and Evolutionary Computation Conference
> (GECCO-2022)**
> | July 9--13, Boston, MA, USA
> | <http://gecco-2022.sigevo.org>
| Submission opening: February 11, 2022
| Submission deadline: April 11, 2022
| Notification: April 25, 2022
| Camera-ready: May 2, 2022
| Presenter mandatory registration: May 2, 2022
-----------------------------------------------------  ----------------------------------------------------------------------  ---------------------------------------------------------------
[register for news](http://numbbo.github.io/register)  [COCO quick start (scroll down a bit)](https://github.com/numbbo/coco)  [latest COCO release](https://github.com/numbbo/coco/releases/)
-----------------------------------------------------  ----------------------------------------------------------------------  ---------------------------------------------------------------
<br /><br />
Benchmarking optimization algorithms is a crucial part of their design
and their application in practice. Since 2009, the Black-Box Optimization
Benchmarking workshop at GECCO has been a place to discuss recent
advances in benchmarking practices in general and the concrete results
from actual benchmarking experiments with a large variety of (black-box)
optimizers.
The Comparing Continuous Optimizers platform (COCO[^1],
<https://github.com/numbbo/coco>) has been developed in this context to
support algorithm developers and practitioners alike by automating
benchmarking experiments for black-box optimization algorithms on single-
and bi-objective, unconstrained continuous problems in exact and noisy,
as well as expensive and non-expensive scenarios. In 2022, we plan to
provide, for the first time, a new bbob-constrained test suite (work
still in progress).
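
As an illustration, running a full benchmarking experiment with COCO
takes only a few lines of Python. The following is a minimal sketch,
assuming the `cocoex` module from the COCO repository is installed; the
choice of `scipy`'s Nelder-Mead as the optimizer under test and the
result-folder name are placeholders:

```python
import cocoex          # COCO experimentation module
import scipy.optimize  # stands in for the algorithm to be benchmarked

# set up a test suite and an observer that logs all evaluations
suite = cocoex.Suite("bbob", "", "")
observer = cocoex.Observer("bbob", "result_folder: my-experiment")

for problem in suite:               # loop over all problem instances
    problem.observe_with(observer)  # record data for postprocessing
    scipy.optimize.minimize(problem, problem.initial_solution,
                            method="Nelder-Mead")
```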
For the BBOB 2022 edition of the workshop, we invite participants to
discuss all kinds of aspects of (black-box) benchmarking, but we welcome
in particular contributions related to constrained optimization. As in
previous years, presenting benchmarking results on the supported test
suites of COCO is a focus, but submissions are not limited to these
topics:
- single-objective unconstrained problems (bbob)
- single-objective unconstrained problems with noise (bbob-noisy)
- bi-objective unconstrained problems (bbob-biobj)
- large-scale single-objective problems (bbob-largescale) and
- mixed-integer single- and bi-objective problems (bbob-mixint and
bbob-biobj-mixint)
- constrained optimization (bbob-constrained)
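
The names in parentheses are the suite identifiers used by the COCO
code itself. As a small, illustrative sketch (again assuming the
`cocoex` module is installed), the suites known to an installed version
can be listed in Python:

```python
import cocoex

# print the suite names supported by the installed COCO version,
# e.g. ['bbob', 'bbob-biobj', ...], depending on the version
print(cocoex.known_suite_names)
```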
We particularly encourage submissions about algorithms from outside the
evolutionary computation community and papers analyzing the large amount
of already publicly available algorithm data of COCO (see
<https://numbbo.github.io/data-archive/>). As in previous editions, we
will provide source code in various languages (C/C++, Matlab/Octave,
Java, and Python) to benchmark algorithms on the various test suites
mentioned above. Postprocessing the data and comparing algorithm
performance will be equally automated with COCO (up to readily prepared
ACM-compliant LaTeX templates for writing papers).
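
For instance, postprocessing the recorded data and comparing them with
an archived data set is a one-liner in Python. This is an illustrative
sketch only: the folder name is a placeholder, and the trailing `!` in
`BFGS!` picks the first archived data set whose name matches:

```python
import cocopp  # COCO postprocessing module

# process the local experiment folder and compare it with archived
# data; writes plots, tables, and html pages into ppdata/
cocopp.main("exdata/my-experiment BFGS!")
```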
For more details, please see below.
## Updates and News
Stay informed about the latest news regarding the workshop, as well as
releases and bug fixes of the supporting NumBBO/COCO platform, by
registering at <http://numbbo.github.io/register>.
## Accepted papers
- Charles Audet, Sébastien Le Digabel, Ludovic Salomon, Christophe
Tribes: Constrained blackbox optimization with the NOMAD solver on
the COCO constrained test suite
([paper](https://doi.org/10.1145/3520304.3534019))
- Paul Dufossé, Asma Atamna: Benchmarking several strategies to update
the penalty parameters in AL-CMA-ES on the bbob-constrained testbed
([paper](https://doi.org/10.1145/3520304.3534014))
- Mohamed Gharafi: Benchmarking of two implementations of CMA-ES with
diagonal decoding on the bbob test suite
([paper](https://doi.org/10.1145/3520304.3534011))
- Ryoki Hamano, Shota Saito, Masahiro Nomura, Shinichi Shirakawa:
Benchmarking CMA-ES with margin on the bbob-mixint testbed
([paper](https://doi.org/10.1145/3520304.3534043))
- Michael Hellwig, Hans-Georg Beyer: Benchmarking ϵMAg-ES and
BP-ϵMAg-ES on the bbob-constrained testbed
([paper](https://doi.org/10.1145/3520304.3534010))
- Zachary Hoffman, Steve Huntsman: Benchmarking an algorithm for
expensive high-dimensional objectives on the bbob and
bbob-largescale testbeds
([paper](https://doi.org/10.1145/3520304.3534006))
- Duc Manh Nguyen: Benchmarking some variants of the CMAES-APOP using
keeping search points and mirrored sampling combined with active CMA
on the BBOB noiseless testbed
([paper](https://doi.org/10.1145/3520304.3534001))
- Ryoji Tanabe: Benchmarking the Hooke-Jeeves method, MTS-LS1, and
  BSrr on the large-scale BBOB function set
  ([paper](https://doi.org/10.1145/3520304.3533951))
## Submissions
We encourage any submission that is concerned with black-box
optimization benchmarking of continuous optimizers, for example papers
that:
- describe and benchmark new or not-so-new algorithms on one of the
above testbeds,
- compare new or existing algorithms from the COCO/BBOB database[^2],
- analyze the data obtained in previous editions of BBOB[^3], or
- discuss, compare, and improve upon any benchmarking methodology for
  continuous optimizers such as design of experiments, performance
  measures, presentation methods, benchmarking frameworks, test
  functions, ...
Paper submissions are expected to be made through the official GECCO
submission system at <https://ssl.linklings.net/conferences/gecco/>
before the deadline. ACM-compliant LaTeX templates are available in the
GitHub repository under
[code-postprocessing/latex-templates/](https://github.com/numbbo/coco/tree/master/code-postprocessing/latex-templates).
To finalize your submission, we kindly ask you to also submit your data
files (if applicable) by clicking on "Submit a COCO data set" here:
<https://github.com/numbbo/coco/issues/new/choose>. To upload your data
to the web, you might want to use <https://zenodo.org/>, which offers
uploads of data sets up to 50GB in size, or any other provider of online
data storage.
## Supporting material
The basis of the workshop is the Comparing Continuous Optimizers
platform (<https://github.com/numbbo/coco>), written in ANSI C, with
other languages calling the C code. Languages currently available are C,
Java, MATLAB/Octave, and Python.
Most likely, you want to read the [COCO quick
start](https://github.com/numbbo/coco) (scroll down a bit). This page
also provides the code for the benchmark functions[^4], for running the
experiments in C, Java, Matlab, Octave, and Python, and for
postprocessing the experiment data into plots, tables, html pages, and
publisher-compliant PDFs via the provided LaTeX templates. Please refer
to <http://numbbo.github.io/coco-doc/experimental-setup/> for more
details on the general experimental setup for black-box optimization
benchmarking.
The latest (hopefully) stable release of the COCO software can be
downloaded as a whole [here](https://github.com/numbbo/coco/releases/).
Please use at least version v2.5 for running your benchmarking
experiments in 2022.
Documentation of the functions used in the different test suites can be
found here:
- `bbob` suite at
<https://numbbo.github.io/gforge/downloads/download16.00/bbobdocfunctions.pdf>
- `bbob-noisy` suite at
<http://coco.lri.fr/downloads/download15.03/bbobdocnoisyfunctions.pdf>
- `bbob-biobj` suite at <https://numbbo.github.io/bbob-biobj/>
- `bbob-largescale` suite at <https://arxiv.org/pdf/1903.06396.pdf>
- `bbob-mixint` and `bbob-biobj-mixint` suites at
<https://hal.inria.fr/hal-02067932/document> and at
<https://numbbo.github.io/gforge/preliminary-bbob-mixint-documentation/bbob-mixint-doc.pdf>
- `bbob-constrained` suite at:
<http://numbbo.github.io/coco-doc/bbob-constrained/>
## Important Dates
- **2022-04-11** *paper and data submission deadline*
- **2022-04-25** decision notification
- **2022-05-02** deadline camera-ready papers
- **2022-05-02** deadline author registration
- **2022-07-09** or **2022-07-10** workshop
All dates are given in ISO 8601 format (yyyy-mm-dd).
## Organizers
- Anne Auger, Inria and CMAP, Ecole Polytechnique, Institut
Polytechnique de Paris, France
- Dimo Brockhoff, Inria and CMAP, Ecole Polytechnique, Institut
Polytechnique de Paris, France
- Konstantin Dietrich, TU Köln, Germany
- Paul Dufossé, Inria and Thales Defense Mission Systems, France
- Tobias Glasmachers, Ruhr-Universität Bochum, Germany
- Nikolaus Hansen, Inria and CMAP, Ecole Polytechnique, Institut
Polytechnique de Paris, France
- Olaf Mersmann, TU Köln, Germany
- Petr Pošík, Czech Technical University, Czech Republic
- Tea Tušar, Jozef Stefan Institute (JSI), Slovenia
[^1]: Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea
    Tušar, and Dimo Brockhoff. "COCO: A platform for comparing
    continuous optimizers in a black-box setting." Optimization Methods
    and Software (2020): 1-31.
[^2]: The data of previously compared algorithms can be found at
    <https://numbbo.github.io/data-archive> and are easily accessible by
    name in the `cocopp` post-processing and from the Python
    `cocopp.archives` module, or in (fixed) HTML form at
    <https://numbbo.github.io/ppdata-archive>.

[^3]: The data of previously compared algorithms can be found at
    <https://numbbo.github.io/data-archive> and are easily accessible by
    name in the `cocopp` post-processing and from the Python
    `cocopp.archives` module, or in (fixed) HTML form at
    <https://numbbo.github.io/ppdata-archive>.
[^4]: Note that the current release of the new COCO platform does not
    yet contain the original noisy BBOB testbed, so for the time being
    you must use the old code at
    <https://numbbo.github.io/coco/oldcode/bboball15.03.tar.gz> if you
    want to compare your algorithm on the noisy testbed.