<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>BRAVO Workshop @ICCV 2023</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link
href="https://fonts.googleapis.com/css2?family=Merriweather+Sans:ital,wght@0,300;0,400;0,500;0,700;1,300;1,400;1,500&display=swap"
rel="stylesheet">
<link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet"
integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC" crossorigin="anonymous">
<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js"
integrity="sha384-MrcW6ZMFYlzcLA8Nl+NtUVF0sA7MsXsP1UyJoMp4YLEuNSfAP+JcXn/tWtIaxVXM"
crossorigin="anonymous"></script>
<link href="css/style.css" rel="stylesheet">
<!-- social media tags -->
<meta name="description"
content="The BRAVO Workshop @ICCV 2023 presents a unique opportunity for researchers, industry experts, and policymakers to come together and address the critical challenge of trustworthy validation for autonomous vehicle systems on open roads.">
<meta property="og:url" content="https://valeoai.github.io/bravo/">
<meta property="og:type" content="website">
<meta property="og:title" content="BRAVO Workshop @ICCV 2023">
<meta property="og:description"
content="The BRAVO Workshop @ICCV 2023 presents a unique opportunity for researchers, industry experts, and policymakers to come together and address the critical challenge of trustworthy validation for autonomous vehicle systems on open roads.">
<meta property="og:image" content="https://valeoai.github.io/bravo/images/hero/preview.jpg">
<meta name="twitter:card" content="summary_large_image">
<meta property="twitter:domain" content="valeoai.github.io">
<meta property="twitter:url" content="https://valeoai.github.io/bravo/">
<meta name="twitter:title" content="BRAVO Workshop @ICCV 2023">
<meta name="twitter:description"
content="The BRAVO Workshop @ICCV 2023 presents a unique opportunity for researchers, industry experts, and policymakers to come together and address the critical challenge of trustworthy validation for autonomous vehicle systems on open roads.">
<meta name="twitter:image" content="https://valeoai.github.io/bravo/images/hero/preview.jpg">
</head>
<body>
<div id="page-hero" class="hero bg-image vh-100">
<a href="#main-content" class="skip-to-main-content-link">Skip to main content</a>
<div id="header" class="container header-container">
<h1 class="bravo-logo">
<div class="emojiFlip">🚙</div>BRAV<span class="emoji">🌍</span>
</h1>
<h1>roBustness and Reliability of Autonomous <br>Vehicles in the Open-world</h1>
<p class="subtitle">
An <a href="https://iccv2023.thecvf.com/list.of.accepted.workshops-90.php" target="_blank">
ICCV'23 workshop</a> · October 3rd, 2023 · Paris, France
</p>
<nav class="navbar navbar-expand-md navbar-light">
<div class="container-fluid justify-content-center">
<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarNav"
aria-controls="navbarNav" aria-expanded="false" aria-label="toggle navigation bar">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse justify-content-center" id="navbarNav">
<ul class="navbar-nav">
<li class="nav-item">
<a class="nav-link" href="#speakers">Speakers</a>
</li>
<li class="nav-item">
<a class="nav-link" href="#program">Program</a>
</li>
<li class="nav-item">
<a class="nav-link" href="#challenge">Challenge 🔥</a>
</li>
<li class="nav-item">
<a class="nav-link" href="#cfp">Submissions</a>
</li>
<li class="nav-item">
<a class="nav-link" href="#dates">Dates</a>
</li>
<li class="nav-item">
<a class="nav-link" href="#organizers">Organizers</a>
</li>
</ul>
</div>
</div>
</nav>
<!-- <p class="announcement">
The <a href="#challenge">BRAVO challenge </a> is online! 🔥🔥🔥
</p> -->
</div>
</div>
<div id="main" class="container page-container">
<div id="abstract" class="container-md section-container section-first">
<div id="main-content">
<p class="lead">The BRAVO workshop presents a unique opportunity for researchers, industry experts, and
policymakers to come together and address the critical challenge of trustworthy validation for
autonomous vehicle systems on open roads.</p>
<p>Advances in artificial intelligence and computer vision are propelling the rise of highly automated
<abbr title="advanced driver-assistance systems">ADAS</abbr> and <abbr
title="autonomous vehicles">AVs</abbr>, with the potential to revolutionize transportation
and mobility services. However, deploying data-driven safety-critical systems with limited onboard
resources and enduring guarantees on open roads remains a significant challenge.
</p>
<p>To ensure safe deployment, ADAS/AVs must demonstrate the ability to navigate a wide range of driving
conditions, including rare and dangerous situations, severe perturbations, and even adversarial
attacks. Additionally, those capabilities must be substantiated to regulatory bodies, to secure
certification, and to users, to earn their confidence.</p>
<p>The BRAVO workshop seeks to foster collaboration and innovation in developing tools and testbeds for
assessing and enhancing the robustness, generalization power, transparency, and verification of
computer vision models for ADAS/AVs. By working together, we can contribute to a safer, more
efficient, and sustainable future for transportation.</p>
<p>We invite you to join us at the BRAVO workshop to explore solutions and contribute to developing
reliable, robust computer vision for autonomous vehicles. Together, we can shape the future of
transportation, ensuring safety and efficiency for all road users.</p>
</div>
</div>
<div id="speakers" class="container-md section-container">
<h2>Keynote Speakers</h2>
<div id="speakers_container"
class="d-flex flex-wrap justify-content-around align-items-center person-container">
<div class="card person-card">
<a class="person-link" href="https://jamie.shotton.org/" target="_blank">
<div class="card-img-top framed-photo"><img src="images/people/jamie.jpg" alt=""></div>
<div class="card-title person-name">Jamie Shotton</div>
<div class="card-text person-affiliation">Wayve</div>
</a>
</div>
<div class="card person-card">
<a class="person-link" href="http://ai.bu.edu/ksaenko.html" target="_blank">
<div class="card-img-top framed-photo"><img src="images/people/kate.jpg" alt=""></div>
<div class="card-title person-name">Kate Saenko</div>
<div class="card-text person-affiliation">Boston University</div>
</a>
</div>
<div class="card person-card">
<a class="person-link" href="https://cispa.saarland/group/fritz/" target="_blank">
<div class="card-img-top framed-photo"><img src="images/people/mario.jpg" alt=""></div>
<div class="card-title person-name">Mario Fritz</div>
<div class="card-text person-affiliation">CISPA Helmholtz Center<br>for Information Security
</div>
</a>
</div>
<div class="card person-card">
<a class="person-link" href="https://team.inria.fr/rits/membres/raoul-de-charette" target="_blank">
<div class="card-img-top framed-photo"><img src="images/people/raoul.jpg" alt=""></div>
<div class="card-title person-name">Raoul de Charette</div>
<div class="card-text person-affiliation">INRIA</div>
</a>
</div>
<div class="card person-card">
<a class="person-link" href="https://pages.cs.wisc.edu/~sharonli/" target="_blank">
<div class="card-img-top framed-photo"><img src="images/people/sharon.jpg" alt=""></div>
<div class="card-title person-name">Sharon Yixuan Li</div>
<div class="card-text person-affiliation">UW–Madison</div>
</a>
</div>
<div class="card person-card">
<a class="person-link" href="http://www.tatianatommasi.com/" target="_blank">
<div class="card-img-top framed-photo"><img src="images/people/tatiana.jpg" alt=""></div>
<div class="card-title person-name">Tatiana Tommasi</div>
<div class="card-text person-affiliation">Politecnico di Torino</div>
</a>
</div>
</div>
</div>
<div id="program" class="container-md section-container">
<h2>Program</h2>
<p>All quoted times refer to <a href="http://heurelegalefrancaise.fr" target="_blank">CEST</a>.</p>
<div id="program_container" class="container program-container">
<div class="row program-row">
<div class="col-md-2">8:45 - 9:00</div>
<div class="col-md-10">Opening remarks</div>
</div>
<div class="row program-row">
<div class="col-md-2">9:00 - 9:45</div>
<div class="col-md-10">Invited talk #1: “Open-world Scene Understanding with Intuitive Priors” by
Raoul de Charette</div>
</div>
<div class="row program-row">
<div class="col-md-2">9:45 - 10:30</div>
<div class="col-md-10">Invited talk #2: “Real World End-to-End Learnt Driving Models — an
Invitation” by Jamie Shotton</div>
</div>
<div class="row program-row">
<div class="col-md-2">10:30 - 11:15</div>
<div class="col-md-10"><b>Poster session #1 + Coffee break</b></div>
</div>
<div class="row program-row">
<div class="col-md-2">11:15 - 12:00</div>
<div class="col-md-10">Invited talk #3: “3D Open World: Generalize and Recognize Novelty” by Tatiana
Tommasi</div>
</div>
<div class="row program-row">
<div class="col-md-2">12:00 - 12:45</div>
<div class="col-md-10">Invited talk #4: “How to Safely Handle Out-of-Distribution Data in the Open
World: Challenges, Methods, and Path Forward” by Sharon Yixuan Li</div>
</div>
<div class="row program-row">
<div class="col-md-2">12:45 - 13:45</div>
<div class="col-md-10"><b>Lunch break</b></div>
</div>
<div class="row program-row">
<div class="col-md-2">13:45 - 14:15</div>
<div class="col-md-10">Spotlight presentations:</div>
</div>
<div class="row program-row">
<div class="col-md-2"></div>
<div class="col-md-10">GPS-GLASS: Learning Nighttime Semantic Segmentation Using Daytime Video and
GPS data</div>
</div>
<div class="row program-row">
<div class="col-md-2"></div>
<div class="col-md-10">T-FFTRadNet: Object Detection with Swin Vision Transformers from Raw ADC
Radar Signals</div>
</div>
<div class="row program-row">
<div class="col-md-2"></div>
<div class="col-md-10">An Empirical Analysis of Range for 3D Object Detection</div>
</div>
<div class="row program-row">
<div class="col-md-2">14:15 - 14:45</div>
<div class="col-md-10">BRAVO Challenge</div>
</div>
<div class="row program-row">
<div class="col-md-2">14:45 - 15:30</div>
<div class="col-md-10">Invited talk #5: “Fake it till you Make It: Can Synthetic Data Improve Model
Robustness” by Kate Saenko</div>
</div>
<div class="row program-row">
<div class="col-md-2">15:30 - 16:15</div>
<div class="col-md-10"><b>Poster session #2 + Coffee break</b></div>
</div>
<div class="row program-row">
<div class="col-md-2">16:15 - 17:00</div>
<div class="col-md-10">Invited talk #6: “Efficient and Effective Certification for Street Scene
Segmentation” by Mario Fritz</div>
</div>
<div class="row program-row">
<div class="col-md-2">17:00 - 17:45</div>
<div class="col-md-10">Panel discussion + Q&A</div>
</div>
<div class="row program-row">
<div class="col-md-2">17:45 - 17:55</div>
<div class="col-md-10">Closing remarks</div>
</div>
</div>
<p style="margin-top: 2em">Please check the <a
href="https://iccv2023.thecvf.com/paris.convention.center-36700-3-13-7.php"
target="_blank">conference attendance details</a> in advance, including the <a
href="https://iccv2023.thecvf.com/list.of.accepted.workshops-90.php" target="_blank">room
assignments</a> for the workshops.</p>
</div>
<div id="cfp" class="container-md section-container">
<h2>Accepted Works</h2>
<p>Workshop proceedings at
<a target="_blank" href="https://openaccess.thecvf.com/ICCV2023_workshops/BRAVO">TheCVF Open
Access</a>,
<a target="_blank" href="https://www.computer.org/csdl/proceedings/iccvw/2023/1TangBBH5cI">IEEE Computer
Society</a>,
and <a target="_blank" href="https://ieeexplore.ieee.org/xpl/conhome/10350357/proceeding">IEEE
Xplore.</a>
</p>
<h3>Poster session #1 (morning):</h3>
<ul class="accepted-papers">
<li><em>A Glimpse at the First Results of the AutoBehave Project: a Multidisciplinary Approach to
Evaluate the Usage of our Travel Time in Self-Driving Cars.</em> Carlos F Crispim-Junior, Romain
Guesdon, Christophe Jallais, Florent Laroche, Stephanie Souche-Le Corvec, Georges Beurier, Xuguang
Wang, Laure Tougne Rodet. (<a target="_blank"
href="submissions/bravo_abstract_autobehave_project.pdf">Abstract</a>)</li>
<li><em>Anomaly-Aware Semantic Segmentation via Style-Aligned OoD Augmentation.</em> Dan Zhang, Kaspar
Sakmann, William Beluch, Robin Hutmacher, Yumeng Li. (<a target="_blank"
href="https://openaccess.thecvf.com/content/ICCV2023W/BRAVO/html/Zhang_Anomaly-Aware_Semantic_Segmentation_via_Style-Aligned_OoD_Augmentation_ICCVW_2023_paper.html">Full
Paper</a>, <a target="_blank"
href="submissions/bravo_poster_anomaly_aware_semantic_segmentation.pdf">Poster</a>)</li>
<li><em>Camera-Based Road Snow Coverage Estimation.</em> Kai Cordes, Hellward Broszio. (<a
target="_blank"
href="https://openaccess.thecvf.com/content/ICCV2023W/BRAVO/html/Cordes_Camera-Based_Road_Snow_Coverage_Estimation_ICCVW_2023_paper.html">Full
Paper</a>, <a target="_blank"
href="submissions/bravo_poster_road_snow_coverave_estimation.pdf">Poster</a>)
</li>
<li><em>You Can Have Your Ensemble and Run It Too — Deep Ensembles Spread Over Time.</em> Isak P
Meding, Alexander Bodin, Adam Tonderski, Joakim Johnander, Christoffer Petersson, Lennart Svensson.
(<a target="_blank"
href="https://openaccess.thecvf.com/content/ICCV2023W/BRAVO/html/Meding_You_can_have_your_ensemble_and_run_it_too_-_ICCVW_2023_paper.html">Full
Paper</a>)</li>
<li><em>On the Interplay of Convolutional Padding and Adversarial Robustness.</em> Paul Gavrikov, Janis
Keuper. (<a target="_blank"
href="https://openaccess.thecvf.com/content/ICCV2023W/BRAVO/html/Gavrikov_On_the_Interplay_of_Convolutional_Padding_and_Adversarial_Robustness_ICCVW_2023_paper.html">Full
Paper</a>, <a target="_blank"
href="submissions/bravo_poster_interplay_padding_adversarial_robustness.pdf">Poster</a>)</li>
<li><em>Synthetic Dataset Acquisition for a Specific Target Domain.</em> Joshua Niemeijer, Sudhanshu
Mittal, Thomas Brox. (<a target="_blank"
href="https://openaccess.thecvf.com/content/ICCV2023W/BRAVO/html/Niemeijer_Synthetic_Dataset_Acquisition_for_a_Specific_Target_Domain_ICCVW_2023_paper.html">Full
Paper</a>)</li>
<li><em>Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features.</em> Travis Zhang,
Katie Z Luo, Cheng Perng Phoo, Yurong You, Mark Campbell, Bharath Hariharan, Kilian Weinberger.
(<a target="_blank"
href="https://openaccess.thecvf.com/content/ICCV2023W/BRAVO/html/Zhang_Unsupervised_Domain_Adaptation_for_Self-Driving_from_Past_Traversal_Features_ICCVW_2023_paper.html">Full
Paper</a>) </li>
<li><em>What Does Really Count? Estimating Relevance of Corner Cases for Semantic Segmentation in
Automated Driving.</em> Jasmin Breitenstein, Florian Heidecker, Maria Lyssenko, Daniel Bogdoll,
Maarten Bieshaar, Marius Zöllner, Bernhard Sick, Tim Fingscheidt. (<a target="_blank"
href="https://openaccess.thecvf.com/content/ICCV2023W/BRAVO/html/Breitenstein_What_Does_Really_Count_Estimating_Relevance_of_Corner_Cases_for_ICCVW_2023_paper.html">Full
Paper</a>)</li>
</ul>
<h3>Poster session #2 (afternoon):</h3>
<ul class="accepted-papers">
<li><em>A Subdomain-Specific Knowledge Distillation Method for Unsupervised Domain Adaptation in Adverse
Weather Conditions.</em> Yejin Lee, Gyuwon Choi, Donggon Jang, Daeshik Kim (<a target="_blank"
href="submissions/bravo_abstract_subdomain_specific_distillation.pdf">Abstract</a>, <a
target="_blank" href="submissions/bravo_poster_subdomain_specific_distillation.pdf">Poster</a>)
</li>
<li><em>An Empirical Analysis of Range for 3D Object Detection.</em> Neehar Peri, Mengtian Li, Benjamin
Wilson, Yu-Xiong Wang, James Hays, Deva Ramanan. (<a target="_blank"
href="https://openaccess.thecvf.com/content/ICCV2023W/BRAVO/html/Peri_An_Empirical_Analysis_of_Range_for_3D_Object_Detection_ICCVW_2023_paper.html">Full
Paper</a>, <a target="_blank"
href="submissions/bravo_poster_analysis_range_3d_detection.pdf">Poster</a>)</li>
<li><em>Fusing Pseudo Labels with Weak Supervision for Dynamic Traffic Scenarios.</em> Harshith Mohan
Kumar, Sean Lawrence. (<a target="_blank"
href="submissions/bravo_abstract_fusing_pseudo_labels.pdf">Abstract</a>)</li>
<li><em>GPS-GLASS: Learning Nighttime Semantic Segmentation Using Daytime Video and GPS data.</em>
Hongjae Lee, Changwoo Han, Jun-Sang Yoo, Seung-Won Jung. (<a target="_blank"
href="https://openaccess.thecvf.com/content/ICCV2023W/BRAVO/html/Lee_GPS-GLASS_Learning_Nighttime_Semantic_Segmentation_Using_Daytime_Video_and_GPS_ICCVW_2023_paper.html">Full
Paper</a>)</li>
<li><em>Identifying Systematic Errors in Object Detectors with the SCROD Pipeline.</em> Valentyn
Boreiko, Matthias Hein, Jan Hendrik Metzen. (<a target="_blank"
href="https://openaccess.thecvf.com/content/ICCV2023W/BRAVO/html/Boreiko_Identifying_Systematic_Errors_in_Object_Detectors_with_the_SCROD_Pipeline_ICCVW_2023_paper.html">Full
Paper</a>)</li>
<li><em>Introspection of 2D Object Detection using Processed Neural Activation Patterns in Automated
Driving Systems.</em> Hakan Y Yatbaz, Mehrdad Dianati, Konstantinos Koufos, Roger Woodman. (<a
target="_blank"
href="https://openaccess.thecvf.com/content/ICCV2023W/BRAVO/html/Yatbaz_Introspection_of_2D_Object_Detection_Using_Processed_Neural_Activation_Patterns_ICCVW_2023_paper.html">Full
Paper</a>) </li>
<li><em>On Offline Evaluation of 3D Object Detection for Autonomous Driving.</em> Tim Schreier, Katrin
Renz, Andreas Geiger, Kashyap Chitta. (<a target="_blank"
href="https://openaccess.thecvf.com/content/ICCV2023W/BRAVO/html/Schreier_On_Offline_Evaluation_of_3D_Object_Detection_for_Autonomous_Driving_ICCVW_2023_paper.html">Full
Paper</a>)</li>
<li><em>Sensitivity analysis of AI-based algorithms for autonomous driving on optical wavefront
aberrations induced by the windshield.</em> Dominik W Wolf, Markus Ulrich, Nikhil Kapoor. (<a
target="_blank"
href="https://openaccess.thecvf.com/content/ICCV2023W/BRAVO/html/Wolf_Sensitivity_Analysis_of_AI-Based_Algorithms_for_Autonomous_Driving_on_Optical_ICCVW_2023_paper.html">Full
Paper</a>, <a target="_blank"
href="submissions/bravo_poster_sensitivity_analysis_optical_wavefront.pdf">Poster</a>)</li>
<li><em>T-FFTRadNet: Object Detection with Swin Vision Transformers from Raw ADC Radar Signals.</em>
James Giroux, Martin Bouchard, Robert Laganiere. (<a target="_blank"
href="https://openaccess.thecvf.com/content/ICCV2023W/BRAVO/html/Giroux_T-FFTRadNet_Object_Detection_with_Swin_Vision_Transformers_from_Raw_ADC_ICCVW_2023_paper.html">Full
Paper</a>)</li>
</ul>
<h3>Reviewers</h3>
<p>We extend our warmest thanks to the team of reviewers who made this call for contributions possible:</p>
<div class="container mt-5">
<div class="row">
<div class="col-lg-6">
<!-- Column 1: Names and Institutions (1st half) -->
<div class="d-flex flex-column">
<div class="reviewer d-flex justify-content-between"><span>Adrien Lafage</span><span>ENSTA
Paris</span></div>
<div class="reviewer d-flex justify-content-between"><span>Alexandre
Boulch</span><span>Valeo.ai</span></div>
<div class="reviewer d-flex justify-content-between"><span>Alexandre
Ramé</span><span>LIP6</span></div>
<div class="reviewer d-flex justify-content-between"><span>Antoine
Saporta</span><span>Meero</span></div>
<div class="reviewer d-flex justify-content-between"><span>Antonin
Vobecky</span><span>Valeo.ai / CTU, FEE / CIIRC</span></div>
<div class="reviewer d-flex justify-content-between"><span>Arthur
Ouaknine</span><span>McGill University / Mila</span></div>
<div class="reviewer d-flex justify-content-between"><span>Cédric
Rommel</span><span>Valeo.ai</span></div>
<div class="reviewer d-flex justify-content-between"><span>Charles
Corbiere</span><span>Valeo.ai</span></div>
<div class="reviewer d-flex justify-content-between"><span>David
Hurych</span><span>Valeo.ai</span></div>
<div class="reviewer d-flex justify-content-between"><span>Dmitry
Kangin</span><span>Lancaster University</span></div>
<div class="reviewer d-flex justify-content-between"><span>Eduard
Zamfir</span><span>University of Wurzburg</span></div>
<div class="reviewer d-flex justify-content-between"><span>Emanuel
Aldea</span><span>Paris-Saclay University</span></div>
<div class="reviewer d-flex justify-content-between"><span>Emilie
Wirbel</span><span>Nvidia</span></div>
<div class="reviewer d-flex justify-content-between"><span>Fabio
Arnez</span><span>Université Paris-Saclay, CEA, List</span></div>
<div class="reviewer d-flex justify-content-between"><span>Fabio
Pizzati</span><span>University of Oxford</span></div>
<div class="reviewer d-flex justify-content-between"><span>Fredrik
Gustafsson</span><span>Uppsala University</span></div>
<div class="reviewer d-flex justify-content-between"><span>Himalaya
Jain</span><span>Helsing</span></div>
</div>
</div>
<div class="col-lg-6">
<!-- Column 2: Names and Institutions (2nd half) -->
<div class="d-flex flex-column">
<div class="reviewer d-flex justify-content-between"><span>Krzysztof
Lis</span><span>EPFL</span></div>
<div class="reviewer d-flex justify-content-between"><span>Loïck
Chambon</span><span>Valeo.ai</span></div>
<div class="reviewer d-flex justify-content-between"><span>Matej
Grcić</span><span>University of Zagreb</span></div>
<div class="reviewer d-flex justify-content-between"><span>Matthieu
Cord</span><span>Valeo.ai / Sorbonne University</span></div>
<div class="reviewer d-flex justify-content-between"><span>Maximilian
Jaritz</span><span>Amazon</span></div>
<div class="reviewer d-flex justify-content-between"><span>Mickael
Chen</span><span>Valeo.ai</span></div>
<div class="reviewer d-flex justify-content-between"><span>Nazir Nayal</span><span>Koç
University</span></div>
<div class="reviewer d-flex justify-content-between"><span>Olivier
Laurent</span><span>Université Paris-Saclay</span></div>
<div class="reviewer d-flex justify-content-between"><span>Oriane
Siméoni</span><span>Valeo.ai</span></div>
<div class="reviewer d-flex justify-content-between"><span>Patrick
Pérez</span><span>Valeo.ai</span></div>
<div class="reviewer d-flex justify-content-between"><span>Pau de Jorge
Aranda</span><span>University of Oxford</span></div>
<div class="reviewer d-flex justify-content-between"><span>Raffaello
Camoriano</span><span>Politecnico di Torino</span></div>
<div class="reviewer d-flex justify-content-between"><span>Raoul de
Charette</span><span>Inria</span></div>
<div class="reviewer d-flex justify-content-between"><span>Renaud
Marlet</span><span>Valeo.ai / École des Ponts ParisTech</span></div>
<div class="reviewer d-flex justify-content-between"><span>Riccardo Volpi</span><span>Naver
Labs</span></div>
<div class="reviewer d-flex justify-content-between"><span>Spyros
Gidaris</span><span>Valeo.ai</span></div>
<div class="reviewer d-flex justify-content-between"><span>Suha
Kwak</span><span>POSTECH</span></div>
</div>
</div>
</div>
</div>
<p style="margin-top: 1em">...and three other reviewers who preferred to remain anonymous.</p>
<h2 style="margin-top: 1em;"><s>Call for Contributions</s></h2>
<p>We invite participants to submit their work to the BRAVO Workshop as full papers or extended abstracts.
</p>
<h3>Full-Paper Submissions</h3>
<p>Full papers must present original research, not published elsewhere, and follow the <a
href="https://iccv2023.thecvf.com/submission.guidelines-361600-2-20-16.php" target="_blank">ICCV
main conference format</a> with a length of 4 to 8 pages (extra pages with references only are
allowed). Supplemental materials are <b>not</b> allowed. Accepted full papers will be included in the
conference proceedings.
</p>
<h3>Extended Abstract Submissions</h3>
<p>We welcome extended abstracts, which may present work of a more speculative or preliminary nature that
is not yet suited to a full-length paper. Authors are also welcome to submit extended abstracts for
previously or concomitantly published works that could foster the workshop objectives.
</p>
<p>Extended abstracts must have no more than 1000 words, in addition to a single illustration and
references. We suggest authors use the <a href="files/extended-abstract-template.zip" download>extended
abstract template</a> provided.</p>
<p>Accepted extended abstracts will be presented <b>without</b> inclusion in the proceedings.</p>
<h3>Topics of Interest</h3>
<p>The workshop welcomes submissions on all topics related to robustness, generalization, transparency, and
verification of computer vision for autonomous driving systems. Topics of interest include but are not
limited to:</p>
<ol>
<li>Robustness & Domain Generalization</li>
<li>Domain Adaptation & Shift</li>
<li>Long-tail Recognition</li>
<li>Perception in Adverse Conditions</li>
<li>Out-of-distribution Detection</li>
<li>Applications of Uncertainty Quantification</li>
<li>Monitoring, Failure Prediction & Anomaly Detection</li>
<li>Confidence Calibration</li>
<li>Image Enhancement Techniques</li>
</ol>
<h3>Guidelines</h3>
<p>All submissions must be made through the <a href="https://cmt3.research.microsoft.com/BRAVO2023"
target="_blank">CMT system</a>, before the <a href="#dates">deadline</a>.</p>
<p>The BRAVO Workshop reviewing is <b>double-blind</b>. Authors of all submissions must
follow the <a href="https://iccv2023.thecvf.com/policies-361500-2-20-15.php" target="_blank">main
conference policy on anonymity</a>. We encourage authors to follow the <a
href="https://iccv2023.thecvf.com/suggested.practices.for.authors-362500-2-24-25.php"
target="_blank"> ICCV 2023 Suggested Practices for Authors</a>, except in what concerns supplemental
material, which is not allowed.</p>
<p>While we encourage reproducibility, we welcome preliminary/speculative works where source code or data
may need more time before broad disclosure. We still expect evidence of ethics clearance if the
submission uses novel data sources from human subjects.</p>
<p>BRAVO Workshop reviewers must follow the <a
href="https://iccv2023.thecvf.com/ethics.for.reviewing.papers-362100-2-16-21.php"
target="_blank">ICCV 2023 Ethics Guidelines for Reviewers</a>. We encourage reviewers to follow the
<a href="https://iccv2023.thecvf.com/reviewer.guidelines-362000-2-16-20.php" target="_blank">ICCV 2023
Reviewer Guidelines</a>, and <a
href="https://iccv2023.thecvf.com/additional.tips.for.writing.good.reviews-362200-2-16-22.php"
target="_blank">Tips to Write Good Reviews</a>.
</p>
<h3>Camera-ready instructions</h3>
<p>The submission guidelines are detailed <a
href="https://docs.google.com/document/d/1Bo6JagywppxKc1TbGaCpnip3PXxmDaW3KU8coUlM98E/edit?usp=sharing"
target="_blank">here</a>.
</p>
<h3>Posters</h3>
<p>We will organize two poster sessions, in the morning and afternoon, inside the workshop room. All
accepted works will be assigned to one of the poster sessions, including those selected for the oral
spotlights.</p>
<p><b>The poster size for workshops differs from the main conference's.</b> The panel
size will be 95.4 cm wide × 138.8 cm tall (aspect ratio 0.69:1). A0 paper in portrait orientation will fit
the panel with some margin.</p>
<p>The ICCV organizers partnered with an on-site printing service from which you may collect your printed
poster: more information at the <a
href="https://iccv2023.thecvf.com/local.printing.service-364200-3-38-42.php" target="_blank">
main conference attendance info site</a>.</p>
</div>
<div id="dates" class="container-md section-container">
<h2>Important Dates</h2>
<div id="dates_container" class="container dates-container">
<div class="row dates-row">
<div class="col-md-3 col-lg-2">
<span class="announce_date">2023-07-20 Thu</span>
</div>
<div class="col-md-9">
Contributed submissions deadline (23:59 GMT)
</div>
</div>
<div class="row dates-row">
<div class="col-md-3 col-lg-2">
<span class="announce_date">2023-08-03 Thu</span>
</div>
<div class="col-md-9">
Acceptance of contributions announced to authors
</div>
</div>
<div class="row dates-row">
<div class="col-md-3 col-lg-2">
<span class="announce_date">2023-08-20 Sun</span>
</div>
<div class="col-md-9">
Full-paper camera-ready submission deadline
</div>
</div>
<div class="row dates-row">
<div class="col-md-3 col-lg-2">
<span class="announce_date">2023-09-15 Fri</span>
</div>
<div class="col-md-9">
Extended-abstract final-version submission deadline
</div>
</div>
<div class="row dates-row">
<div class="col-md-3 col-lg-2">
<span class="announce_date">2023-10-03 Tue</span>
</div>
<div class="col-md-9">
Workshop day (full day)
</div>
</div>
</div>
</div>
<div id="challenge" class="container-md section-container">
<h2>BRAVO Challenge</h2>
<p>In conjunction with the <a href="https://uncertainty-cv.github.io/2024/">Workshop on Uncertainty
Quantification for Computer Vision</a>, we are organizing a challenge on the robustness of
autonomous driving in the open world. The 2024 BRAVO Challenge aims at benchmarking segmentation models
on urban scenes undergoing diverse forms of natural degradation and realistic-looking synthetic
corruptions.</p>
<p>For more information, please check the <a href="https://github.com/valeoai/bravo_challenge">BRAVO
Challenge Repository</a> and the <a
href="https://benchmarks.elsa-ai.eu/?ch=1&com=introduction">Challenge Task Website at
ELLIS/ELSA</a>.</p>
<h3>Acknowledgements</h3>
<p>We extend our heartfelt gratitude to the authors of
<a href="https://acdc.vision.ee.ethz.ch/contact/" target="_blank">ACDC</a>,
<a href="https://segmentmeifyoucan.com/" target="_blank">SegmentMeIfYouCan</a> and
<a href="https://arxiv.org/abs/2108.00968" target="_blank">Out-of-context Cityscapes</a> for generously
granting us
permission to repurpose their benchmarking data. We are also thankful to the authors of
<a href="https://github.com/astra-vision/GuidedDisent" target="_blank">GuidedDisent</a> and
<a href="https://github.com/google-research/google-research/tree/master/flare_removal"
target="_blank">Flare Removal</a>
for providing the excellent toolboxes that helped synthesize realistic-looking raindrops and light
flares. These contributions collectively made it possible to create BRAVO, a unified benchmark
for robustness in autonomous driving.
</p>
<p>We are excited to unveil the BRAVO Challenge as an initiative within
<a href="https://www.elsa-ai.eu/" target="_blank">ELSA — European Lighthouse on Secure and Safe AI</a>,
a network of excellence funded by the European Union. The BRAVO Challenge is officially featured on the
<a href="https://benchmarks.elsa-ai.eu/" target="_blank">ELSA Benchmarks website</a> as
the Autonomous Driving/Robust Perception task.
</p>
</div>
<div id="organizers" class="container-md section-container">
<h2>Organizers</h2>
<div id="organizers_container"
class="d-flex flex-wrap justify-content-around align-items-center person-container">
<div class="card person-card">
<a class="person-link" href="https://tuanhungvu.github.io/" target="_blank">
<div class="card-img-top framed-photo"><img src="images/people/tuanhung.jpg" alt=""></div>
<div class="card-title person-name">Tuan-Hung Vu</div>
<div class="card-text person-affiliation">Valeo.AI</div>
</a>
</div>
<div class="card person-card">
<a class="person-link" href="https://abursuc.github.io/" target="_blank">
<div class="card-img-top framed-photo"><img src="images/people/andrei.jpg" alt=""></div>
<div class="card-title person-name">Andrei Bursuc</div>
<div class="card-text person-affiliation">Valeo.AI</div>
</a>
</div>
<div class="card person-card">
<a class="person-link" href="https://ptrckprz.github.io/" target="_blank">
<div class="card-img-top framed-photo"><img src="images/people/patrick.jpg" alt=""></div>
<div class="card-title person-name">Patrick Pérez</div>
<div class="card-text person-affiliation">Valeo.AI</div>
</a>
</div>
<div class="card person-card">
<a class="person-link" href="https://eduardovalle.com/" target="_blank">
<div class="card-img-top framed-photo"><img src="images/people/eduardo.jpg" alt=""></div>
<div class="card-title person-name">Eduardo Valle</div>
<div class="card-text person-affiliation">Valeo.AI</div>
</a>
</div>
<div class="card person-card">
<a class="person-link" href="https://vas.mpi-inf.mpg.de/dengxin/" target="_blank">
<div class="card-img-top framed-photo"><img src="images/people/dengxin.jpg" alt=""></div>
<div class="card-title person-name">Dengxin Dai</div>
<div class="card-text person-affiliation">MPI</div>
</a>
</div>
<div class="card person-card">
<a class="person-link" href="https://mpi-inf.mpg.de/~schiele" target="_blank">
<div class="card-img-top framed-photo"><img src="images/people/bernt.jpg" alt=""></div>
<div class="card-title person-name">Bernt Schiele</div>
<div class="card-text person-affiliation">MPI</div>
</a>
</div>
<div class="card person-card">
<a class="person-link" href="https://fr.linkedin.com/in/emilie-wirbel-3ba43233" target="_blank">
<div class="card-img-top framed-photo"><img src="images/people/emilie.jpg" alt=""></div>
<div class="card-title person-name">Emilie Wirbel</div>
<div class="card-text person-affiliation">NVIDIA</div>
</a>
</div>
</div>
</div>
<div id="supporters" class="container-md section-container">
<p>Supported by</p>
<div id="supporters_container"
class="d-flex flex-wrap justify-content-start align-items-start person-container">
<div class="card supporter-card">
<a class="supporter-link" href="https://elsa-ai.eu/" target="_blank">
<div class="card-body"><img class="supporter-image" src="images/logos/elsa.png" alt=""></div>
</a>
</div>
</div>
</div>
<div id="colophon" class="container-md section-colophon">
<p>Original photo by Kai Gradert on Unsplash, modified to illustrate Stable Diffusion augmentations.</p>
</div>
</div>
</body>
</html>