<!DOCTYPE html>
<html>
<head>
<link rel="icon" href="vLAR/imgs/polyu_icon.png">
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width,initial-scale=1,minimum-scale=1,maximum-scale=1,user-scalable=no" />
<meta name="generator" content="HTML Tidy for Linux/x86 (vers 11 February 2007), see www.w3.org">
<title>Research | vLAR Group</title>
<script type="text/javascript" src="vLAR/js/jquery-1.11.0.min.js"></script>
<script type="text/javascript" src="vLAR/js/vlar.js"></script>
<link href="vLAR/css/mui.min.css" type="text/css" rel="stylesheet">
<link href="vLAR/css/viewer.css" rel="stylesheet">
<link href="vLAR/css/vlar.css" type="text/css" rel="stylesheet">
<link href="vLAR/css/vlar.s1.css" type="text/css" rel="stylesheet" media="(max-width:800px)">
<link href="vLAR/css/vlar.s2.css" type="text/css" rel="stylesheet" media="(min-width:801px) and (max-width:1280px)">
<link href="vLAR/css/vlar.s3.css" type="text/css" rel="stylesheet" media="(min-width:1281px)">
</head>
<body>
<!--top begin-->
<div id="vlar-top">
<div id="vlar-top-con">
<div id="vlar-top-logo"><img src="vLAR/imgs/logo.png" /> </div>
<!--navigation bar begin-->
<div id="vlar-top-menus">
<ul class="vlar-top-menu-u">
<li><a href="index.html">Home</a></li>
<li class="focusLi">Research</li>
<li><a href="people.html">People</a></li>
</ul>
</div>
<!--navigation bar end-->
<div class="vlar-clear"></div>
</div>
</div>
<!--top end-->
<!--slide-out side navigation begin-->
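<!-- setSlidemenu(1) opens and setSlidemenu(0) closes the slide-out menu below; the function is presumably defined in vLAR/js/vlar.js. -->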
<div id="vlar-top-menus-laermenuTag" onclick="setSlidemenu(1)"><span class="mui-icon mui-icon-bars"></span></div>
<div id="vlar-layermenu-box-bg" onClick="setSlidemenu(0)"></div>
<div id="vlar-layermenu-box">
<ul class="vlar-top-menu-u">
<li><a href="index.html">Home</a></li>
<li class="focusLi">Research</li>
<li><a href="people.html">People</a></li>
</ul>
</div>
<!--slide-out side navigation end-->
<div id="vlar-contents">
<!--content group begin-->
<div class="vlar-pap-cont-item">
<div class="vlar-pap-cont-item-t">Research Themes</div>
<div class="vlar-pap-cont-item-c">
<!--single entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-import-content">
<ul>
<li><strong>ML</strong>: unsupervised learning, disentangled representation learning, zero-shot learning, etc.</li>
<li><strong>CV</strong>: 3D reconstruction, 3D semantic/instance segmentation, neural rendering, etc.</li>
<li><strong>Robotics</strong>: interaction with 3D scenes, autonomous navigation, path planning, etc.</li>
</ul>
</div>
<!--single entry end-->
</div>
</div>
<!--content group end-->
<!--content group begin-->
<div class="vlar-pap-cont-item">
<div class="vlar-pap-cont-item-t">Research Papers</div>
<div class="vlar-pap-cont-item-c">
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/24_icml_osn.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/2407.05615">
<div class="papertitle">OSN: Infinite Representations of Dynamic 3D Scenes from Monocular Videos</div></a>
Z. Song, J. Li, <strong>B. Yang</strong> <br>
<em>International Conference on Machine Learning (ICML)</em>, 2024
<br>
<a href="https://arxiv.org/abs/2407.05615">arXiv</a> /
<a href="https://github.com/vLAR-group/OSN"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=vLAR-group&repo=OSN&type=star&count=true&size=small" frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<p></p>
<p align="justify" style="font-size:13px">We present the first framework to represent dynamic 3D scenes in infinitely many ways from a monocular RGB video.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/22_neurips_ogc.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://ieeexplore.ieee.org/abstract/document/10551495">
<div class="papertitle">Unsupervised 3D Object Segmentation of Point Clouds by Geometry Consistency</div></a>
Z. Song, <strong>B. Yang</strong><br>
<em>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)</em>, 2024 <font color="red"><strong>(IF=20.8)</strong></font><br>
<a href="https://ieeexplore.ieee.org/abstract/document/10551495">IEEE Xplore</a> /
<a href="https://github.com/vLAR-group/OGC"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=vLAR-group&repo=OGC&type=star&count=true&size=small"
frameborder="0" scrolling="0" width="100px" height="20px"></iframe>
<p align="justify" style="font-size:13px">The journal version of our OGC at NeurIPS 2022. More experiments and analysis are included.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/24_icra_dyncatch.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://ieeexplore.ieee.org/document/10611106">
<div class="papertitle">Learning to Catch Reactive Objects with a Behavior Predictor</div></a>
K. Lu, JX. Zhong, <strong>B. Yang</strong>, B. Wang, A. Markham <br>
<em>IEEE International Conference on Robotics and Automation (ICRA)</em>, 2024
<br>
<a href="https://kl-research.github.io/dyncatch">Project Page</a>
<p></p>
<p align="justify" style="font-size:13px">We present a novel framework to track and catch reactive objects in a dynamic 3D world.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/24_ijcv_unobjseg.png" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/2312.04947">
<div class="papertitle">Benchmarking and Analysis of Unsupervised Object Segmentation from Real-World Single Images</div></a>
Y. Yang, <strong>B. Yang</strong> <br>
<em>International Journal of Computer Vision (IJCV)</em>, 2024 <font color="red"><strong>(IF=11.6)</strong></font><br>
<a href="https://arxiv.org/abs/2312.04947">arXiv</a> /
<a href="https://link.springer.com/article/10.1007/s11263-023-01973-w">Springer Access</a> /
<a href="https://github.com/vLAR-group/UnsupObjSeg"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=vLAR-group&repo=UnsupObjSeg&type=star&count=true&size=small"
frameborder="0" scrolling="0" width="100px" height="20px"></iframe>
<p></p>
<p align="justify" style="font-size:13px">The journal version of our paper at NeurIPS 2022. Complete benchmark and analysis are included.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/23_neurips_nvfi.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/2312.06398">
<div class="papertitle">NVFi: Neural Velocity Fields for 3D Physics Learning from Dynamic Videos</div></a>
J. Li, Z. Song, <strong>B. Yang</strong> <br>
<em>Advances in Neural Information Processing Systems (NeurIPS)</em>, 2023
<br>
<a href="https://arxiv.org/abs/2312.06398">arXiv</a> /
<a href="https://github.com/vLAR-group/NVFi"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=vLAR-group&repo=NVFi&type=star&count=true&size=small" frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<p></p>
<p align="justify" style="font-size:13px">We present a novel framework to simultaneously learn the geometry, appearance, and physical velocity of 3D scenes.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/23_neurips_raydf.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/2310.19629">
<div class="papertitle">RayDF: Neural Ray-surface Distance Fields with Multi-view Consistency</div></a>
Z. Liu, <strong>B. Yang*</strong>, Y. Luximon, A. Kumar, J. Li<br>
<em>Advances in Neural Information Processing Systems (NeurIPS)</em>, 2023
<br>
<a href="https://arxiv.org/abs/2310.19629">arXiv</a> /
<a href="https://vlar-group.github.io/RayDF.html">Project Page</a> /
<a href="https://github.com/vLAR-group/RayDF"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=vLAR-group&repo=RayDF&type=star&count=true&size=small" frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<br>(* indicates corresponding author)
<p align="justify" style="font-size:13px">We propose a novel ray-based 3D shape representation, achieving a 1000x faster speed in rendering.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/23_cvpr_growsp.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/2305.16404">
<div class="papertitle">GrowSP: Unsupervised Semantic Segmentation of 3D Point Clouds</div></a>
Z. Zhang, <strong>B. Yang*</strong>, B. Wang, B. Li<br>
<em>IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</em>, 2023
<br>
<a href="https://arxiv.org/abs/2305.16404">arXiv</a> /
<a href="https://github.com/vLAR-group/GrowSP"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=vLAR-group&repo=GrowSP&type=star&count=true&size=small" frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<br>(* indicates corresponding author)
<p align="justify" style="font-size:13px">We propose the first unsupervised 3D semantic segmentation method, learning from growing superpoints in point clouds.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/23_iclr_dmnerf.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/2208.07227">
<div class="papertitle">DM-NeRF: 3D Scene Geometry Decomposition and Manipulation from 2D Images</div></a>
B. Wang, L. Chen, <strong>B. Yang*</strong><br>
<em>International Conference on Learning Representations (ICLR)</em>, 2023
<br>
<a href="https://arxiv.org/abs/2208.07227">arXiv</a> /
<a href="https://twitter.com/vLAR_Group/status/1564216685640695808?s=20&t=h9Bd66jzr-mDM8eSXbMelg">Tweet</a> /
<a href="https://github.com/vLAR-group/DM-NeRF"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=vLAR-group&repo=DM-NeRF&type=star&count=true&size=small" frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<br>(* indicates corresponding author)
<p align="justify" style="font-size:13px">We introduce a single pipeline to simultaneously reconstruct, decompose, manipulate and render complex 3D scenes.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/23_icra_decouple_maniskill.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://deerkk.github.io/DecoupleManiSkill">
<div class="papertitle">Decoupling Skill Learning from Robotic Control for Generalizable Manipulation</div></a>
K. Lu, <strong>B. Yang</strong>, B. Wang, A. Markham<br>
<em>IEEE International Conference on Robotics and Automation (ICRA)</em>, 2023
<br>
<a href="https://deerkk.github.io/DecoupleManiSkill">Project Page</a>
<p></p>
<p align="justify" style="font-size:13px">We propose a generalizable framework for robotic manipulation.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/22_neurips_ogc.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/2210.04458">
<div class="papertitle">OGC: Unsupervised 3D Object Segmentation from Rigid Dynamics of Point Clouds</div></a>
Z. Song, <strong>B. Yang</strong><br>
<em>Advances in Neural Information Processing Systems (NeurIPS)</em>, 2022
<br>
<a href="https://arxiv.org/abs/2210.04458">arXiv</a> /
<a href="https://www.youtube.com/watch?v=dZBjvKWJ4K0">Video</a> /
<a href="https://github.com/vLAR-group/OGC"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=vLAR-group&repo=OGC&type=star&count=true&size=small" frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<p></p>
<p align="justify" style="font-size:13px">We introduce the first unsupervised 3D object segmentation method on point clouds.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/22_neurips_unsupobjseg.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/2210.02324">
<div class="papertitle">Promising or Elusive? Unsupervised Object Segmentation from Real-world Single Images</div></a>
Y. Yang, <strong>B. Yang</strong><br>
<em>Advances in Neural Information Processing Systems (NeurIPS)</em>, 2022
<br>
<a href="https://arxiv.org/abs/2210.02324">arXiv</a> /
<a href="https://vlar-group.github.io/UnsupObjSeg.html">Project Page</a> /
<a href="https://github.com/vLAR-group/UnsupObjSeg"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=vLAR-group&repo=UnsupObjSeg&type=star&count=true&size=small" frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<p></p>
<p align="justify" style="font-size:13px">We systematically investigate the effectiveness of existing unsupervised models on challenging real-world images.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/22_eccv_sqn.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/2104.04891">
<div class="papertitle">SQN: Weakly-Supervised Semantic Segmentation of Large-Scale 3D Point Clouds</div></a>
Q. Hu, <strong>B. Yang*</strong>, G. Fang, Y. Guo, A. Leonardis, N. Trigoni, A. Markham<br>
<em>European Conference on Computer Vision (ECCV)</em>, 2022
<br>
<a href="https://arxiv.org/abs/2104.04891">arXiv</a> /
<a href="https://github.com/QingyongHu/SQN"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=QingyongHu&repo=SQN&type=star&count=true&size=small" frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<br>(* indicates corresponding author)
<p align="justify" style="font-size:13px">We introduce a simple weakly-supervised neural network to learn precise 3D semantics for large-scale point clouds.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/22_tpami_spinnet.png" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://ieeexplore.ieee.org/abstract/document/9792207">
<div class="papertitle">You Only Train Once: Learning General and Distinctive 3D Local Descriptors</div></a>
S. Ao, Y. Guo, Q. Hu, <strong>B. Yang</strong>, A. Markham, Z. Chen<br>
<em>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)</em>, 2022 <font color="red"><strong>(IF=16.39)</strong></font><br>
<a href="https://ieeexplore.ieee.org/abstract/document/9792207">IEEE Xplore</a> /
<a href="https://github.com/QingyongHu/SpinNet"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=QingyongHu&repo=SpinNet&type=star&count=true&size=small" frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<p></p>
<p align="justify" style="font-size:13px">The journal version of our SpinNet. More experiments and analysis are included.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/22_arXiv_rangeUDF.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/2204.09138">
<div class="papertitle">RangeUDF: Semantic Surface Reconstruction from 3D Point Clouds</div></a>
B. Wang, Z. Yu, <strong>B. Yang*</strong>, J. Qin, T. Breckon, L. Shao, N. Trigoni, A. Markham<br>
<a href="https://arxiv.org/abs/2204.09138">arXiv</a> /
<a href="https://www.youtube.com/watch?v=YahEnX1z-yw">Demo</a> /
<a href="https://github.com/vLAR-group/RangeUDF"><font color="red">Project page</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=vLAR-group&repo=RangeUDF&type=star&count=true&size=small" frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<br>(* indicates corresponding author)
<p align="justify" style="font-size:13px">We propose a new method to recover the geometry and semantics of continuous 3D scene surfaces from point clouds.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/21_ijcv_sensaturban.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="http://arxiv.org/abs/2009.03137">
<div class="papertitle">SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds</div></a>
Q. Hu, <strong>B. Yang*</strong>, S. Khalid, W. Xiao, N. Trigoni, A. Markham<br>
<em>International Journal of Computer Vision (IJCV)</em>, 2022 <font color="red"><strong>(IF=7.41)</strong></font><br>
<a href="https://arxiv.org/abs/2201.04494">arXiv</a> /
<a href="https://link.springer.com/article/10.1007/s11263-021-01554-9">Springer Access</a> /
<a href="https://www.youtube.com/watch?v=IG0tTdqB3L8">Demo</a> /
<a href="https://github.com/QingyongHu/SensatUrban"><font color="red">Project page</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=QingyongHu&repo=SensatUrban&type=star&count=true&size=small" frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<br>(* indicates corresponding author)
<p align="justify" style="font-size:13px">The journal version of our SensatUrban. More experiments and analysis are included.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/21_senj_pointloc.png" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/2003.02392">
<div class="papertitle">PointLoc: Deep Pose Regressor for LiDAR Point Cloud Localization</div></a>
W. Wang, B. Wang, P. Zhao, C. Chen, R. Clark, <strong>B. Yang</strong>, A. Markham, N. Trigoni <br>
<em>IEEE Sensors Journal</em>, 2022 <font color="red"><strong>(IF=3.30)</strong></font><br>
<a href="https://arxiv.org/abs/2003.02392">arXiv</a> /
<a href="https://ieeexplore.ieee.org/abstract/document/9617633">IEEE Xplore</a>
<p></p>
<p align="justify" style="font-size:13px">We present a learning-based LiDAR relocalization framework to efficiently estimate 6-DoF poses from LiDAR point clouds. </p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/21_iccv_grf.gif" > </div>
<div class="vlar-pap-cont-item-c-si-c ">
<p><a href="http://arxiv.org/abs/2010.04595">
<div class="papertitle">GRF: Learning a General Radiance Field for 3D Representation and Rendering</div></a>
A. Trevithick, <strong>B. Yang</strong> <br>
<em>IEEE International Conference on Computer Vision (ICCV)</em>, 2021
<br>
<a href="http://arxiv.org/abs/2010.04595">arXiv</a> /
<font color="red"> News:</font>
<a href="https://mp.weixin.qq.com/s/s2j9D-ovrU9WDt-aS9V-Iw"><font color="red">CVer</font></a> /
<a href="https://github.com/alextrevithick/GRF"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=alextrevithick&repo=GRF&type=star&count=true&size=small" frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<p></p>
<p align="justify" style="font-size:13px">We introduce a simple implicit neural function to represent complex 3D geometries purely from 2D images.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/21_tpami_randla.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="http://arxiv.org/abs/2010.04595">
<div class="papertitle">Learning Semantic Segmentation of Large-Scale Point Clouds with Random Sampling</div></a>
Q. Hu, <strong>B. Yang*</strong>, L. Xie, S. Rosa, Y. Guo, Z. Wang, N. Trigoni, A. Markham<br>
<em>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)</em>, 2021 <font color="red"><strong>(IF=16.39)</strong></font><br>
<a href="https://arxiv.org/abs/2107.02389">arXiv</a> /
<a href="https://ieeexplore.ieee.org/document/9440696">IEEE Xplore</a> /
<a href="https://github.com/QingyongHu/RandLA-Net"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=QingyongHu&repo=RandLA-Net&type=star&count=true&size=small"
frameborder="0" scrolling="0" width="100px" height="20px"></iframe>
<br>(* indicates corresponding author)
<p align="justify" style="font-size:13px">The journal version of our RandLA-Net. More experiments and analysis are included.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/21_cvpr_spinnet.png" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/2011.12149">
<div class="papertitle">SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration</div></a>
S. Ao^, Q. Hu^, <strong>B. Yang</strong>, A. Markham, Y. Guo<br>
<em>IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</em>, 2021
<br>
<a href="https://arxiv.org/abs/2011.12149">arXiv</a> /
<a href="https://github.com/QingyongHu/SpinNet"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=QingyongHu&repo=SpinNet&type=star&count=true&size=small" frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<br>(^ indicates equal contributions)
<p align="justify" style="font-size:13px">We introduce a simple and general neural network to register pieces of 3D point clouds.
</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/21_cvpr_sensaturban.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="http://arxiv.org/abs/2009.03137">
<div class="papertitle">Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset, Benchmarks and Challenges</div></a>
Q. Hu, <strong>B. Yang*</strong>, S. Khalid, W. Xiao, N. Trigoni, A. Markham<br>
<em>IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</em>, 2021
<br>
<!--<font color="red"><strong>..</strong></font><br>-->
<a href="http://arxiv.org/abs/2009.03137">arXiv</a> /
<a href="https://www.youtube.com/watch?v=IG0tTdqB3L8">Demo</a> /
<a href="https://github.com/QingyongHu/SensatUrban"><font color="red">Project page</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=QingyongHu&repo=SensatUrban&type=star&count=true&size=small" frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<br>(* indicates corresponding author)
<p align="justify" style="font-size:13px">We introduce an urban-scale photogrammetric point cloud dataset and extensively evaluate and analyze the state-of-the-art algorithms on the dataset.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/21_icra_radarloc.png" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="#">
<div class="papertitle">RadarLoc: Learning to Relocalize in FMCW Radar</div></a>
W. Wang, P.P.B. de Gusmao, <strong>B. Yang</strong>, A. Markham, N. Trigoni<br>
<em>IEEE International Conference on Robotics and Automation (ICRA) </em>, 2021
<br>
<a href="https://arxiv.org/abs/2103.11562">arXiv</a> /
<a href="https://ieeexplore.ieee.org/document/9560858">IEEE Xplore</a>
<p></p>
<p align="justify" style="font-size:13px">We introduce a simple end-to-end neural network with self-attention to estimate global poses from FMCW radar scans.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/20_cvpr_randlanet.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/1911.11236">
<div class="papertitle">RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds</div></a>
Q. Hu, <strong>B. Yang*</strong>, L. Xie, S. Rosa, Y. Guo, Z. Wang, N. Trigoni, A. Markham
<br>
<em>IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</em>, 2020
<br>
<a href="https://arxiv.org/abs/1911.11236">arXiv</a> /
<a href="http://www.semantic3d.net/view_results.php">Semantic3D Benchmark</a> /
<font color="red"> News:</font>
<a href="https://mp.weixin.qq.com/s/k_oROm1Zr6l0YNKGELx3Bw"><font color="red">(新智元,</font></a>
<a href="https://mp.weixin.qq.com/s/Ed9v6I6l2tLTHmMW7B3O3g"><font color="red">AI科技评论,</font></a>
<a href="https://mp.weixin.qq.com/s/TTv6pSPjmdsEF4kvVY-ZzQ"><font color="red">CVer)</font></a> /
<a href="https://www.youtube.com/watch?v=Ar3eY_lwzMk"><font color="red">Video</font></a> /
<a href="https://github.com/QingyongHu/RandLA-Net"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=QingyongHu&repo=RandLA-Net&type=star&count=true&size=small"
frameborder="0" scrolling="0" width="100px" height="20px"></iframe>
<br>(* indicates corresponding author)
<p align="justify" style="font-size:13px">We introduce an efficient and lightweight neural architecture to directly infer per-point semantics for large-scale point clouds. </p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/19_neurips_3d_bonet.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/1906.01140">
<div class="papertitle">Learning Object Bounding Boxes for 3D Instance Segmentation on Point Clouds</div></a>
<strong>B. Yang</strong>, J. Wang, R. Clark, Q. Hu, S. Wang, A. Markham, N. Trigoni
<br>
<em>Advances in Neural Information Processing Systems (NeurIPS)</em>, 2019 <font color="red"><strong>(Spotlight, 200/6743)</strong></font>
<br>
<!--<font color="red"><strong>..</strong></font><br>-->
<a href="https://arxiv.org/abs/1906.01140">arXiv</a> /
<a href="http://kaldir.vc.in.tum.de/scannet_benchmark/result_details?id=118">ScanNet Benchmark</a> /
<a href="https://www.reddit.com/r/MachineLearning/comments/bx8jhz/r_new_sota_for_3d_object_detection/">Reddit Discussion</a> /
<font color="red"> News:</font>
<a href="https://mp.weixin.qq.com/s/jHbWf_SSZE_J6NRJR-96sQ"><font color="red">(新智元,</font></a>
<a href="https://mp.weixin.qq.com/s/4GPkmTri4Vk7Xy0J8TiBNw"><font color="red">图像算法,</font></a>
<a href="https://mp.weixin.qq.com/s/C1FDPkAkmnmAZ_gvvtzBHw"><font color="red">AI科技评论,</font></a>
<a href="https://mp.weixin.qq.com/s/wViZITtsb4j3oFtOpJI9wQ"><font color="red">将门创投,</font></a>
<a href="https://mp.weixin.qq.com/s/S7mHrOxOwTIhDGPhu1SI4A"><font color="red">CVer,</font></a>
<a href="https://mp.weixin.qq.com/s/gybhVw3D4ykAHsVGzazWNw"><font color="red">泡泡机器人)</font></a> /
<a href="https://www.youtube.com/watch?v=Bk727Ec10Ao"><font color="red">Video</font></a> /
<a href="https://github.com/Yang7879/3D-BoNet"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=Yang7879&repo=3D-BoNet&type=star&count=true&size=small"
frameborder="0" scrolling="0" width="100px" height="20px"></iframe>
<p></p>
<p align="justify" style="font-size:13px">We propose a simple and efficient neural architecture for accurate 3D instance segmentation on point clouds.
It achieves the SOTA performance on ScanNet and S3DIS (June 2019).</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/19_iros_deeppco.jpg" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/1910.11088">
<div class="papertitle">DeepPCO: End-to-End Point Cloud Odometry through Deep Parallel Neural Network</div></a>
W. Wang, M.R.U. Saputra, P. Zhao, P. Gusmao, <strong>B. Yang</strong>, C. Chen, A. Markham, N. Trigoni<br>
<em>IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</em>, 2019 <br>
<a href="https://arxiv.org/abs/1910.11088">arXiv</a> /
<a href="https://ieeexplore.ieee.org/abstract/document/8967756">IEEE Xplore</a>
<p></p>
<p align="justify" style="font-size:13px">We propose a novel end-to-end deep parallel neural network to estimate the 6-DOF poses using consecutive 3D point clouds.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/19_ijcv_attsets.jpg" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://link.springer.com/article/10.1007/s11263-019-01217-w">
<div class="papertitle">Robust Attentional Aggregation of Deep Feature Sets for Multi-view 3D Reconstruction</div></a>
<strong>B. Yang</strong>, S. Wang, A. Markham, N. Trigoni<br>
<em>International Journal of Computer Vision (IJCV)</em>, 2019 <font color="red"><strong>(IF=6.07)</strong></font><br>
<a href="https://arxiv.org/abs/1808.00758">arXiv</a> /
<a href="https://link.springer.com/article/10.1007/s11263-019-01217-w">Springer Open Access</a> /
<a href="https://github.com/Yang7879/AttSets"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=Yang7879&repo=AttSets&type=star&count=true&size=small"
frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<p></p>
<p align="justify" style="font-size:13px"> We propose an attentive aggregation module together
with a training algorithm for multi-view 3D object reconstruction.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/19_cvprw_embeddings.jpg" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="http://openaccess.thecvf.com/content_CVPRW_2019/html/Explainable_AI/Lin_Learning_Semantically_Meaningful_Embeddings_Using_Linear_Constraints_CVPRW_2019_paper.html">
<div class="papertitle">Learning Semantically Meaningful Embeddings Using Linear Constraints</div></a>
S. Lin, <strong>B. Yang</strong>, R. Birke, R. Clark<br>
<em>IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR-W)</em>, 2019<br>
<a href="http://openaccess.thecvf.com/content_CVPRW_2019/html/Explainable_AI/Lin_Learning_Semantically_Meaningful_Embeddings_Using_Linear_Constraints_CVPRW_2019_paper.html">CVF Open Access</a>
<p></p>
<p align="justify" style="font-size:13px">We propose a simple embedding learning method that jointly optimises for an auto-encoding reconstruction task
and for estimating the corresponding attribute labels.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/18_tpami_3d_recgan++.png" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/1802.00411">
<div class="papertitle">Dense 3D Object Reconstruction from a Single Depth View</div></a>
<strong>B. Yang</strong>, S. Rosa, A. Markham, N. Trigoni, H. Wen<br>
<em>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)</em>, 2018 <font color="red"><strong>(IF=17.73)</strong></font><br>
<a href="https://arxiv.org/abs/1802.00411">arXiv</a> /
<a href="https://ieeexplore.ieee.org/abstract/document/8453803">IEEE Xplore</a> /
<a href="https://github.com/Yang7879/3D-RecGAN-extended"><font color="red">Code</font></a>
<iframe src="https://ghbtns.com/github-btn.html?user=Yang7879&repo=3D-RecGAN-extended&type=star&count=true&size=small"
frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<p></p>
<p align="justify" style="font-size:13px">We propose a novel neural architecture to reconstruct the complete 3D structure of a given object
from a single arbitrary depth view using generative adversarial networks.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/18_ijcai_3d_physnet.gif" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/1805.00328">
<div class="papertitle">3D-PhysNet: Learning the Intuitive Physics of Non-Rigid Object Deformations</div></a>
Z. Wang, S. Rosa, <strong>B. Yang</strong>, S. Wang, N. Trigoni, A. Markham<br>
<em>International Joint Conference on Artificial Intelligence (IJCAI)</em>, 2018 <br>
<a href="https://arxiv.org/abs/1805.00328">arXiv</a> /
<a href="https://github.com/vividda/3D-PhysNet"><font color="red">Code</font> </a>
<iframe src="https://ghbtns.com/github-btn.html?user=vividda&repo=3D-PhysNet&type=star&count=true&size=small"
frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<p></p>
<p align="justify" style="font-size:13px">We present a neural framework to predict how a 3D object will deform
under an applied force using intuitive physics modelling.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/18_cvprw_3r_d.jpg" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="http://openaccess.thecvf.com/content_cvpr_2018_workshops/w9/html/Yang_Learning_3D_Scene_CVPR_2018_paper.html">
<div class="papertitle">Learning 3D Scene Semantics and Structure from a Single Depth Image</div></a>
<strong>B. Yang*</strong>, Z. Lai*, X. Lu, S. Lin, H. Wen, A. Markham, N. Trigoni<br>
<em>IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR-W)</em>, 2018 <br>
<a href="http://openaccess.thecvf.com/content_cvpr_2018_workshops/w9/html/Yang_Learning_3D_Scene_CVPR_2018_paper.html">CVF Open Access</a> /
<a href="https://ieeexplore.ieee.org/abstract/document/8575531">IEEE Xplore</a>
<br>(* indicates equal contribution)
<p></p>
<p align="justify" style="font-size:13px">We propose an efficient and holistic pipeline to simultaneously learn
the semantics and structure of a scene from a single depth image.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si vlar-separatorLine">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/18_icra_defonet.jpg" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/1804.05928">
<div class="papertitle">Defo-Net: Learning Body Deformation Using Generative Adversarial Networks</div></a>
Z. Wang, S. Rosa, L. Xie, <strong>B. Yang</strong>, S. Wang, N. Trigoni, A. Markham<br>
<em>IEEE International Conference on Robotics and Automation (ICRA) </em>, 2018 <br>
<a href="https://arxiv.org/abs/1804.05928">arXiv</a> /
<a href="https://ieeexplore.ieee.org/abstract/document/8462832">IEEE Xplore</a> /
<a href="https://www.youtube.com/watch?v=noG5DDX3coQ"><font color="red">Video</font></a> /
<a href="https://github.com/vividda/Defo-Net"><font color="red">Code</font> </a>
<iframe src="https://ghbtns.com/github-btn.html?user=vividda&repo=Defo-Net&type=star&count=true&size=small"
frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<p></p>
<p align="justify" style="font-size:13px">We present a novel generative adversarial network to predict
body deformations under external forces from a single RGB-D image.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
<!--single paper entry begin-->
<div class="vlar-pap-cont-item-c-si">
<div class="vlar-pap-cont-item-c-si-img"><img src="vLAR/papers/17_iccvw_3d_recgan.jpg" ></div>
<div class="vlar-pap-cont-item-c-si-c">
<p><a href="https://arxiv.org/abs/1708.07969">
<div class="papertitle">3D Object Reconstruction from a Single Depth View with Adversarial Learning</div></a>
<strong>B. Yang</strong>, H. Wen, S. Wang, R. Clark, A. Markham, N. Trigoni<br>
<em>IEEE International Conference on Computer Vision Workshops (ICCV-W) </em>, 2017 <br>
<a href="https://arxiv.org/abs/1708.07969">arXiv</a> /
<a href="https://ieeexplore.ieee.org/abstract/document/8265295">IEEE Xplore</a> /
<a href="https://mp.weixin.qq.com/s?__biz=MzA3MzI4MjgzMw==&mid=2650730434&idx=4&sn=4a03526f020f30cc65b52976bb56f352&scene=0"><font color="red"> News: 机器之心</font></a> /
<a href="https://github.com/Yang7879/3D-RecGAN"><font color="red">Code</font> </a>
<iframe src="https://ghbtns.com/github-btn.html?user=Yang7879&repo=3D-RecGAN&type=star&count=true&size=small"
frameborder="0" scrolling="0" width="120px" height="20px"></iframe>
<p></p>
<p align="justify" style="font-size:13px">We propose a novel approach to reconstruct the complete 3D structure of a given
object from a single arbitrary depth view using generative adversarial networks.</p>
</div>
<div class="vlar-clear"></div>
</div>
<!--single paper entry end-->
</div>
</div>
<!--content group end-->
</div>
<script type="text/javascript" src="vLAR/js/viewer.js"></script>
<script type="text/javascript" src="vLAR/js/jquery-viewer.js"></script>
<script>
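// Initialize an image viewer on the paper thumbnails inside #vlar-contents
// (presumably Viewer.js via the jquery-viewer wrapper loaded above),
// hiding the navbar, toolbar and title while keeping the close button.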
$('#vlar-contents').viewer({
  navbar: false,
  button: true,
  toolbar: false,
  title: false
});
</script>
</body>
</html>