<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Shadi Albarqouni</title>
<link>https://albarqouni.github.io/</link>
<atom:link href="https://albarqouni.github.io/index.xml" rel="self" type="application/rss+xml" />
<description>Shadi Albarqouni</description>
<generator>Source Themes Academic (https://sourcethemes.com/academic/)</generator><language>en-us</language><copyright>©Shadi Albarqouni 2022</copyright><lastBuildDate>Tue, 30 Nov 2021 09:00:00 +0000</lastBuildDate>
<image>
<url>https://albarqouni.github.io/images/icon_hu4f38e089dd73214902aeea31898a1f39_7166_512x512_fill_lanczos_center_3.png</url>
<title>Shadi Albarqouni</title>
<link>https://albarqouni.github.io/</link>
</image>
<item>
<title>Experience</title>
<link>https://albarqouni.github.io/resume/experience/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/resume/experience/</guid>
<description></description>
</item>
<item>
<title>Organizing a workshop on the Next Generation of AI in Medicine</title>
<link>https://albarqouni.github.io/talk/hida2021/</link>
<pubDate>Tue, 30 Nov 2021 09:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/talk/hida2021/</guid>
<description></description>
</item>
<item>
<title>Affordable AI and Healthcare</title>
<link>https://albarqouni.github.io/project/affordable-ai/</link>
<pubDate>Wed, 15 Sep 2021 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/project/affordable-ai/</guid>
<description></description>
</item>
<item>
<title>BigPicture Project</title>
<link>https://albarqouni.github.io/project/bigpicture/</link>
<pubDate>Sat, 30 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/project/bigpicture/</guid>
<description><h2 id="a-new-consortium-of-the-eu-innovative-medicines-initiative-imi-will-establish-the-biggest-database-of-pathology-images-to-accelerate-the-development-of-artificial-intelligence-in-medicine">A new consortium of the EU Innovative Medicines Initiative (IMI) will establish the biggest database of pathology images to accelerate the development of artificial intelligence in medicine.</h2>
<p>To take AI development in pathology to the next level, a European consortium combining leading European research centres and hospitals as well as major pharmaceutical industries is going to develop a repository for the sharing of pathology data. The 6-year, €70 million project, called
<a href="https://www.bigpicture.eu/" target="_blank" rel="noopener">BIGPICTURE</a>, will herald a new era in pathology.</p>
<h3 id="background">Background</h3>
<p>Pathology is the cornerstone of the workup of many diseases, such as cancer and autoimmune diseases, and of the follow-up after transplantation, and it is also critical for the evaluation of the safety of drugs. It is based on the examination of tissue samples (slides) under the microscope. However, despite its pivotal role, it still relies heavily on qualitative interpretation by a qualified pathologist.</p>
<p>While the microscope symbolizes the profession, the digitalisation of slides in recent years ignited a revolution: not only can images now be shared and accessed from distant locations, they can also be processed by computers. This opens the door for artificial intelligence (AI) applications to assist the pathologist and help study diseases, find better treatments and contribute to the 3Rs (replace, reduce, and refine animal use in research). However, the development of robust AI applications requires large amounts of data, which in the case of pathology means a huge collection of digital slides and the medical data necessary for their interpretation. Sharing these has so far remained challenging due to the data storage capacity required to host a sufficiently large collection and to concerns regarding the confidential character of the medical information.</p>
<p>To allow the fast development of AI in pathology, the
<a href="https://www.bigpicture.eu/" target="_blank" rel="noopener">BIGPICTURE</a> project aims to create the first European, ethical and GDPR-compliant (General Data Protection Regulation), quality-controlled platform, in which both large-scale data and AI algorithms will coexist. The
<a href="https://www.bigpicture.eu/" target="_blank" rel="noopener">BIGPICTURE</a> platform will be developed in a sustainable and inclusive way by connecting communities of pathologists, researchers, AI developers, patients, and industry parties.</p>
<h3 id="tu-munich">TU Munich</h3>
<p>In this project,
<a href="http://campar.in.tum.de/Main/NassirNavab" target="_blank" rel="noopener">Prof. Nassir Navab</a>, and
<a href="../../#about">Dr. Shadi Albarqouni</a> from
<a href="www.tum.de">TU Munich</a>, together with
<a href="https://owkin.com/" target="_blank" rel="noopener">Owkin</a> will be leading and contributing to the development of Federated Deep Learning algorithms leveraging massive amounts of data, distributed in multiple sources, in a privacy-preserved fashion. This will enable deep learning models to be trained using sensitive data that cannot be made publicly available due to GPDR or sensitivity, e.g. rare diseases. Please visit the website of
<a href="https://www.bigpicture.eu/" target="_blank" rel="noopener">BIGPICTURE</a> for further details.</p>
<h3 id="intended-results">Intended results</h3>
<p>The project is divided into four main aspects that concern the large-scale collection of data. First, an infrastructure (hardware and software) must be created to store, share and process millions of images that can be gigabytes each. Second, legal and ethical constraints must be put in place to ensure adequate usage of data while fully respecting patients’ privacy and data confidentiality. Then, an initial set of 3 million digital slides from humans and laboratory animals will be collected and stored in the repository to provide data for the development of pathology AI tools. Finally, functionalities that aid the use of the database as well as the processing of images for diagnostic and research purposes will be developed.</p>
<h3 id="consortium">Consortium</h3>
<p>BIGPICTURE is a public-private partnership funded by IMI, with representation from academic institutions, small- and medium-sized enterprises (SMEs), public organisations and pharmaceutical companies, together with a large network of slide-contributing partners. The consortium partners involved in the project are:</p>
<p><strong>Academic institutions:</strong> Radboud University Medical Center (NL), Linköping University (SE), Leeds Teaching Hospitals NHS Trust (UK), University Medical Centre Utrecht (NL), Uppsala University (SE, ELIXIR node), Haute Ecole Spécialisé de Suisse Occidentale (CH), Technical University Eindhoven (NL), University of Warwick (UK), Technical University of Munich (DE), Medical University Graz (AT), Institut Pasteur (FR), University of Liege (BE), University of Semmelweis (HU), National Cancer Institute (NL), Region Östergötland (SE), Medical University Vienna (AT), University of Marburg (DE), Helsingin ja Uudenmaan sairaanhoitopiirin kuntayhtymä (FI).</p>
<p><strong>Pharmaceutical companies:</strong> Novartis Pharma AG (CH), Janssen Pharmaceutica NV (BE), Bayer AG (DE), Boehringer Ingelheim International GmbH (DE), Novo Nordisk A/S (DK), Pfizer (US), Genentech – Roche (US), Sanofi Aventis recherche et Développement (FR), Institut de Recherches Internationales Servier (FR), and UCB Biopharma SRL (BE).</p>
<p><strong>Other public &amp; private organisations:</strong> CSC – IT Center for Science Finland (FI, ELIXIR node), Biobanks and biomolecular resources research infrastructure (AT), Azienda Ospedaliera Per L’Emergenza Cannizzaro (IT), Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. (DE), Deutsches Institut für Normung E.V. (DE), European Institute for Innovation through Health Data (BE), European Society of Pathology (BE), Digital pathology association (US), GBG Forschungs Gmbh (DE), ttopstart (NL), Sectra AB (SE), Cytomine SCRLFS (BE), Stichting Lygature (NL), Owkin (FR), Deciphex (IE), MedicalPhit (NL), Timelex (BE).</p>
<p>BIGPICTURE starts on 1st February 2021 and will run for 6 years. However, the platform is meant to last, and the consortium will elaborate sustainability plans to maintain and continue to develop the platform beyond this term.</p>
<h4 id="acknowledgment-of-support-and-disclaimer">Acknowledgment of support and disclaimer</h4>
<p><em>This project has received funding from the Innovative Medicines Initiative 2 Joint Undertaking under grant agreement No 945358. This Joint Undertaking receives support from the European Union’s Horizon 2020 research and innovation program and EFPIA.</em>
<a href="http://www.imi.europe.eu" target="_blank" rel="noopener"><em>www.imi.europe.eu</em></a></p>
<p><em>This communication reflects the consortium’s view. Neither IMI nor the European Union or EFPIA are responsible for any use that may be made of the information contained therein.</em></p>
<p><img src="EU.png" alt="EU"> <img src="EFPIA.png" alt="EFPIA"> <img src="IMI.png" alt="IMI"></p>
<p><img src="BigPicture.png" alt="BigPicture"></p>
</description>
</item>
<item>
<title>Autoencoders for Unsupervised Anomaly Segmentation in Brain MR Images: A Comparative Study</title>
<link>https://albarqouni.github.io/publication/baur-2020-autoencoders/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/baur-2020-autoencoders/</guid>
<description></description>
</item>
<item>
<title>Autoencoders for Unsupervised Anomaly Segmentation in Brain MR Images: A Comparative Study</title>
<link>https://albarqouni.github.io/publication/baur-2021-autoencoders/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/baur-2021-autoencoders/</guid>
<description></description>
</item>
<item>
<title>Butterfly-Net: Spatial-Temporal Architecture For Medical Image Segmentation</title>
<link>https://albarqouni.github.io/publication/klymenko-2021-butterfly/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/klymenko-2021-butterfly/</guid>
<description></description>
</item>
<item>
<title>Eine computergestützte automatische Polypencharakterisierung von Hyperplasten, Adenomen und Serratierten Adenomen im Kolorektum-Ergebnisse der CASSANDRA Studie</title>
<link>https://albarqouni.github.io/publication/zvereva-2021-computergestutzte/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/zvereva-2021-computergestutzte/</guid>
<description></description>
</item>
<item>
<title>FedDis: Disentangled Federated Learning for Unsupervised Brain Pathology Segmentation</title>
<link>https://albarqouni.github.io/publication/bercea-2021-feddis/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/bercea-2021-feddis/</guid>
<description></description>
</item>
<item>
<title>Federated Disentangled Representation Learning for Unsupervised Brain Anomaly Detection</title>
<link>https://albarqouni.github.io/publication/bercea-2021-federated/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/bercea-2021-federated/</guid>
<description></description>
</item>
<item>
<title>FedPerl: Semi-Supervised Peer Learning for Skin Lesion Classification</title>
<link>https://albarqouni.github.io/publication/bdair-2021-fedperl/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/bdair-2021-fedperl/</guid>
<description></description>
</item>
<item>
<title>Fourier Transform of Percoll Gradients Boosts CNN Classification of Hereditary Hemolytic Anemias</title>
<link>https://albarqouni.github.io/publication/sadafi-2021-fourier/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/sadafi-2021-fourier/</guid>
<description></description>
</item>
<item>
<title>Microaneurysms segmentation and diabetic retinopathy detection by learning discriminative representations</title>
<link>https://albarqouni.github.io/publication/sarhan-2021-microaneurysms/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/sarhan-2021-microaneurysms/</guid>
<description></description>
</item>
<item>
<title>Modeling Healthy Anatomy with Artificial Intelligence for Unsupervised Anomaly Detection in Brain MRI</title>
<link>https://albarqouni.github.io/publication/baur-2021-modeling/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/baur-2021-modeling/</guid>
<description></description>
</item>
<item>
<title>Semi-Supervised Few-Shot Learning with Prototypical Random Walks</title>
<link>https://albarqouni.github.io/publication/ayyad-2021-semi/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/ayyad-2021-semi/</guid>
<description></description>
</item>
<item>
<title>Sickle Cell Disease Severity Prediction from Percoll Gradient Images using Graph Convolutional Networks</title>
<link>https://albarqouni.github.io/publication/sadafi-2021-sickle/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/sadafi-2021-sickle/</guid>
<description></description>
</item>
<item>
<title>The Federated Tumor Segmentation (FeTS) Challenge</title>
<link>https://albarqouni.github.io/publication/pati-2021-federated/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/pati-2021-federated/</guid>
<description></description>
</item>
<item>
<title>The OOD Blind Spot of Unsupervised Anomaly Detection</title>
<link>https://albarqouni.github.io/publication/heer-2021-ood/</link>
<pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/heer-2021-ood/</guid>
<description></description>
</item>
<item>
<title>Multi-task multi-domain learning for digital staining and classification of leukocytes</title>
<link>https://albarqouni.github.io/publication/tomczak-2020-multi/</link>
<pubDate>Tue, 22 Dec 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/tomczak-2020-multi/</guid>
<description></description>
</item>
<item>
<title>An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation</title>
<link>https://albarqouni.github.io/publication/soberanis-2020-uncertainty/</link>
<pubDate>Tue, 01 Dec 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/soberanis-2020-uncertainty/</guid>
<description></description>
</item>
<item>
<title>Ascertaining the Pose of an X-Ray Unit Relative to an Object on the Basis of a Digital Model of the Object</title>
<link>https://albarqouni.github.io/publication/albarqouni-2020-ascertaining/</link>
<pubDate>Tue, 01 Dec 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/albarqouni-2020-ascertaining/</guid>
<description></description>
</item>
<item>
<title>Determining a Pose of an Object in the Surroundings of the Object by Means of Multi-Task Learning</title>
<link>https://albarqouni.github.io/publication/zakharov-2020-determining/</link>
<pubDate>Sun, 01 Nov 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/zakharov-2020-determining/</guid>
<description></description>
</item>
<item>
<title>The Future of Digital Health with Federated Learning</title>
<link>https://albarqouni.github.io/publication/rieke-2020-future/</link>
<pubDate>Mon, 14 Sep 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/rieke-2020-future/</guid>
<description></description>
</item>
<item>
<title>6D Camera Relocalization in Ambiguous Scenes via Continuous Multimodal Inference</title>
<link>https://albarqouni.github.io/publication/bui-20206-d/</link>
<pubDate>Sat, 01 Aug 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/bui-20206-d/</guid>
<description></description>
</item>
<item>
<title>Fairness by Learning Orthogonal Disentangled Representations</title>
<link>https://albarqouni.github.io/publication/sarhan-2020-fairness/</link>
<pubDate>Sat, 01 Aug 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/sarhan-2020-fairness/</guid>
<description></description>
</item>
<item>
<title>Medium Blog: Journey through COVID-19 RSNA Papers</title>
<link>https://albarqouni.github.io/talk/covid19/</link>
<pubDate>Mon, 20 Apr 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/talk/covid19/</guid>
<description><p><em>Disclaimer: I am neither a radiologist nor a clinician. I am a computer scientist who has been working on medical image computing for a while. I tried to summarize the key findings reported in almost 15 papers published by the Radiological Society of North America (RSNA) in the last two months.</em></p>
<p>
<a href="http://www.euro.who.int/en/health-topics/health-emergencies/coronavirus-covid-19/novel-coronavirus-2019-ncov" target="_blank" rel="noopener"><strong>Intro about COVID-19</strong> </a></p>
<h3 id="ct-imaging-features"><strong>CT Imaging features</strong></h3>
<p>Key CT findings have been studied and investigated by Guan et al. [10] in a large cohort of 1099 patients with confirmed COVID-19, and by Chung et al. [1] in a group of 21 patients infected with COVID-19 in China. Their key result is that the majority of RT-PCR-confirmed patients (some of them asymptomatic) show typical CT findings such as the presence of bilateral ground-glass opacities (GGO) and/or consolidation, with a rounded morphology and a peripheral lung distribution (cf. Fig. 1). In another cohort of 104 patients from the cruise ship “Diamond Princess”, Inui et al. [4] have reported similar findings of lung opacities and airway abnormalities in both asymptomatic and symptomatic cases. In addition to the key characteristics of peripheral GGO, Caruso et al. [7] have also observed an association with sub-segmental vessel enlargement (&gt; 3 mm) in their cohort of 158 participants from Italy (cf. Fig. 2).</p>
<p><img src="https://cdn-images-1.medium.com/max/1440/1*j4iktyLv3cqyIh5LEcwf0A.png" alt="img">Fig.1: Image adopted from Chung et al. [1]</p>
<blockquote>
<p>Of 21 patients with the 2019 novel coronavirus, 15 (71%) had involvement of more than two lobes at chest CT, 12 (57%) had ground-glass opacities, seven (33%) had opacities with a rounded morphology, seven (33%) had a peripheral distribution of disease, six (29%) had consolidation with ground-glass opacities, and four (19%) had crazy-paving pattern. [1]</p>
</blockquote>
<p><img src="https://cdn-images-1.medium.com/max/1440/1*fnIOZsThw7AR_XrVaiIyXA.png" alt="img">Fig.2: Image adopted from Caruso et al. [7]</p>
<p>Surprisingly, 14% of the patients (3 out of 21) studied by Chung et al. [1] show negative CT findings in their initial chest CT scan. Follow-up scans, however, show rounded peripheral ground-glass opacity (cf. Fig. 3). Xie et al. [3] and Caruso et al. [7] have also reported similar percentages, 4% (7 out of 167) and 3% (2 out of 62) respectively, of patients in their cohorts who show no findings in their CT scans.</p>
<p><img src="https://cdn-images-1.medium.com/max/1440/1*fq3JovfT6QlwCdCg9iaDhA.png" alt="img">Fig.3: Image adopted from Chung et al. [1]</p>
<p>In contrast, Xie et al. [3] found that 3% of their cohort (5 out of 167), who had an initially negative RT-PCR, showed a positive chest CT with findings of viral pneumonia similar to those reported by Chung et al. [1] (cf. Fig. 4). A few days later, and after repeated swab tests, the RT-PCR became positive. This has also been confirmed in another cohort, reported by Fang et al. [8], where the percentage was around 29% (15 out of 51).</p>
<p><img src="https://cdn-images-1.medium.com/max/1440/1*TUkEdff20eeO6JBAmY7lfA.png" alt="img">Fig.4: Adopted from Xie et al. [3]</p>
<h3 id="chest-ct-vs-rt-pcr"><strong>Chest CT vs. RT-PCR</strong></h3>
<p>Given the aforementioned key characteristics of COVID-19, the low sensitivity of the RT-PCR test (42–71%) [6], and the long mean interval between the initial negative and the positive RT-PCR (5.1 +/- 1.5 days), clinicians and researchers have investigated <strong>whether diagnostic imaging features could be used as an alternative to RT-PCR in screening</strong>.</p>
<blockquote>
<p>In patients at high risk for 2019-nCoV infection, chest CT evidence of viral pneumonia may precede positive RT-PCR test results. [3]</p>
</blockquote>
<p>A few recent studies (Ai et al. [6], Caruso et al. [7], Fang et al. [8]) have investigated the correlation between chest CT findings and the RT-PCR test, reporting a high sensitivity of 97–98% for chest CT in diagnosing COVID-19. Detailed evaluation metrics against the RT-PCR are reported below. Interestingly, 98% of the patients (56 out of 57) reported by Ai et al. [6] who had initially positive CT findings showed a positive RT-PCR within 6 days (cf. Fig. 5). <strong>Such interesting results suggest chest CT could be considered for screening.</strong></p>
<blockquote>
<p>In a series of 51 patients with chest CT and RT-PCR assay performed within 3 days, the sensitivity of CT for COVID-19 infection was 98% compared to RT-PCR sensitivity of 71% (p&lt;.001) [2]</p>
</blockquote>
<table>
<thead>
<tr>
<th>Papers</th>
<th>Sample Size</th>
<th>Country</th>
<th>Sensitivity</th>
<th>Specificity</th>
<th>Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td>Caruso et al. [7]</td>
<td>58</td>
<td>Rome, Italy</td>
<td>97%</td>
<td>56%</td>
<td>72%</td>
</tr>
<tr>
<td>Ai et al. [6]</td>
<td>1014</td>
<td>Wuhan, China</td>
<td>97%</td>
<td>25%</td>
<td>68%</td>
</tr>
<tr>
<td>Fang et al. [8]</td>
<td>51</td>
<td>Shanghai, China</td>
<td>98%</td>
<td>N/A</td>
<td>N/A</td>
</tr>
</tbody>
</table>
<p><img src="https://cdn-images-1.medium.com/max/1440/1*c9DxSwH8gO2Wey-Ml6HfSQ.png" alt="img">Fig.5: Adopted from Ai et al. [6]</p>
<p>The discrepancy between CT findings and RT-PCR motivated clinicians and researchers to analyze serial CT findings over time (Wang and Dong et al. [9], Pan et al. [12]) and to study the relationship to the duration of infection (Bernheim et al. [11]). As reported in their analyses (Fig. 6, 7, 8), the appearance of GGO and consolidations varies over time, explaining the discrepancy in sensitivity. <strong>Both studies suggest, however, that pathology quantification might help in prognosis.</strong></p>
<blockquote>
<p>The extent of CT abnormalities progressed rapidly after the onset of symptoms, peaked around 6–11 days, and followed by persistence of high levels in lung abnormalities. The temporal changes of the diverse CT manifestations followed a specific pattern, which might indicate the progression and recovery of the illness. [7]</p>
</blockquote>
<p><img src="https://cdn-images-1.medium.com/max/1440/1*s5erFY4_5HdFSxC5sSI9BQ.png" alt="img">Fig. 6: Adopted from Wang and Dong et al. [9]</p>
<blockquote>
<p>Recognizing imaging patterns based on infection time course is paramount for not only understanding the pathophysiology and natural history of infection, but also for helping to predict patient progression and potential complication development. [11]</p>
</blockquote>
<p><img src="https://cdn-images-1.medium.com/max/1440/1*kp-Ce1GSU9YDuS14igqnAQ.png" alt="img">Fig. 7: Adopted from Bernheim et al. [11]</p>
<p><img src="https://cdn-images-1.medium.com/max/1440/1*GlNXjNXv7oAsm2MBJ1ivCA.png" alt="img">Fig.8: Adopted from Pan et al. [12]</p>
<h3 id="chest-x-ray-vs-rt-pcr"><strong>Chest X-ray vs. RT-PCR</strong></h3>
<p>Given the limited resources, and to minimize the risk of cross-infection [14] and contamination, clinicians and researchers have investigated <strong>whether a readily available imaging modality, namely chest X-ray, could be used as a first-line triage tool</strong> and help in detecting abnormalities associated with COVID-19, in particular in asymptomatic patients.</p>
<p>One of the interesting studies was reported by Wong et al. [13], who studied the appearance of COVID-19 in chest X-rays and its correlation with the key findings in CT scans. In addition, they investigated the correlation between the chest X-ray and the RT-PCR test.</p>
<p>In their cohort of 64 patients from Hong Kong, they observed key characteristics similar to those appearing in CT scans, such as bilateral, peripheral ground-glass opacities and/or consolidations (cf. Fig. 9).</p>
<p><img src="https://cdn-images-1.medium.com/max/1440/1*uJ-h7jvo2RNUD8A-IqO86w.png" alt="img">Fig. 9: Adapted from Wong et al. [13]</p>
<p>In contrast to the high sensitivity reported for CT scans, Wong et al. [13] reported a sensitivity of 69% for chest X-ray, compared to 91% for the initial RT-PCR. Chest X-ray abnormalities preceded the positive RT-PCR in only 9% of cases (6 out of 64 patients). Examples of the latter scenario are shown in Fig. 10 (A and B).</p>
<p><img src="https://cdn-images-1.medium.com/max/1440/1*FuB_zAuHcVmnYEmUJkoNdw.png" alt="img">Fig. 10: Adapted from Wong et al. [13]</p>
<p>The remarkably low sensitivity indicates a high number of false negatives, suggesting further investigation of how the abnormalities change over time. Fig. 11 shows the changes of the severity score in chest X-rays, where the peak score was reported 10–12 days after symptom onset.</p>
<p><img src="https://cdn-images-1.medium.com/max/1440/1*8Y0vR85Wue5QMiaEaSz6JQ.png" alt="img">Fig.11: Adopted from Wong et al. [13]</p>
<p>Surprisingly, 86% of the patients (24 out of 28) who had an initially positive chest X-ray showed positive findings on CT as well. In one of the remaining patients, the chest X-ray showed no findings while the CT showed peripheral GGO (cf. Fig. 12). <strong>These results suggest chest X-ray might be helpful in monitoring and prognosis, but it is not recommended for screening.</strong></p>
<p><img src="https://cdn-images-1.medium.com/max/1440/1*iPWLY7G9dtwRfhrct4NHQw.png" alt="img">Fig. 12: Adapted from Wong et al. [13]</p>
<blockquote>
<p>At this time, CT screening for the detection of COVID-19 is not recommended by most radiological societies. However, we anticipate that the use of CT in clinical management as well as incidental findings potentially attributable to COVID-19 will evolve. [15]</p>
</blockquote>
<h3 id="community-acquired-pneumonia-cap-vs-covid-19"><strong>Community Acquired Pneumonia (CAP) vs. COVID-19</strong></h3>
<p>So far, previous studies report high sensitivity in diagnosing COVID-19 from CT scans, however with remarkably low specificity, e.g. 25% and 56% in Ai et al. [6] and Caruso et al. [7], respectively. In other words, radiologists might misinterpret the CT scan of a patient with another type of pneumonia and diagnose the patient with COVID-19.</p>
<blockquote>
<p>These studies have shown that COVID-19 often produces a CT pattern resembling organizing pneumonia, notably peripheral ground-glass opacities (GGO) and nodular or mass-like GGO that are often bilateral and multilobar (
<a href="https://pubs.rsna.org/doi/10.1148/ryct.2020200152#r11" target="_blank" rel="noopener">11</a>). However, additional imaging findings have also been reported including linear, curvilinear or perilobular opacities, consolidation, and diffuse GGO, which can mimic several disease processes including other infections, inhalational exposures, and drug toxicities (
<a href="https://pubs.rsna.org/doi/10.1148/ryct.2020200152#r12" target="_blank" rel="noopener">12</a>
<a href="https://pubs.rsna.org/doi/10.1148/ryct.2020200152#r15" target="_blank" rel="noopener">–15</a>). [15]</p>
</blockquote>
<p>To assess the performance of radiologists in differentiating COVID-19 from other viral infections, Bai and Hsieh et al. [16] collected a cohort of 424 chest CT scans: 52% with COVID-19 confirmed by the RT-PCR test, and 48% with a positive Respiratory Pathogen Panel for viral pneumonia. The cohort was blindly reviewed by three radiologists from China, and a subset of 58 patients was reviewed by four radiologists from the US. Overall, their results demonstrate that <strong>radiologists can distinguish COVID-19 from other viral pneumonia with moderate to high sensitivity (67–93%) and high specificity (93–100%).</strong> Misinterpreted cases show either subtle or atypical findings in their CT scans (cf. Fig. 13). Key differences have also been reported by the radiologists.</p>
<blockquote>
<p>Compared to non-COVID-19 pneumonia, COVID-19 pneumonia was more likely to have a peripheral distribution (80% vs. 57%, p&lt;0.001), ground-glass opacity (91% vs. 68%, p&lt;0.001), fine reticular opacity (56% vs. 22%, p&lt;0.001), and vascular thickening (59% vs. 22%, p&lt;0.001), but less likely to have a central+peripheral distribution (14% vs. 35%, p&lt;0.001), pleural effusion (4.1% vs. 39%, p&lt;0.001) and lymphadenopathy (2.7% vs. 10.2%, p&lt;0.001).</p>
</blockquote>
<p><img src="https://cdn-images-1.medium.com/max/1440/1*WWHZpIe7G8OnWhuHjaGIgw.png" alt="img">Fig.13: Adapted from Bai and Hsieh et al. [16]</p>
<p>To reduce the reporting variability and uncertainty which might arise due to incidental findings with other viral infections, e.g. influenza A, Simpson and Kay et al. [15] put together a nice piece of work and suggestions on a <strong>standardized CT reporting language for COVID-19, which could be considered a good reference for structured reporting</strong>. Examples of the suggested reporting language along with a few chest CT images are shown in Fig. 14–16.</p>
<p><img src="https://cdn-images-1.medium.com/max/1440/1*0oVLldjnqtvz00RHcBHKHw.jpeg" alt="img">Fig.14: Adopted from Simpson and Kay et al. [15]</p>
<p><img src="https://cdn-images-1.medium.com/max/1440/1*uhQ09ognFQ3QjGRf8BSoLg.png" alt="img">Fig. 15: Adapted from Simpson and Kay et al. [15]</p>
<p><img src="https://cdn-images-1.medium.com/max/1440/1*M5oT2KtuUWOX49RcO45kqg.png" alt="img">Fig. 16: Adapted from Simpson and Kay et al. [15]</p>
<blockquote>
<p>Future direction includes development of an artificial intelligence classifier that can further augment radiologist performance in combination with clinical information. [16]</p>
</blockquote>
<h3 id="from-my-point-of-view-ai-has-the-potential-to">From my point of view, AI has the potential to:</h3>
<ul>
<li>identify the asymptomatic carriers of COVID-19</li>
<li>detect and quantify the abnormalities in serial chest CT/X-ray scans for prognosis purposes</li>
<li>distinguish CAP from COVID-19 using chest CT scans and additional clinical information (age, gender, previous disorders, etc.)</li>
</ul>
<h3 id="references">References:</h3>
<p>[1] Chung, M., Bernheim, A., Mei, X., Zhang, N., Huang, M., Zeng, X., Cui, J., Xu, W., Yang, Y., Fayad, Z.A. and Jacobi, A., 2020. CT imaging features of 2019 novel coronavirus (2019-nCoV). <em>Radiology</em>, <em>295</em>(1), pp.202–207. (
<a href="https://pubs.rsna.org/doi/pdf/10.1148/radiol.2020200230" target="_blank" rel="noopener">PDF</a>)</p>
<p>[2] Fang, Y., Zhang, H., Xie, J., Lin, M., Ying, L., Pang, P. and Ji, W., 2020. Sensitivity of chest CT for COVID-19: comparison to RT-PCR. <em>Radiology</em>, p.200432. (
<a href="https://pubs.rsna.org/doi/10.1148/radiol.2020200432?fbclid=IwAR2EyI4QVRos1SzZvFCl4oMIY0Da06XMbFW1TRmr4P7g3lLyO634O6tBgFs" target="_blank" rel="noopener">PDF</a>)</p>
<p>[3] Xie, X., Zhong, Z., Zhao, W., Zheng, C., Wang, F. and Liu, J., 2020. Chest CT for typical 2019-nCoV pneumonia: relationship to negative RT-PCR testing. <em>Radiology</em>, p.200343. (
<a href="https://pubs.rsna.org/doi/abs/10.1148/radiol.2020200343" target="_blank" rel="noopener">PDF</a>)</p>
<p>[4] Inui, S., Fujikawa, A., Jitsu, M., Kunishima, N., Watanabe, S., Suzuki, Y., Umeda, S. and Uwabe, Y., 2020. Chest CT findings in cases from the cruise ship “Diamond Princess” with coronavirus disease 2019 (COVID-19). <em>Radiology: Cardiothoracic Imaging</em>, <em>2</em>(2), p.e200110. (
<a href="https://pubs.rsna.org/doi/full/10.1148/ryct.2020200110" target="_blank" rel="noopener">PDF</a>)</p>
<p>[5] Simpson, S., Kay, F.U., Abbara, S., Bhalla, S., Chung, J.H., Chung, M., Henry, T.S., Kanne, J.P., Kligerman, S., Ko, J.P. and Litt, H., 2020. Radiological Society of North America Expert Consensus Statement on Reporting Chest CT Findings Related to COVID-19. Endorsed by the Society of Thoracic Radiology, the American College of Radiology, and RSNA. <em>Radiology: Cardiothoracic Imaging</em>, <em>2</em>(2), p.e200152. (
<a href="https://pubs.rsna.org/doi/10.1148/ryct.2020200152" target="_blank" rel="noopener">PDF</a>)</p>
<p>[6] Ai, T., Yang, Z., Hou, H., Zhan, C., Chen, C., Lv, W., Tao, Q., Sun, Z. and Xia, L., 2020. Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. <em>Radiology</em>, p.200642. (
<a href="https://pubs.rsna.org/doi/abs/10.1148/radiol.2020200642" target="_blank" rel="noopener">PDF</a>)</p>
<p>[7] Caruso, D., Zerunian, M., Polici, M., Pucciarelli, F., Polidori, T., Rucci, C., Guido, G., Bracci, B., de Dominicis, C. and Laghi, A., 2020. Chest CT features of COVID-19 in Rome, Italy. <em>Radiology</em>, p.201237. (
<a href="https://pubs.rsna.org/doi/full/10.1148/radiol.2020201237" target="_blank" rel="noopener">PDF</a>)</p>
<p>[8] Fang, Y., Zhang, H., Xie, J., Lin, M., Ying, L., Pang, P. and Ji, W., 2020. Sensitivity of chest CT for COVID-19: comparison to RT-PCR. <em>Radiology</em>, p.200432. (
<a href="https://pubs.rsna.org/doi/full/10.1148/radiol.2020200432" target="_blank" rel="noopener">PDF</a>)</p>
<p>[9] Wang, Y., Dong, C., Hu, Y., Li, C., Ren, Q., Zhang, X., Shi, H. and Zhou, M., 2020. Temporal changes of CT findings in 90 patients with COVID-19 pneumonia: a longitudinal study. <em>Radiology</em>, p.200843. (
<a href="https://pubs.rsna.org/doi/full/10.1148/radiol.2020200843" target="_blank" rel="noopener">PDF</a>)</p>
<p>[10] Guan, W.J., Ni, Z.Y., Hu, Y., Liang, W.H., Ou, C.Q., He, J.X., Liu, L., Shan, H., Lei, C.L., Hui, D.S. and Du, B., 2020. Clinical characteristics of coronavirus disease 2019 in China. <em>New England Journal of Medicine</em>. (
<a href="https://www.nejm.org/doi/full/10.1056/NEJMoa2002032" target="_blank" rel="noopener">PDF</a>)</p>
<p>[11] Bernheim, A., Mei, X., Huang, M., Yang, Y., Fayad, Z.A., Zhang, N., Diao, K., Lin, B., Zhu, X., Li, K. and Li, S., 2020. Chest CT findings in coronavirus disease-19 (COVID-19): relationship to duration of infection. <em>Radiology</em>, p.200463. (
<a href="https://pubs.rsna.org/doi/full/10.1148/radiol.2020200463" target="_blank" rel="noopener">PDF</a>)</p>
<p>[12] Pan, F., Ye, T., Sun, P., Gui, S., Liang, B., Li, L., Zheng, D., Wang, J., Hesketh, R.L., Yang, L. and Zheng, C., 2020. Time course of lung changes on chest CT during recovery from 2019 novel coronavirus (COVID-19) pneumonia. <em>Radiology</em>, p.200370. (
<a href="https://pubs.rsna.org/doi/pdf/10.1148/radiol.2020200370" target="_blank" rel="noopener">PDF</a>)</p>
<p>[13] Wong, H.Y.F., Lam, H.Y.S., Fong, A.H.T., Leung, S.T., Chin, T.W.Y., Lo, C.S.Y., Lui, M.M.S., Lee, J.C.Y., Chiu, K.W.H., Chung, T. and Lee, E.Y.P., 2020. Frequency and distribution of chest radiographic findings in COVID-19 positive patients. <em>Radiology</em>, p.201160. (
<a href="https://pubs.rsna.org/doi/10.1148/radiol.2020201160" target="_blank" rel="noopener">PDF</a>)</p>
<p>[14] American College of Radiology, 2020. ACR recommendations for the use of chest radiography and computed tomography (CT) for suspected COVID-19 infection.
<a href="https://www.acr.org/Advocacy-and-Economics/ACR-Position-Statements/Recommendations-for-Chest-Radiography-and-CT-for-Suspected-COVID19-Infection" target="_blank" rel="noopener"><em>ACR website</em></a><em>.</em></p>
<p>[15] Simpson, S., Kay, F.U., Abbara, S., Bhalla, S., Chung, J.H., Chung, M., Henry, T.S., Kanne, J.P., Kligerman, S., Ko, J.P. and Litt, H., 2020. Radiological Society of North America Expert Consensus Statement on Reporting Chest CT Findings Related to COVID-19. Endorsed by the Society of Thoracic Radiology, the American College of Radiology, and RSNA. <em>Radiology: Cardiothoracic Imaging</em>, <em>2</em>(2), p.e200152. (
<a href="https://pubs.rsna.org/doi/10.1148/ryct.2020200152" target="_blank" rel="noopener">PDF</a>)</p>
<p>[16] Bai, H.X., Hsieh, B., Xiong, Z., Halsey, K., Choi, J.W., Tran, T.M.L., Pan, I., Shi, L.B., Wang, D.C., Mei, J. and Jiang, X.L., 2020. Performance of radiologists in differentiating COVID-19 from viral pneumonia on chest CT. <em>Radiology</em>, p.200823. (
<a href="https://pubs.rsna.org/doi/full/10.1148/radiol.2020200823" target="_blank" rel="noopener">PDF</a>)</p>
</description>
</item>
<item>
<title>Deep Federated Learning in Healthcare</title>
<link>https://albarqouni.github.io/project/federated-learning/</link>
<pubDate>Mon, 06 Apr 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/project/federated-learning/</guid>
<description><p>Deep Learning (DL) has emerged as a leading technology for accomplishing many challenging tasks, showing outstanding performance in a broad range of computer vision and medical applications. However, this success comes at the cost of collecting and processing a massive amount of data, which is often not accessible in healthcare due to privacy issues. Federated Learning (FL) has recently been introduced to allow training DL models without sharing the data. Instead, DL models at local hubs, <em>i.e.</em> hospitals, share only the trained parameters with a centralized DL model, which, in return, is responsible for updating the local DL models as well.</p>
<p>Our goal in this project is to develop novel models and algorithms for a ground-breaking new generation of deep FL, which can distill the knowledge from local hubs, <em>i.e.</em> hospitals, and edges, <em>i.e.</em> wearable devices, to provide personalized healthcare services.</p>
<p>The principal <strong>challenges</strong> to overcome concern the nature of medical data, namely data heterogeneity: severe class imbalance, small amounts of annotated data, inter-/intra-scanner variability (domain shift), inter-/intra-observer variability (noisy annotations), system heterogeneity, and privacy issues (see the example below).</p>
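<p>To make the parameter-sharing idea above concrete, here is a minimal FedAvg-style sketch in Python. It is an illustration under simplifying assumptions (a toy logistic-regression model and randomly generated client data), not this project&rsquo;s actual code: each simulated hospital trains locally, only the weights leave the site, and the server aggregates them with a weighted average.</p>
<pre><code># Minimal FedAvg-style sketch (illustrative only, not this project's code).
# Each "hospital" trains locally and shares only parameters; the server averages them.
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=1):
    """One round of local logistic-regression training; only weights leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-(data @ w)))         # sigmoid predictions
        grad = data.T @ (preds - labels) / len(labels)    # gradient of the cross-entropy loss
        w = w - lr * grad
    return w

def federated_averaging(global_w, client_datasets, rounds=5):
    """Server loop: broadcast weights, collect local updates, average them (FedAvg)."""
    for _ in range(rounds):
        updates = [local_update(global_w, X, y) for X, y in client_datasets]
        sizes = np.array([len(y) for _, y in client_datasets], dtype=float)
        global_w = np.average(updates, axis=0, weights=sizes)   # weight by local data size
    return global_w

# Toy usage: three simulated hospitals with private, randomly generated data
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50).astype(float)) for _ in range(3)]
print("global weights:", federated_averaging(np.zeros(4), clients))
</code></pre>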
<h3 id="collaboration">Collaboration:</h3>
<ul>
<li></li>
</ul>
<h3 id="funding">Funding:</h3>
<ul>
<li>Soon</li>
</ul>
</description>
</item>
<item>
<title>Learn from Crowds</title>
<link>https://albarqouni.github.io/project/learn-from-crowds/</link>
<pubDate>Mon, 06 Apr 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/project/learn-from-crowds/</guid>
<description><p>Today&rsquo;s clinical procedures often generate large amounts of digital images requiring close inspection. Manual examination by physicians is time-consuming, and machine learning in computer vision and pattern recognition is playing an increasing role in medical applications. In contrast to pure machine learning methods, crowdsourcing can be used for processing big data sets, utilising the collective brainpower of huge crowds. Since individuals in the crowd are usually not medical experts, preparation of medical data as well as an appropriate visualization for the user becomes indispensable. The concept of gamification typically allows for embedding non-game elements in a serious game environment, providing an incentive for persistent engagement of the crowd. Medical image analysis empowered by the masses is still rare, and only a few applications successfully use the crowd for solving medical problems. The goal of this project is to bring gamification and crowdsourcing to the Medical Imaging community.</p>
<h3 id="collaboration">Collaboration:</h3>
<h3 id="funding">Funding:</h3>
</description>
</item>
<item>
<title>Learn from Prior Knowledge</title>
<link>https://albarqouni.github.io/project/learn-from-graph/</link>
<pubDate>Mon, 06 Apr 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/project/learn-from-graph/</guid>
<description><p>Together with our clinical and industry partners, we realized that there is a need to incorporate domain-specific knowledge and let the model <em>Learn from Prior Knowledge</em>. We first investigated modeling general priors, i.e., manifold assumptions, to learn powerful representations. Such representations achieved state-of-the-art results on benchmark datasets, such as IDRiD for Diabetic Retinopathy Early Detection (Sarhan <em>et al.</em> 2019), and 7 Scenes for Camera Relocalization (Bui <em>et al.</em> 2017). Then, we started looking into the graph Laplacian, where prior knowledge can be modeled as a soft constraint, i.e., a regularization, to learn feature representations that follow the manifold defined by graphs. We have shown in our ISBI (Kazi <em>et al.</em> 2019a), MICCAI (Kazi <em>et al.</em> 2019b), and IPMI (Kazi <em>et al.</em> 2019) papers that leveraging prior knowledge such as proximity of ages, gender, and a few lab results is of high importance in Alzheimer's classification.</p>
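<p>As a small, hedged illustration of the soft-constraint idea (not the code from the papers above), the snippet below computes a graph-Laplacian smoothness penalty: prior knowledge (e.g., similar age or same gender) defines the edges, and the penalty is small when connected patients have similar feature representations.</p>
<pre><code># Illustrative sketch of graph-Laplacian regularization (not the papers' implementation).
import numpy as np

def laplacian(adjacency):
    """Unnormalized graph Laplacian L = D - A."""
    degree = np.diag(adjacency.sum(axis=1))
    return degree - adjacency

def smoothness_penalty(features, adjacency):
    """tr(F^T L F) = 0.5 * sum_ij A_ij ||f_i - f_j||^2, small when connected nodes agree."""
    return np.trace(features.T @ laplacian(adjacency) @ features)

# Toy usage: 4 patients; an edge means prior knowledge says two patients are similar
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
F = np.random.default_rng(0).normal(size=(4, 8))   # toy learned feature vectors
print("graph smoothness penalty:", smoothness_penalty(F, A))
</code></pre>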
<h3 id="collaboration">Collaboration:</h3>
<ul>
<li></li>
</ul>
<h3 id="funding">Funding:</h3>
<ul>
<li>Siemens AG</li>
</ul>
</description>
</item>
<item>
<title>Learn to Adapt</title>
<link>https://albarqouni.github.io/project/learn-to-adapt/</link>
<pubDate>Mon, 06 Apr 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/project/learn-to-adapt/</guid>
<description><p>To build domain-agnostic models that are generalizable to a different domain, i.e., scanners, we have investigated three directions. First, <em>Style Transfer</em>, where the style/color of the source domain is transferred to match the target one. Such style transfer is performed in the high-dimensional image space using adversarial learning, as shown in our papers on Histology Imaging (Lahiani <em>et al.</em> 2019a, Lahiani <em>et al.</em> 2019b, Shaban <em>et al.</em> 2019). Second, <em>Domain Adaptation</em>, where the distance between the features of the source and target domains is minimized. Such a distance can be optimized in a supervised fashion, i.e., class-aware, using an angular cosine distance, as shown in our paper on MS Lesion Segmentation in MR Imaging (Baur <em>et al.</em> 2017), or in an unsupervised way, i.e., class-agnostic, using adversarial learning, as explained in our article on Left Atrium Segmentation in Ultrasound Imaging (Degel <em>et al.</em> 2018). Yet another exciting direction that has recently been investigated in our paper (Lahiani <em>et al.</em> 2019c) is to disentangle the features responsible for style and color from those responsible for the semantics.</p>
<p><img src="Baur_Degel_Shaban.jpeg" alt="Baur et al. 2017, Degel et al. 2018, and Shaban et al. 2019"></p>
<p><img src="lahiani2019c.jpeg" alt="Lahiani et al. 2019c"></p>
<h3 id="collaboration">Collaboration:</h3>
<ul>
<li>Eldad Klaiman, Roche Diagnostics GmbH</li>
<li>Georg Schummers and Matthias Friedrichs, TOMTEC Imaging Systems GmbH</li>
<li></li>
</ul>
</description>
</item>
<item>
<title>Learn to Learn</title>
<link>https://albarqouni.github.io/project/learn-to-learn/</link>
<pubDate>Mon, 06 Apr 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/project/learn-to-learn/</guid>
<description><p>To build models that are transferable to different tasks or different data distributions, i.e., non-i.i.d. data, we have investigated meta-learning approaches such as prototypical networks (PN) (Snell <em>et al.</em> 2017). A PN learns a class prototype from a very small amount of labeled data, e.g., 1–5 shots, and uses the learned prototypes to perform the classification task. In the context of medical imaging, we were first to introduce Few-Shot Learning into the MIC community. We have shown in our recent ICML Workshop paper (Ayyad <em>et al.</em> 2019) that our novel Semi-Supervised Few-Shot Learning achieves the state-of-the-art on benchmark datasets: Omniglot, miniImageNet, and TieredImageNet. Further, we have demonstrated in our recent paper (Parida <em>et al.</em> 2019) that such concepts can be utilized in medical image segmentation with an extremely low budget of annotated data, e.g., bounding boxes, and with better generalization capability, i.e., to new organs or anomalies, however at the cost of less accurate segmentation. Yet, our proposed models have great potential in clinical practice, where a novel application could come in and only very few annotations would be required to perform segmentation tasks. Further, such a learning paradigm has great potential in Federated Learning, where the data acquired at different hospitals are heterogeneous and non-i.i.d., i.e., various tasks, making the proposed models suitable for such a problem.</p>
<p><img src="Parida2019.jpeg" alt="Parida et al. 2019"></p>
<h3 id="collaboration">Collaboration:</h3>
<ul>
<li>Prof.
<a href="https://www.kaust.edu.sa/en/study/faculty/mohamed-elhoseiny" target="_blank" rel="noopener">Mohamed Elhoseiny</a>,
<a href="https://ai.facebook.com/" target="_blank" rel="noopener">Facebook AI Research</a></li>
</ul>
</description>
</item>
<item>
<title>Learn to Reason and Explain</title>
<link>https://albarqouni.github.io/project/learn-to-reason-and-explain/</link>
<pubDate>Mon, 06 Apr 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/project/learn-to-reason-and-explain/</guid>
<description><p>To build explainable AI models that are interpretable for our end-users, i.e., clinicians, we have investigated two research directions. First, we have utilized visualization techniques to explain and interpret &ldquo;black box&rdquo; models by propagating the gradient of the class of interest back to the image space, where the relevant semantics can be seen, so-called Gradient Class Activation Maps (GradCAM). Soon, we found out that such techniques do not always produce meaningful results. In other words, irrelevant semantics could be highly activated in GradCAM, yielding unreliable explanation tools. To overcome such a problem, we have introduced a robust optimization loss in our MICCAI paper (Khakzar <em>et al.</em> 2019), which generates adversarial examples enforcing the network to focus only on relevant features that are probably correlated with other examples belonging to the same class.</p>
<p><img src="Khakzar2019.jpeg" alt="Khakzar2019"></p>
<p>Second, we have investigated designing and building explainable models by i) uncertainty quantification and ii) disentangled feature representation. In the first category, we started by understanding the uncertainty estimates generated by Monte-Carlo Dropout, an approximation of Bayesian Neural Networks, and by other techniques, e.g. PointNet, in the camera relocalization problem (Bui <em>et al.</em> 2018), to shed light on the ambiguity present in the dataset. We took a step further and used such uncertainty estimates to refine the segmentation in an unsupervised fashion (Soberanis-Mukul <em>et al.</em> 2019, Bui <em>et al.</em> 2019).</p>
<p><img src="Sarhan2019.jpeg" alt=""></p>
<p>Recently, we have investigated modeling the label uncertainty, which is related to inter-/intra-observer variability, and produced a metric to quantify such uncertainty. We have shown in our paper (Tomczack <em>et al.</em> 2019) that such uncertainty can be disentangled from the model and data uncertainties, the so-called epistemic and aleatoric uncertainties, respectively. We believe such uncertainty is of high importance to referral systems. In the second category, we have studied variational methods and disentangled representations, where the assumption is that some generative factors, <em>e.g.</em>, color, shape, and pathology, will be captured in the lower-dimensional latent space, and one can easily traverse the manifold and generate many examples by sampling from the posterior distribution. We were among the first to introduce such concepts in medical imaging by investigating the influence of residual blocks and adversarial learning on disentangled representations (Sarhan <em>et al.</em> 2019). Our hypothesis is that better reconstruction fidelity would force the network to model high-resolution details, which might have a positive influence on the disentangled representation, in particular for some pathologies.</p>
<p><img src="Roger_Tomczack2019.jpeg" alt=""></p>
<h3 id="collaboration">Collaboration:</h3>
<ul>
<li>Dr.
<a href="https://scholar.google.de/citations?user=PmHOyT0AAAAJ&amp;hl=en" target="_blank" rel="noopener">Abouzar Eslami</a>, Carl Zeiss Meditec AG</li>
<li>PD. Dr.
<a href="https://scholar.google.de/citations?user=ELOVd8sAAAAJ&amp;hl=en" target="_blank" rel="noopener">Slobodan Ilic</a>, Siemens AG</li>
</ul>
<h3 id="funding">Funding:</h3>
<ul>
<li>Siemens AG</li>
</ul>
</description>
</item>
<item>
<title>Learn to Recognize</title>
<link>https://albarqouni.github.io/project/learn-to-recognize/</link>
<pubDate>Mon, 06 Apr 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/project/learn-to-recognize/</guid>
<description><p>We started investigating Convolutional Neural Networks for object recognition in a supervised fashion, for example, mitotic figure detection in histology imaging (Albarqouni <em>et al.</em> 2016), catheter electrode detection and depth estimation in interventional imaging (Baur <em>et al.</em> 2016), femur fracture detection in radiology (Kazi <em>et al.</em> 2017), in-depth layer X-ray synthesis (Albarqouni <em>et al.</em> 2017), and pose estimation of mobile X-rays (Bui <em>et al.</em> 2017). One of the first works, which has been highly recognized and featured in the media, is AggNet (Albarqouni <em>et al.</em> 2016) for mitotic figure detection in histology images. Although the network architecture was shallow, it was trained using millions of multi-scale RGB patches of histology images, achieving outstanding performance (ranked 3rd among 15 participants in the AMIDA13 challenge).</p>
<p>During our work, we found that such data-driven models demand a massive amount of annotated data, which might not be available in medical imaging and cannot be compensated for by simple data augmentation. Besides, we found that such models are very sensitive to domain shift, i.e., a different scanner, and that methods such as domain adaptation are required. Therefore, we have focused our research on developing fully automated, highly accurate solutions that save expert labor and effort and mitigate the challenges in medical imaging, for example: i) the availability of only a few annotated data, ii) low inter-/intra-observer agreement, iii) high class imbalance, iv) inter-/intra-scanner variability, and v) domain shift.</p>
<p><img src="Shadi_Web_Images.016.jpeg" alt=""></p>
<p>To mitigate the problem of limited annotated data, we developed models that <em>Learn from a Few Examples</em> by i) leveraging the massive amount of unlabeled data via semi-supervised techniques (Baur and Albarqouni <em>et al.</em> 2017), ii) utilizing weakly labeled data, which is much cheaper than densely labeled data (Kazi <em>et al.</em> 2017), iii) generating more examples through modeling the data distribution (Baur <em>et al.</em> 2018), and finally iv) investigating unsupervised approaches (Baur <em>et al.</em> 2018, Baur <em>et al.</em> 2019).</p>
<p><img src="Shadi_Web_Images.017.jpeg" alt=""></p>
<h3 id="collaboration">Collaboration:</h3>
<ul>
<li>Prof.
<a href="https://www.med.upenn.edu/apps/faculty/index.php/g275/p9161623" target="_blank" rel="noopener">Peter Nöel</a>, Department of Radiology,
<a href="https://www.med.upenn.edu/" target="_blank" rel="noopener">University of Pennsylvania</a>, USA</li>
<li>Prof.
<a href="https://www.med.physik.uni-muenchen.de/personen/guests/dr_guillaume_landry/index.html" target="_blank" rel="noopener">Guillaume Landry</a>, Department of Radiation Oncology, Medical Center of the University of Munich, Germany</li>
<li>Dr.
<a href="https://www.neurokopfzentrum.med.tum.de/neuroradiologie/forschung_projekt_computational_imaging.html" target="_blank" rel="noopener">Benedikt Wiestler</a>, TUM Neuroradiologie,
<a href="https://www.mri.tum.de/" target="_blank" rel="noopener">Klinikum rechts der Isar</a>, Germany</li>
<li>Prof. Dr. med.
<a href="https://www.kernspin-maximilianstrasse.de/prof-dr-med-sonja-kirchhoff/" target="_blank" rel="noopener">Sonja Kirchhoff</a>,
<a href="https://www.mri.tum.de/" target="_blank" rel="noopener">Klinikum rechts der Isar</a>, Germany</li>
<li>Prof.
<a href="[https://www.ls2n.fr/annuaire/Diana%20MATEUS/">Diana Mateus</a>,
<a href="https://www.ec-nantes.fr/" target="_blank" rel="noopener">Ecole Centrale Nantes</a>, France</li>
<li>Prof.
<a href="https://www5.cs.fau.de/en/our-team/maier-andreas/projects/index.html" target="_blank" rel="noopener">Andreas Maier</a>,
<a href="https://www.fau.de/" target="_blank" rel="noopener">Friedrich-Alexander-Universität Erlangen-Nürnberg</a>, Germany</li>
<li>Prof.
<a href="https://health.uottawa.ca/people/fallavollita-pascal" target="_blank" rel="noopener">Pascal Fallavollita</a>,
<a href="https://www.uottawa.ca/en" target="_blank" rel="noopener">Ottawa University</a>, Canada</li>
</ul>
<h3 id="funding">Funding:</h3>
<ul>
<li>Siemens Healthineers</li>
<li>Siemens AG</li>
</ul>
</description>
</item>
<item>
<title>Modelling Uncertainty in Deep Learning for Medical Applications</title>
<link>https://albarqouni.github.io/project/uncertainty/</link>
<pubDate>Mon, 06 Apr 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/project/uncertainty/</guid>
<description><p>Deep Learning has emerged as a leading technology for accomplishing many challenging tasks, showing outstanding performance in a broad range of applications in computer vision and medicine. Despite its success and merit in recent state-of-the-art methods, DL tools still lack robustness, which hinders their adoption in medical applications. Modeling uncertainty, through Bayesian inference and Monte-Carlo dropout, has been successfully introduced in computer vision to better understand the underlying deep learning models. In this proposal, we investigate modeling uncertainty for medical applications given the well-known challenges in medical image analysis, namely severe class imbalance, scarce labeled data, domain shift, and noisy annotations.</p>
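<p>As a rough illustration of Monte-Carlo dropout (a sketch assuming a generic PyTorch classifier, not this project's implementation), dropout is kept active at test time and the variance over repeated stochastic forward passes serves as an uncertainty estimate:</p>
<pre><code class="language-python">import torch

def mc_dropout_predict(model, x, n_samples=20):
    # Keep stochastic layers active at inference (for brevity this puts the
    # whole model in train mode; in practice only dropout would be enabled).
    model.train()
    with torch.no_grad():
        preds = torch.stack([torch.softmax(model(x), dim=1)
                             for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)  # predictive mean, uncertainty
</code></pre>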
<h3 id="collaboration">Collaboration:</h3>
<p>Prof.
<a href="http://people.ee.ethz.ch/~kender/" target="_blank" rel="noopener">Ender Konukoglu</a>,
<a href="https://ee.ethz.ch/" target="_blank" rel="noopener">Department of Information Technology and Electrical Engineerng</a>,
<a href="https://ethz.ch/en.html" target="_blank" rel="noopener">ETH Zurich</a>.</p>
<p>Prof.
<a href="http://wp.doc.ic.ac.uk/dr/" target="_blank" rel="noopener">Daniel Rueckert</a>,
<a href="http://www.imperial.ac.uk/computing" target="_blank" rel="noopener">Department of Computing</a>,
<a href="http://www.imperial.ac.uk/" target="_blank" rel="noopener">Imperial College London</a></p>
<p>Prof.
<a href="http://campar.in.tum.de/Main/NassirNavab" target="_blank" rel="noopener">Nassir Navab</a>,
<a href="http://campar.in.tum.de/" target="_blank" rel="noopener">Faculty of Informatics</a>,
<a href="www.tum.de">Technical University of Munich</a></p>
<h3 id="funding">Funding:</h3>
<p>This project is supported by the
<a href="https://www.daad.de/de/studieren-und-forschen-in-deutschland/stipendien-finden/prime/prime-fellows-201819/" target="_blank" rel="noopener">PRIME programme</a> of the
<a href="www.daad.de">German Academic Exchange Service (DAAD)</a> with funds from the
<a href="www.bmbf.de">German Federal Ministry of Education and Research (BMBF)</a>.</p>
</description>
</item>
<item>
<title>Telemedicine in Palestine</title>
<link>https://albarqouni.github.io/project/telemedicine/</link>
<pubDate>Mon, 06 Apr 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/project/telemedicine/</guid>
<description><iframe src="//www.slideshare.net/slideshow/embed_code/key/BXqwyYh8hPU9Ub" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen> </iframe> <div style="margin-bottom:5px"> <strong> <a href="//www.slideshare.net/sbaraqouni/telemedicine-in-palestine" title="Telemedicine in Palestine" target="_blank">Telemedicine in Palestine</a> </strong> from <strong><a href="https://www.slideshare.net/sbaraqouni" target="_blank">Shadi Nabil Albarqouni</a></strong> </div>
<h3 id="collaboration">Collaboration:</h3>
<h3 id="funding">Funding:</h3>
</description>
</item>
<item>
<title>Uncertainty Aware Methods for Camera Pose Estimation and Relocalization</title>
<link>https://albarqouni.github.io/project/bacatec/</link>
<pubDate>Mon, 06 Apr 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/project/bacatec/</guid>
<description><p>Camera pose estimation refers to determining the 6-DoF rotation and translation parameters of a camera. It is now a key technology enabling a multitude of applications such as augmented reality, autonomous driving, human-computer interaction, and robot guidance. For decades, vision scholars have worked on finding the unique solution to this problem. Yet, this trend is witnessing a fundamental change. A recent school of thought has begun to admit that, for our highly complex and ambiguous real environments, obtaining a single solution is not sufficient. This has led to a paradigm shift towards estimating a range of solutions instead, in the form of a full probability distribution, or at least explaining the uncertainty of camera pose estimates. Thanks to advances in Artificial Intelligence, this important problem can now be tackled via machine learning algorithms that discover rich and powerful representations for the data at hand. In this collaboration, TU Munich and Stanford University plan to devise and implement generative methods that can explain uncertainty and ambiguity in pose predictions. In particular, our aim is to bridge the gap between 6-DoF pose estimation from 2D images or 3D point sets and uncertainty quantification through multimodal variational deep methods.</p>
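<p>As a toy sketch of such a probabilistic output (an assumption for illustration, not the project's method), a network head can predict a diagonal Gaussian over the 6-DoF pose instead of a point estimate and draw pose hypotheses from it; note that a single Gaussian captures only uncertainty, not the multimodality targeted by the project, and the feature size below is a placeholder.</p>
<pre><code class="language-python">import torch
import torch.nn as nn

class ProbabilisticPoseHead(nn.Module):
    # Toy head predicting a diagonal Gaussian over the 6-DoF pose
    # (3 translation + 3 rotation parameters); feat_dim is a placeholder.
    def __init__(self, feat_dim=512):
        super().__init__()
        self.mean = nn.Linear(feat_dim, 6)
        self.log_var = nn.Linear(feat_dim, 6)

    def forward(self, features, n_samples=10):
        mu = self.mean(features)
        std = torch.exp(0.5 * self.log_var(features))
        # Sample pose hypotheses to express the uncertainty of the estimate.
        samples = mu.unsqueeze(0) + std.unsqueeze(0) * torch.randn(
            n_samples, *mu.shape)
        return mu, std, samples
</code></pre>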
<h3 id="collaboration">Collaboration:</h3>
<p>
<a href="http://tbirdal.me/" target="_blank" rel="noopener">Dr. Tolga Birdal</a>,
<a href="https://profiles.stanford.edu/leonidas-guibas" target="_blank" rel="noopener">Prof. Leonidas Guibas</a>, Stanford University</p>
<p>
<a href="http://campar.in.tum.de/Main/MaiBui" target="_blank" rel="noopener">Mai Bui</a>,
<a href="%22#about%22">Dr. Shadi Albarqouni</a>,
<a href="http://campar.in.tum.de/WebHome" target="_blank" rel="noopener">Prof. Nassir Navab</a>, Technical University of Munich</p>
<h3 id="funding">Funding:</h3>
<p>This project is funded by the Bavaria California Technology Center (
<a href="https://www.bacatec.de/en/gefoerderte_projekte.html" target="_blank" rel="noopener">BaCaTeC</a>)</p>
<h3 id="heading"></h3>
</description>
</item>
<item>
<title>Organizing Committee Member at MICCAI DART 2020</title>
<link>https://albarqouni.github.io/talk/dart2020/</link>
<pubDate>Wed, 01 Apr 2020 13:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/talk/dart2020/</guid>
<description></description>
</item>
<item>
<title>Organizing Committee Member at MICCAI DCL 2020</title>
<link>https://albarqouni.github.io/talk/dcl2020/</link>
<pubDate>Wed, 01 Apr 2020 13:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/talk/dcl2020/</guid>
<description></description>
</item>
<item>
<title>Uncertainty-based graph convolutional networks for organ segmentation refinement</title>
<link>https://albarqouni.github.io/publication/soberanis-2019-uncertainty/</link>
<pubDate>Wed, 01 Apr 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/soberanis-2019-uncertainty/</guid>
<description></description>
</item>
<item>
<title>Seamless Virtual Whole Slide Image Synthesis and Validation Using Perceptual Embedding Consistency</title>
<link>https://albarqouni.github.io/publication/lahiani-2020-seamless/</link>
<pubDate>Sat, 01 Feb 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/lahiani-2020-seamless/</guid>
<description></description>
</item>
<item>
<title>Invited Talk: Towards Deep Federated Learning in Healthcare</title>
<link>https://albarqouni.github.io/talk/ulm2019/</link>
<pubDate>Fri, 17 Jan 2020 09:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/talk/ulm2019/</guid>
<description></description>
</item>
<item>
<title>Modelling Labels Uncertainty in Medical Imaging</title>
<link>https://albarqouni.github.io/talk/eth2020/</link>
<pubDate>Wed, 15 Jan 2020 11:15:00 +0000</pubDate>
<guid>https://albarqouni.github.io/talk/eth2020/</guid>
<description></description>
</item>
<item>
<title>Keynote Speaker: AI in Healthcare</title>
<link>https://albarqouni.github.io/talk/ai4h/</link>
<pubDate>Wed, 08 Jan 2020 09:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/talk/ai4h/</guid>
<description></description>
</item>
<item>
<title>A learning without forgetting approach to incorporate artifact knowledge in polyp localization tasks</title>
<link>https://albarqouni.github.io/publication/soberanis-2020-learning/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/soberanis-2020-learning/</guid>
<description></description>
</item>
<item>
<title>An objective comparison of detection and segmentation algorithms for artefacts in clinical endoscopy</title>
<link>https://albarqouni.github.io/publication/ali-2020-objective/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/ali-2020-objective/</guid>
<description></description>
</item>
<item>
<title>An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation</title>
<link>https://albarqouni.github.io/publication/mukul-2020-uncertainty/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/mukul-2020-uncertainty/</guid>
<description></description>
</item>
<item>
<title>Attention Based Multiple Instance Learning for Classification of Blood Cell Disorders</title>
<link>https://albarqouni.github.io/publication/sadafi-2020-attention/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/sadafi-2020-attention/</guid>
<description></description>
</item>
<item>
<title>Bayesian Skip-Autoencoders for Unsupervised Hyperintense Anomaly Detection in High Resolution Brain Mri</title>
<link>https://albarqouni.github.io/publication/baur-2020-bayesian/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/baur-2020-bayesian/</guid>
<description></description>
</item>
<item>
<title>Benefit of dual energy CT for lesion localization and classification with convolutional neural networks</title>
<link>https://albarqouni.github.io/publication/shapira-2020-benefit/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/shapira-2020-benefit/</guid>
<description></description>
</item>
<item>
<title>Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning</title>
<link>https://albarqouni.github.io/publication/albarqouni-2020-domain/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/albarqouni-2020-domain/</guid>
<description></description>
</item>
<item>
<title>GANs for medical image analysis</title>
<link>https://albarqouni.github.io/publication/kazeminia-2020-gans/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/kazeminia-2020-gans/</guid>
<description></description>
</item>
<item>
<title>Image-to-Images Translation for Multi-Task Organ Segmentation and Bone Suppression in Chest X-Ray Radiography</title>
<link>https://albarqouni.github.io/publication/eslami-2020-image/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/eslami-2020-image/</guid>
<description></description>
</item>
<item>
<title>Inverse Distance Aggregation for Federated Learning with Non-IID Data</title>
<link>https://albarqouni.github.io/publication/yeganeh-2020-inverse/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/yeganeh-2020-inverse/</guid>
<description></description>
</item>
<item>
<title>Liver lesion localisation and classification with convolutional neural networks: a comparison between conventional and spectral computed tomography</title>
<link>https://albarqouni.github.io/publication/shapira-2020-liver/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/shapira-2020-liver/</guid>
<description></description>
</item>
<item>
<title>On the Fairness of Privacy-Preserving Representations in Medical Applications</title>
<link>https://albarqouni.github.io/publication/sarhan-2020-fairness-2/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/sarhan-2020-fairness-2/</guid>
<description></description>
</item>
<item>
<title>Polyp-artifact relationship analysis using graph inductive learned representations</title>
<link>https://albarqouni.github.io/publication/soberanis-2020-polyp/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/soberanis-2020-polyp/</guid>
<description></description>
</item>
<item>
<title>Precise proximal femur fracture classification for interactive training and surgical planning.</title>
<link>https://albarqouni.github.io/publication/jimenez-2020-precise/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/jimenez-2020-precise/</guid>
<description></description>
</item>
<item>
<title>Retinal Layer Segmentation Reformulated as OCT Language Processing</title>
<link>https://albarqouni.github.io/publication/tran-2020-retinal/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/tran-2020-retinal/</guid>
<description></description>
</item>
<item>
<title>ROAM: Random Layer Mixup for Semi-Supervised Learning in Medical Imaging</title>
<link>https://albarqouni.github.io/publication/bdair-2020-roam/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/bdair-2020-roam/</guid>
<description></description>
</item>
<item>
<title>Scale-Space Autoencoders for Unsupervised Anomaly Segmentation in Brain MRI</title>
<link>https://albarqouni.github.io/publication/baur-2020-scale/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/baur-2020-scale/</guid>
<description></description>
</item>
<item>
<title>SteGANomaly: Inhibiting CycleGAN Steganography for Unsupervised Anomaly Detection in Brain MRI</title>
<link>https://albarqouni.github.io/publication/baur-2020-steganomaly/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/baur-2020-steganomaly/</guid>
<description></description>
</item>
<item>
<title>Understanding the effects of artifacts on automated polyp detection and incorporating that knowledge via learning without forgetting</title>
<link>https://albarqouni.github.io/publication/kayser-2020-understanding/</link>
<pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/kayser-2020-understanding/</guid>
<description></description>
</item>
<item>
<title>Invited Talk: Towards Deep Federated Learning in Healthcare</title>
<link>https://albarqouni.github.io/talk/haicu2019/</link>
<pubDate>Mon, 16 Dec 2019 09:30:00 +0000</pubDate>
<guid>https://albarqouni.github.io/talk/haicu2019/</guid>
<description></description>
</item>
<item>
<title>Keynote Speaker: Towards Deep Federated Learning in Healthcare</title>
<link>https://albarqouni.github.io/talk/guc2019/</link>
<pubDate>Wed, 27 Nov 2019 11:15:00 +0000</pubDate>
<guid>https://albarqouni.github.io/talk/guc2019/</guid>
<description></description>
</item>
<item>
<title>Organizing Committee Member at MICCAI DART 2019</title>
<link>https://albarqouni.github.io/talk/dart2019/</link>
<pubDate>Sun, 13 Oct 2019 16:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/talk/dart2019/</guid>
<description></description>
</item>
<item>
<title>Organizing Committee Member at MICCAI COMPAY 2019</title>
<link>https://albarqouni.github.io/talk/compay2019/</link>
<pubDate>Sun, 13 Oct 2019 13:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/talk/compay2019/</guid>
<description></description>
</item>
<item>
<title>Keynote Speaker: Towards Deep Federated Learning in Healthcare</title>
<link>https://albarqouni.github.io/talk/icann2019/</link>
<pubDate>Thu, 19 Sep 2019 09:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/talk/icann2019/</guid>
<description></description>
</item>
<item>
<title>Method for determining a pose of an object in an environment of the object using multi task learning and control device</title>
<link>https://albarqouni.github.io/publication/bui-2019-method/</link>
<pubDate>Mon, 01 Jul 2019 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/publication/bui-2019-method/</guid>
<description></description>
</item>
<item>
<title>Keynote Speaker: Deep Learning in Medical Imaging</title>
<link>https://albarqouni.github.io/talk/zeiss2019/</link>
<pubDate>Sun, 16 Jun 2019 13:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/talk/zeiss2019/</guid>
<description></description>
</item>
<item>
<title>AI meets COVID-19</title>
<link>https://albarqouni.github.io/slides/_example/</link>
<pubDate>Tue, 05 Feb 2019 00:00:00 +0000</pubDate>
<guid>https://albarqouni.github.io/slides/_example/</guid>
<description><h1 id="brief-progress-of">Brief Progress of</h1>
<p>
<a href="https://sourcethemes.com/academic/" target="_blank" rel="noopener">Academic</a> |
<a href="https://sourcethemes.com/academic/docs/managing-content/#create-slides" target="_blank" rel="noopener">Documentation</a></p>
<hr>
<h2 id="dataset">Dataset</h2>
<ul>
<li>Efficiently write slides in Markdown</li>
<li>3-in-1: Create, Present, and Publish your slides</li>
<li>Supports speaker notes</li>
<li>Mobile friendly slides</li>
</ul>
<hr>
<h2 id="pathology-quantification">Pathology Quantification:</h2>
<ul>
<li>To be able to quantify the pathologies in thorax CT scans, one needs to segment the pathologies, and probably classify them into the common ones characterizing COVID-19, <em>e.g.</em>,
<ul>
<li><span class="fragment " >Ground Glass Opacity (GGO)</span></li>
<li><span class="fragment " >Consolidations</span></li>
<li><span class="fragment " >Scars</span></li>
<li><span class="fragment " >Pleural Effusion</span></li>
</ul>
</li>
</ul>
<hr>
<h2 id="code-highlighting">Code Highlighting</h2>
<p>Inline code: <code>variable</code></p>
<p>Code block:</p>
<pre><code class="language-python">porridge = &quot;blueberry&quot;
if porridge == &quot;blueberry&quot;:
    print(&quot;Eating...&quot;)
</code></pre>
<hr>
<h2 id="math">Math</h2>
<p>In-line math: $x + y = z$</p>
<p>Block math:</p>
<p>$$
f\left( x \right) = \;\frac{{2\left( {x + 4} \right)\left( {x - 4} \right)}}{{\left( {x + 4} \right)\left( {x + 1} \right)}}
$$</p>
<hr>
<h2 id="fragments">Fragments</h2>
<p>Make content appear incrementally</p>
<pre><code>{{% fragment %}} One {{% /fragment %}}
{{% fragment %}} **Two** {{% /fragment %}}
{{% fragment %}} Three {{% /fragment %}}
</code></pre>
<p>Press <code>Space</code> to play!</p>
<span class="fragment " >
One
</span>
<span class="fragment " >
**Two**
</span>
<span class="fragment " >
Three
</span>
<hr>
<p>A fragment can accept two optional parameters:</p>
<ul>
<li><code>class</code>: use a custom style (requires definition in custom CSS)</li>
<li><code>weight</code>: sets the order in which a fragment appears</li>
</ul>
<hr>
<h2 id="speaker-notes">Speaker Notes</h2>
<p>Add speaker notes to your presentation</p>
<pre><code class="language-markdown">{{% speaker_note %}}
- Only the speaker can read these notes
- Press `S` key to view
{{% /speaker_note %}}
</code></pre>
<p>Press the <code>S</code> key to view the speaker notes!</p>
<aside class="notes">
<ul>
<li>Only the speaker can read these notes</li>
<li>Press <code>S</code> key to view</li>
</ul>
</aside>
<hr>
<h2 id="themes">Themes</h2>
<ul>
<li>black: Black background, white text, blue links (default)</li>
<li>white: White background, black text, blue links</li>
<li>league: Gray background, white text, blue links</li>
<li>beige: Beige background, dark text, brown links</li>
<li>sky: Blue background, thin dark text, blue links</li>
</ul>
<hr>
<ul>
<li>night: Black background, thick white text, orange links</li>
<li>serif: Cappuccino background, gray text, brown links</li>
<li>simple: White background, black text, blue links</li>
<li>solarized: Cream-colored background, dark green text, blue links</li>
</ul>
<hr>
<section data-noprocess data-shortcode-slide
data-background-image="/img/boards.jpg"
>
<h2 id="custom-slide">Custom Slide</h2>
<p>Customize the slide style and background</p>
<pre><code class="language-markdown">{{&lt; slide background-image=&quot;/img/boards.jpg&quot; &gt;}}
{{&lt; slide background-color=&quot;#0000FF&quot; &gt;}}
{{&lt; slide class=&quot;my-style&quot; &gt;}}
</code></pre>