diff --git a/index.html b/index.html
index 5de986b..116b46e 100644
--- a/index.html
+++ b/index.html
@@ -207,80 +207,91 @@

Abstract


Model architecture


Qualitative Results / Tasks

- There's a lot of excellent work that was introduced around the same time as ours.

- Progressive Encoding for Neural Optimization introduces an idea similar to our windowed position encoding for coarse-to-fine optimization.

- D-NeRF and NR-NeRF both use deformation fields to model non-rigid scenes.

- Some works model videos with a NeRF by directly modulating the density, such as Video-NeRF, NSFF, and DyNeRF.

- There are probably many more by the time you are reading this. Check out Frank Dellaert's survey on recent NeRF papers, and Yen-Chen Lin's curated list of NeRF papers.

+ Machine-generated images are ...

Model architecture


Secure and Safe AI


Citation


- This work was carried out within the Multimedia use case of the European network ELSA (European Lighthouse on Secure and Safe AI). The objective of the Multimedia use case is to develop effective solutions for detecting and mitigating the spread of deepfake images in multimedia content.


- Machine-generated images are becoming more and more popular in the digital world, thanks to the spread of deep learning models that can generate visual data, such as Generative Adversarial Networks and Diffusion Models. While image generation tools can be employed for lawful goals (e.g., to assist content creators, generate simulated datasets, or enable multi-modal interactive applications), there is a growing concern that they might also be used for illegal and malicious purposes, such as the forgery of natural images or the generation of images in support of fake news, misogyny, or revenge porn. While results from a few years ago contained artefacts that made generated images easy to recognize, today's outputs are far harder to distinguish from real images from a purely perceptual point of view. In this context, assessing the authenticity of fake images becomes a fundamental goal for security and for guaranteeing a degree of trustworthiness of AI algorithms. There is therefore a growing need to develop automated methods that can assess the authenticity of images (and, in general, multimodal content) and that can keep pace with the constant evolution of generative models, which become more realistic over time.

+          @article{poppi2024removing,
+            title={{Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models}},
+            author={Poppi, Samuele and Poppi, Tobia and Cocchi, Federico and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
+            journal={arXiv preprint arXiv:2311.16254},
+            year={2024}
+          }
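
The removed paragraph above calls for automated methods that can assess whether an image is machine-generated. As a purely illustrative sketch, and not the method presented on this page, one common baseline trains a linear probe on frozen CLIP image embeddings; the checkpoint id, file paths, and labels below are placeholder assumptions for the example.

# Illustrative baseline only, not the method proposed on this page:
# a linear real-vs-generated probe on frozen CLIP image embeddings.
# The checkpoint id, file paths, and labels are placeholder assumptions.
import numpy as np
import torch
from PIL import Image
from sklearn.linear_model import LogisticRegression
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").to(device).eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def embed(paths):
    # Encode image files into L2-normalised CLIP image embeddings.
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt").to(device)
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1).cpu().numpy()

# Hypothetical labelled data: 0 = real photograph, 1 = machine-generated.
real_paths = ["data/real/0001.jpg", "data/real/0002.jpg"]
fake_paths = ["data/generated/0001.jpg", "data/generated/0002.jpg"]

X = np.concatenate([embed(real_paths), embed(fake_paths)])
y = np.array([0] * len(real_paths) + [1] * len(fake_paths))

probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict_proba(embed(["query.jpg"]))[:, 1])  # P(machine-generated)

In practice such probes have to be retrained as new generators appear, which is exactly the constant-evolution issue the paragraph highlights.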