Confused with the models - false positives #708
Replies: 28 comments
-
The new one is supposed to have better accuracy. Use the DAG 93. Can you provide some false positives here? I can elaborate on the pros/cons, but I want to see if it fixes your issue first. And sorry, I think that s3 link is outdated.
-
Hello, I've just updated to the latest version of NSFWJS and then loaded the DAG 93 model. Tested against a number of false positives, and the "porn" % decreased to much more acceptable values for sure. However, I tested a few images that are 100% porn and would give 99% on the older model, but with the DAG 93 they give 72.95% for one, 56.37% for another and 34.13% for the third, which doesn't make sense. There's a 4th image I tested that isn't porn, just a semi-naked lady, and that one gave 89.48%. With my examples, the new model just doesn't seem accurate at all. Happy to give you examples; please give me your email so I can send them privately to you.
-
Our latest model was provided by @TechnikEmpire, so I'm tagging 'em here. My email is Gant and the domain is infinite.red
-
**NSFW Post!!!**

My post is NSFW because I'm going to use some real talk here to get to the nitty-gritty details of what I did differently. Alright, so with the very latest model that I trained, there was a significant change.

**Changes to Training Data**

I used the same training data that @GantMan used in older models, but I decided to clean it up. What I did was load the Yahoo Open NSFW model, delete all images from the […] category, and then use the Yahoo model to move all images out of the […] category. This translated into several thousand files for each category. There were thousands of files in the […] category. There were probably a couple thousand images in the […] category. There were also a couple thousand images of extremely obscure pornography. I took a peek, and with some of them I was like "this is a completely benign image", then I'd see a 3x3 pixel area of a penis hanging out of someone's pants. This is going to throw off the neural network, at least for training purposes. Those images were blown away.

**Remarks About Changing Accuracy**

With regards to the scores for categories changing, this isn't necessarily a bad thing. The way this model is to be used is that the highest-scored class wins, and that's how neural networks like this are scored as well: Top-1 and Top-5 accuracy. Even if a porn image's score is split across categories like so:

[…]

the model is still accurate. It's not even necessarily indicative of a problem in the neural network.

**Conclusion**

What I believe happened here is twofold. First, I believe I've overfit the model by over-training it. I didn't notice this before, but your remarks made me take a second glance, and it seems to be a bit overfit. If you look at the posted TensorFlow output on this issue, you can see that train loss is decreasing while validation loss is increasing in the final training iteration. Second, one-byte quantization would exacerbate this issue. I think I'll run a new training session and get a new model published with one less iteration, because it seems it was that last iteration that pushed the model over the edge. Thanks for bringing this up!
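As an editorial aside, here is a minimal sketch of the kind of cleanup pass described above, assuming a Node.js environment with @tensorflow/tfjs-node and nsfwjs installed. The author used the Yahoo Open NSFW model for this step, so the nsfwjs classifier here is only a stand-in, and the folder layout (`data/neutral`, expected class `"Neutral"`) is hypothetical.

```ts
import * as fs from "fs";
import * as path from "path";
import * as tf from "@tensorflow/tfjs-node";
import * as nsfwjs from "nsfwjs";

// Flags images whose top-1 prediction disagrees with the folder label,
// so they can be reviewed (and deleted or moved) by hand.
async function flagSuspectImages(categoryDir: string, expectedClass: string): Promise<string[]> {
  const model = await nsfwjs.load(); // default hosted model
  const suspects: string[] = [];

  for (const file of fs.readdirSync(categoryDir)) {
    const buffer = fs.readFileSync(path.join(categoryDir, file));
    const image = tf.node.decodeImage(buffer, 3) as tf.Tensor3D;

    const predictions = await model.classify(image);
    image.dispose();

    // Highest-probability class for this image.
    const top = predictions.reduce((a, b) => (b.probability > a.probability ? b : a));
    if (top.className !== expectedClass) suspects.push(file);
  }

  return suspects;
}

// Hypothetical usage: review everything in data/neutral that doesn't score as "Neutral".
flagSuspectImages("data/neutral", "Neutral").then((files) => console.log(files));
```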
-
Oh yeah, I forgot: I also started the training session by training just the final softmax layer, then fine-tuning for 5 consecutive sessions, then 2 more iterations with a highly reduced LR. I'll just train it the way I did the previous model and report back.
-
@GantMan - just sent the email. Please let me know if you didn't receive it. Thanks @TechnikEmpire - please keep me updated!
-
I've started training, so sometime this evening I'll do a PR on the model repo.
-
I got the email, @ghnp5, thanks so much. @TechnikEmpire - thanks for checking for overfitting! I'll look forward to your update.
-
Hey - any news? :)
-
Yeah, I've retrained; I'll check your submissions against the new model before I publish. I also got sidetracked because I'm not happy with this 93% ceiling from fine-tuning, so I'm training MobileNet V3 Large from scratch.
-
Great work guys!!!
I am fairly new at this and I don't know what a good confidence score for each of the categories is.
-
Hello, any news about this? :) Thanks!
-
@ghnp5 Yeah, sorry for going MIA, I got tied up with a bunch of stuff. I have trained new models; I'm just coordinating a review of your submission against them.
-
@ghnp5 If it's urgent, you can simply revert to this model: https://github.com/GantMan/nsfw_model/releases/download/1.1.0/nsfw_mobilenet_v2_140_224.zip
-
I'm super excited to see your new model, @TechnikEmpire! Lots of people use NSFWJS and I'd love for your advanced model to be the de facto standard.
-
Those scores were just an example. You just take the highest valued class and accept that. Don't get into thresholding the values. Simply take the neural network's prediction at face value.
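A minimal sketch of that "top class wins" usage in the browser, assuming a current nsfwjs build and a page containing an `<img id="photo">` element (both assumptions on my part, not something from this thread):

```ts
import * as nsfwjs from "nsfwjs";

async function topClass(): Promise<void> {
  const model = await nsfwjs.load();
  const img = document.getElementById("photo") as HTMLImageElement;
  const predictions = await model.classify(img);

  // Take the single highest-probability class at face value; no thresholding.
  const top = predictions.reduce((a, b) => (b.probability > a.probability ? b : a));
  console.log(`Top-1: ${top.className} (${(top.probability * 100).toFixed(1)}%)`);
}

topClass();
```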
-
Here are the confidences from a new model for your submission, @ghnp5.

For unsafe submissions:

[…]

For safe submissions:

[…]

As you know, the file […]. Here's what happens to the scoring when I colorize that image:

[…]

For future reference, it's very difficult to gauge any network with a few images. This is why we split out 10% or more of our total data set for validation. According to that split ratio (against tens of thousands of images), TensorFlow tells us that the latest model is ~92% accurate. While this is really good accuracy, note that the 8% between where we're at and perfection (100%) is quite the chasm, so you're definitely going to see false positives and false negatives. I'll be uploading the newly trained model for @GantMan to my attached PR here; I just need to convert it to tfjs first.
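To put that in rough numbers (my own arithmetic, not a figure from the training run): at ~92% top-1 accuracy, about 8 of every 100 images land in the wrong class, so a validation split of, say, 10,000 images would be expected to produce on the order of 800 misclassifications. That is why a handful of hand-picked examples can look either much better or much worse than the headline accuracy.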
-
The new model is attached to that linked PR. For greater clarity, my experimentation (the scores given above) is based on an FP16 version run through OpenCV::DNN, not on the web or quantized web versions of the model.
-
Hi, I'm fairly new to neural nets and this library. I tried using your model and I keep getting the same error: "nsfwjs.min.js:1 Uncaught (in promise) Error: layer: Improper config format:".
EDIT: I've just realised I was using an outdated version of NSFWJS. Thanks for the model!
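For anyone else hitting that error, here is a minimal load sketch, assuming a current nsfwjs release and a graph-format model hosted under /model/ (the path is a placeholder). Older builds that predate the `{ type: "graph" }` option try to parse a graph model.json as a layers model, which is a plausible source of that "Improper config format" error.

```ts
import * as nsfwjs from "nsfwjs";

async function loadNsfwGraphModel() {
  // Load a graph-format export hosted under /model/; omit the options object
  // entirely when hosting a layers-format model.
  return nsfwjs.load("/model/", { type: "graph" });
}
```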
-
I'm also new to the entire subject of preventing NSFW images uploaded by users from entering my app without any sort of check for porn or violence. I've read through the blog post at https://shift.infinite.red/avoid-nightmares-nsfw-js-ab7b176978b1, this repo's README, as well as https://github.com/GantMan/nsfw_model. However, I'm still in the fog about which model file is right for me and where to get it from. What's the difference among the various model files, e.g. normal vs graph? Could someone maybe add two or three lines to the README, because I reckon that's something people new to the subject struggle to understand in general? 😃
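Not authoritative, but here is the normal-vs-graph distinction in plain TensorFlow.js terms, which is roughly what nsfwjs does under the hood (the /model/ URLs below are placeholders):

```ts
import * as tf from "@tensorflow/tfjs";

async function loadBoth() {
  // "Normal" layers model: keeps the Keras layer topology, so it can be
  // inspected or fine-tuned in the browser. This is what nsfwjs loads by default.
  const layersModel = await tf.loadLayersModel("/layers_model/model.json");

  // Graph model: a frozen, inference-only export, typically smaller and faster.
  // This is what passing { type: "graph" } to nsfwjs.load selects.
  const graphModel = await tf.loadGraphModel("/graph_model/model.json");

  return { layersModel, graphModel };
}
```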
-
@evdama I'm still using this one: https://github.com/infinitered/nsfwjs/tree/master/example/nsfw_demo/public/model
In my experience, this one works the best. The thing is that you cannot 100% rely on any of the models.
-
ha! I was just afk making a tasty ☕️ and already got two great answers 👍 @TechnikEmpire Please, like you'd speak to a very motivated but inexperienced puppy: where do I get those files and how do I use them? Is it the entire .zip I found, just one of the contained files, or the entire collection of shards 🤔? And once I have the right file/model, I'd just put it inside my Sapper's […]? @GantMan Can you add a line or two to the README so that puppies know what to do with regard to model files (and which one is the right one for a certain use case, e.g. images vs videos)? 🤓
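Not speaking for the maintainers, but the usual pattern is: unzip the model so model.json and all of its shard files sit together in one folder, put that folder under Sapper's static/ directory, and point nsfwjs at the resulting URL. A sketch under those assumptions (static/model/ served at /model/, and a made-up img id):

```ts
import * as nsfwjs from "nsfwjs";

async function classifyPreview() {
  // static/model/ must contain model.json plus every shard file from the zip.
  // Add { type: "graph" } as a second argument if you host a graph-format export.
  const model = await nsfwjs.load("/model/");

  const img = document.getElementById("upload-preview") as HTMLImageElement;
  const predictions = await model.classify(img);
  console.log(predictions);
}
```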
-
Someone answered here, but it seems the reply is gone. They were saying that the model I'm using is very outdated and that there's a new one with 98% accuracy. Where can I find it? The README of this repository points to the model I'm using, in the "Host your own model" section.
-
Yup, it was @TechnikEmpire, so I assume he'll come back with an even better answer...
-
I use this model in my Chrome extension. From what I remember it was the best, however it looks like it's trained on professional porn rather than more homemade content. Since my Chrome extension is made for Omegle, there are many people with bad lighting or blurry cameras, and sadly it doesn't detect them very well.
-
The site at https://nsfwjs.com/ uses the 93% accuracy model, which I'm pretty sure I tried in the past and got worse results than I get with the model I'm currently using.
-
I posted something and then thought better of it. I am the author of a closed-source program that uses such models, and there are dirtbags that follow me on GitHub and steal my OSS, so I'm doubly inclined not to help anyone. However, I was the guy making big overhauls to this repo, and for some reason @GantMan stopped merging my PRs. https://github.com/TechnikEmpire/nsfw_model has stats for every kind of model I trained. Dunno if my site links are gone or not. But basically you just need to manually clean up the categories and then use the new TF2 API that leans on TF Hub that I integrated, and you'll hit much better numbers.
-
Sorry not "this" repo, the model repo that drives this project. |
-
Hello,
I'm using these model files:
https://github.com/infinitered/nsfwjs/tree/master/example/nsfw_demo/public/model
(I think it's the same as https://s3.amazonaws.com/nsfwdetector/min_nsfwjs.zip)
and I'm having several false positives.
100% non-adult face pictures get flagged as 90%+ porn
I tested also with the non-min model: https://s3.amazonaws.com/nsfwdetector/nsfwjs.zip
But, from the few examples I tried, it didn't change much at all, and some were even worse.
I see that in issue #276 you added a new "DAG 93" model.
Is this one supposed to have better accuracy?
Which model is the best, and which should I use for the best accuracy, please?
What is the difference between using the model provided in README (first link above),
and using the DAG one with { type: "graph" }?
Are there pros and cons?
Thank you very much.