
select faces to not be blurred #51

Open
reeseherber opened this issue Sep 26, 2023 · 14 comments
Labels
enhancement New feature or request

Comments

@reeseherber

I was looking into this project and was wondering if it would be possible to select specific faces out of the file to leave unblurred.

@StealUrKill

I second this

@mdraw
Member

mdraw commented Oct 6, 2023

The face detection model used internally by deface doesn't support face recognition (which would be required for matching a specific face), only general face detection. This feature would be nice to have but I'm afraid implementing it properly would require quite some work and would make the code much more complicated.
If I do a rewrite some day, I'll keep this use case in mind.

@mdraw added the enhancement label Oct 6, 2023
@mthebaud

Hi, I have much the same need, but another idea for the implementation.
Would it be possible to define a "detection box", i.e. a given pixel rectangle, so that everything outside the rectangle is ignored?
My use case: two people facing the camera, and I want to blur the one on the right. I define a global rectangle over the right part of the frame, and the face of the person on the left stays unblurred.
Thanks

@mdraw
Member

mdraw commented Jan 23, 2024

Hi @mthebaud, this feature might be a bit too specific for the main project but feel free to implement this in a fork by filtering the detections dets in this loop by their coordinate range:

deface/deface/deface.py

Lines 83 to 89 in 1e6a87f

for i, det in enumerate(dets):
boxes, score = det[:4], det[4]
x1, y1, x2, y2 = boxes.astype(int)
x1, y1, x2, y2 = scale_bb(x1, y1, x2, y2, mask_scale)
# Clip bb coordinates to valid frame region
y1, y2 = max(0, y1), min(frame.shape[0] - 1, y2)
x1, x2 = max(0, x1), min(frame.shape[1] - 1, x2)

As you can see, detection boxes are already clipped to the valid frame coordinate range, so you could simply change the first arguments of the max() and min() calls respectively.
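
For illustration, a minimal sketch of such a filter, inserted just before the loop quoted above. The blur_region rectangle, its values, and the helper name are made up for the example; deface has no such option today:

# Hypothetical: only anonymize detections whose box center falls inside a
# user-chosen pixel rectangle; faces outside it are left untouched.
blur_region = (960, 0, 1920, 1080)  # (x1, y1, x2, y2), e.g. right half of a 1080p frame

def center_in_region(det, region):
    x1, y1, x2, y2 = det[:4].astype(int)
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
    rx1, ry1, rx2, ry2 = region
    return rx1 <= cx <= rx2 and ry1 <= cy <= ry2

dets = [det for det in dets if center_in_region(det, blur_region)]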

@mthebaud

That's great, thanks for the information!

@nemesis567

Honestly, that is hugely specific. Using face recognition seems to be the way to go here: run face recognition and then contour recognition, place the person in a buffer, then paste that on top of the blurred frame.
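
For what it's worth, the paste-back step of that idea is simple; a rough sketch, assuming some recognizer has already produced keep_boxes, the boxes of faces that should stay sharp (all names here are illustrative, not deface API):

import numpy as np

def restore_regions(original: np.ndarray, blurred: np.ndarray, keep_boxes) -> np.ndarray:
    # Copy the original pixels back over the blurred frame for every box
    # that should remain unblurred. Boxes are (x1, y1, x2, y2) in pixels.
    out = blurred.copy()
    for x1, y1, x2, y2 in keep_boxes:
        out[y1:y2, x1:x2] = original[y1:y2, x1:x2]
    return out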

@mthebaud

mthebaud commented Mar 20, 2024

Yes, I understand that it is specific, so I suggest this issue can be closed, as we have answers to our questions.

@andyg2

andyg2 commented Mar 26, 2024

I think it would be less specific if the faces were ordered from left to right and the indexes of that list could be passed as an option.
Obviously someone might need to cut a video into segments within which that order doesn't change, but for interviews etc., an option to select one or more faces to skip would be useful.
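
A rough sketch of that idea, applied to the dets array before the anonymization loop; skip_indexes is an invented option, not something deface currently offers:

# Hypothetical: order detections left to right by their left edge, then
# drop the ones whose positional index the user asked to leave unblurred.
skip_indexes = {0}  # e.g. keep the left-most face sharp

ordered = sorted(dets, key=lambda det: det[0])  # det[0] is x1
dets = [det for rank, det in enumerate(ordered) if rank not in skip_indexes]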

@StealUrKill

I have added face recognition to it. It works well but it is slow, about 3 fps (give or take, depending on hardware). I think I will try to refactor some settings to speed it up. It uses dlib and its models. It is uploaded to my RC branch.
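
For anyone curious how such dlib-based matching typically works: each detected face is reduced to a 128-d descriptor and compared against enrolled descriptors by euclidean distance. A minimal sketch follows; the model filenames and the 0.6 threshold are dlib's usual defaults, but treat the details as assumptions rather than the exact code in the RC branch:

import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
encoder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

def face_descriptor(img, box):
    # 128-d embedding for one detected face (box is a dlib.rectangle).
    shape = predictor(img, box)
    return np.array(encoder.compute_face_descriptor(img, shape))

def is_known(img, box, known_descriptors, tol=0.6):
    # Match if the face is within `tol` of any enrolled descriptor.
    d = face_descriptor(img, box)
    return any(np.linalg.norm(d - k) < tol for k in known_descriptors)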

@StealUrKill

Commenting on this again, as I was finally able to get the newest

CUDA toolkit 12.6
cuDNN 9.4.0
TensorRT 10.4 GA for x86_64

set up on my GPU, and to successfully build and install the dlib pip module from source with CUDA support.

pip list is as follows:

anonfaces 1.0.6rc0
dlib 19.24.99
onnx 1.16.2
onnxruntime-gpu 1.19.2 (installed per the onnxruntime documentation for CUDA 12: pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/)

Only here for testing, due to the correct lib imports in centerface now:
openvino 2024.1.0
openvino-telemetry 2024.1.0

Installed but not needed, from my testing:
tensorrt 10.4.0
tensorrt-cu12 10.4.0
tensorrt-cu12_bindings 10.4.0
tensorrt-cu12_libs 10.4.0

Now the face recognition runs at a nice 15 fps with an RTX 2070.

[Screenshot: 2024-09-11 000637]

@sfxworks

Hey, any update on the above? I was looking at this project and master...StealUrKill:anonfaces:master but couldn't quite find where to use this feature.

@StealUrKill

StealUrKill commented Nov 18, 2024

> Hey, any update on the above? I was looking at this project and master...StealUrKill:anonfaces:master but couldn't quite find where to use this feature.

It's under the RC branch, not master. I have a few things I want to finish/fix before switching it over to master, specifically instructions for the dlib Python wheel.

@sfxworks

Interesting to test. What is it bound by? During scenes with no faces it's fine, but scenes with faces drop from 40 it/s to 2 on a Radeon 6800 XT.

Either way this is awesome.

@StealUrKill

StealUrKill commented Nov 21, 2024

> Interesting to test. What is it bound by? During scenes with no faces it's fine, but scenes with faces drop from 40 it/s to 2 on a Radeon 6800 XT.
>
> Either way this is awesome.

Interesting results. That is far lower than mine; all my machines have Nvidia, and using Nvidia CUDA with dlib usually increases the speed about 7x. Check the readme under the prebuilts folder in the python311 one for all my testing results, with the specs of each of my machines. I haven't fully gotten around to testing the python312 one yet, but I feel like it will be the same.

I think the 2 it/s means dlib is CPU-bound on actual faces on your machine. I can explain more if needed. I also think there is a better way, or possibly a face recognition model that is faster than dlib; dlib just seemed to be the most straightforward option when searching.

Can you show and explain what you're using in terms of specs and software?

For example, are you using DirectML for onnxruntime? CPU? OpenVINO? Which Python version?

Also, which dlib are you using? Self-compiled via Python, Visual Studio, and CMake?

And what are the size and dimensions of the video you're using for testing?
