[Feature request] Is it possible to implement LeRes model for a more detailed depth map? #11
Comments
wow I didn't know about that repository. Gotta take a look. Thanks!
Much more accurate than midas, I hope they look more closely at this one.
Also you might want to take a look at this neighboring repo:
Place the pull request here as well, maybe it will be faster for them to look at it: https://github.com/TheLastBen/fast-stable-diffusion/
I second this. The depth extraction repo I was working with, which was programmed by @donlinglok and which now works locally on Windows, actually uses that exact system if I'm not mistaken. It's based on a second round of depth extraction and a combination process after the first pass is completed. The results are really more detailed, and they work much better if you want to use your depth map to create 3D models from your scene.

Have a look at this repo, and most particularly you may want to compare this part: donlinglok1/3d-photo-inpainting@87ffa05

It was a pleasure helping him fix the problems that were preventing this from running on Windows, and I was so proud when we actually got it right (even though @Donlinglok did all the work!). It would be amazing if it could be adapted for this extension as well.

EDIT: This was all based on a request to add this as an extension for Automatic1111 over here - it documents the debugging process of the Windows port of this famous 3dboost, among other things.

One more EDIT: LeRes is now fully functional with this version of the repo:
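In case it helps, here is a minimal sketch of that two-pass "boosting" idea, assuming a generic `estimate_depth` function standing in for whichever network is used (MiDaS or LeReS). It only illustrates the concept described above; it is not the actual BoostingMonocularDepth or 3d-photo-inpainting code.

```python
# Minimal sketch of the two-pass idea: a low-resolution pass for globally
# consistent structure, a high-resolution pass for fine detail, then a crude
# merge. `estimate_depth` is a hypothetical stand-in for the depth network.
import cv2
import numpy as np

def boosted_depth(image: np.ndarray, estimate_depth, low=384, high=1024) -> np.ndarray:
    h, w = image.shape[:2]
    # First pass: small input -> stable but blurry depth.
    coarse = estimate_depth(cv2.resize(image, (low, low)))
    # Second pass: large input -> detailed but globally less consistent depth.
    fine = estimate_depth(cv2.resize(image, (high, high)))
    coarse = cv2.resize(coarse, (w, h)).astype(np.float32)
    fine = cv2.resize(fine, (w, h)).astype(np.float32)
    # Combine: keep the coarse low frequencies, add the fine high frequencies.
    blur = lambda d: cv2.GaussianBlur(d, (0, 0), sigmaX=8)
    return blur(coarse) + (fine - blur(fine))
```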
@AugmentedRealityCat WOW, the LeRes result looks good!
Please try it, and let me know if you need my help with anything. If I understand correctly, that would mean the "Boosting Monocular Depth" we were using was not even the latest version, since LeRes was recently added to it in the repo @bugmelone has posted at the top of this thread. This is very promising! I can't wait to try this.
It WORKS (EDIT: It really works now)! I installed in a brand new folder, but I did reuse my old venv. Is there any way to make sure that the new LeRes algorithm has been applied? One thing is for sure: I got all the files I was getting with the previous version. The other thing it produced is some kind of log in a file called test_opt.txt in
Thanks again for making this happen. |
Impressive @AugmentedRealityCat! Could you make me a pull request with your modifications?
I wish I could, but I am not a programmer, so I have no idea how to do that. The programmer is @donlinglok.
@Extraltodeus I think it is easy to implement BoostingMonocularDepth; you may check this file, boostmonodepth_utils.py, lines 23-49. It just collects the image, passes it to BoostingMonocularDepth via the command line, and inverts the grayscale of the output. More reference here:
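For reference, a rough sketch of what that flow could look like, modeled on the description above (collect the image, call BoostingMonocularDepth from the command line, invert the grayscale output). The folder layout, output filename, and --depthNet value are assumptions on my part; check boostmonodepth_utils.py and the BoostingMonocularDepth README for the exact details.

```python
# Sketch of the "collect image -> run BoostingMonocularDepth via CLI ->
# invert grayscale output" flow described above. Paths, filenames and the
# --depthNet value are assumptions, not copied from boostmonodepth_utils.py.
import os
import subprocess
import cv2

BOOST_DIR = "BoostingMonocularDepth"  # assumed checkout location

def run_boosted_depth(image_path: str, out_path: str) -> None:
    inputs = os.path.join(BOOST_DIR, "inputs")
    outputs = os.path.join(BOOST_DIR, "outputs")
    os.makedirs(inputs, exist_ok=True)
    os.makedirs(outputs, exist_ok=True)

    # Collect the input image into the repo's expected input folder.
    cv2.imwrite(os.path.join(inputs, "frame.png"), cv2.imread(image_path))

    # Pass it to BoostingMonocularDepth via its command-line entry point.
    subprocess.run(
        ["python", "run.py", "--Final",
         "--data_dir", "inputs", "--output_dir", "outputs",
         "--depthNet", "0"],
        cwd=BOOST_DIR, check=True)

    # Invert the grayscale result so near/far matches the rest of the pipeline.
    depth = cv2.imread(os.path.join(outputs, "frame.png"), cv2.IMREAD_GRAYSCALE)
    cv2.imwrite(out_path, 255 - depth)
```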
Looks like @thygate added LeRes to his main extension repo as experimental support.
Just found this repo: https://github.com/compphoto/BoostingMonocularDepth
And I was very impressed with the level of detail of the LeRes model.
We can see that by comparing Midas vs LeRes results on a complex scene image:
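For anyone who wants to reproduce that comparison locally: if I recall the BoostingMonocularDepth README correctly, the same run.py accepts a --depthNet flag that selects the backbone, so a rough side-by-side could be scripted like this (the flag values, 0 for MiDaS and 2 for LeReS, and the folder layout are assumptions from memory, not verified against the repo):

```python
# Run BoostingMonocularDepth twice, once per backbone, writing results to
# separate folders for comparison. --depthNet values are assumed (0 = MiDaS,
# 2 = LeReS); double-check them against the repository's README.
import subprocess

for name, depth_net in [("midas", "0"), ("leres", "2")]:
    subprocess.run(
        ["python", "run.py", "--Final",
         "--data_dir", "inputs",
         "--output_dir", f"outputs_{name}",
         "--depthNet", depth_net],
        cwd="BoostingMonocularDepth", check=True)
```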