3D Photography using Context-aware Layered Depth Inpainting #50
Replies: 13 comments 54 replies
-
Sorry for the late reply, I saw it on mobile and meant to answer on desktop but got caught up in things. I wanted to look at Cython or Numba, maybe implement that stereo generation function in Cython for a speedup, because reading/writing single elements in Python is painfully slow. I also still want to add support for manual merging of maps using boost, and I want to try some things to get a three.js viewer into the webui. So plenty to do still. I also wanted to investigate parallax mapping, and then there are still those 3D reconstruction papers.
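On the single-element bottleneck: plain NumPy vectorization can already sidestep per-pixel Python loops before reaching for Cython or Numba. Below is a minimal, illustrative sketch of a naive forward-warp stereo shift; the function name and the disparity-from-depth mapping are assumptions for illustration, not the extension's actual algorithm:

```python
import numpy as np

def naive_stereo_shift(image, depth, max_disp=16):
    # Illustrative sketch: forward-warp each pixel left by a disparity
    # proportional to depth. image: (H, W, 3), depth: (H, W) in [0, 1].
    h, w = depth.shape
    disp = (depth * max_disp).astype(np.int64)   # per-pixel horizontal shift
    cols = np.arange(w)[None, :] - disp          # target column per pixel
    cols = np.clip(cols, 0, w - 1)               # clamp at the image border
    rows = np.repeat(np.arange(h)[:, None], w, axis=1)
    out = np.zeros_like(image)
    out[rows, cols] = image                      # vectorized scatter write
    return out
```

The scatter write leaves disocclusion holes as zeros, which is exactly where the inpainting step would come in.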
-
No worries :) Wow, yeah, lots in the pipeline already. Parallax mapping would be neat, but I guess that needs some specific shader / WebGL integration. The 3D reconstruction stuff is incredible, I didn't know that was possible!
-
I've had loads of fun playing with this over the last two months. Here are a few links that may be of interest to you:
- Version running on Windows, adapted and tweaked by @donlinglok
- Discussion about using this with the Automatic1111 WebUI
-
Hey all! Seems like I was tagged by accident here - just wanted to give a heads up so that the correct person can be tagged 😄
-
Can I share it?
-
3d-photo-inpainting is actually insane! Great job on the integration. Even with very chaotic pictures it does an amazing job. depthmap-0051_swing.mp4
-
Finally got some time to test this. Just did a first full run including the 3D model and the 4 videos - it took 3m52s for a 1024x512 image. (EDIT: it took that long because it downloaded the extra models for 3d-photo-inpainting - I just ran the same thing again, and this time it only took 2m43s! That is much faster than the standalone version!) I got a warning along the way, but everything seems to be working.
Now, time to check that PLY model and see if the SD-to-Blender-to-C4D pipeline I was using with the standalone version of 3d-photo-inpainting still works. Normally it should. EDIT: Everything works. The inpainting works - that was the main thing I was looking for! Combined with the latest model (the 512 edition) I got amazing results. I am rendering a 30s slow-motion flyover in C4D - just a plain camera move. I'll post it later. In the meantime, here is one of the 4 automated ones: depthmap-0266_circle.mp4
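As a quick sanity check before pulling an exported PLY into Blender or C4D, the header can be read to confirm vertex and face counts. A small stdlib-only sketch (not part of the extension's code):

```python
def read_ply_header(path):
    # Parse a PLY header (the header is text even in binary PLY files)
    # and return element counts, e.g. {"vertex": 12345, "face": 24000}.
    counts = {}
    with open(path, "rb") as f:
        if f.readline().strip() != b"ply":
            raise ValueError("not a PLY file")
        for raw in f:
            parts = raw.decode("ascii", "replace").split()
            if not parts:
                continue
            if parts[0] == "element":        # e.g. "element vertex 12345"
                counts[parts[1]] = int(parts[2])
            elif parts[0] == "end_header":
                break
    return counts
```

Zero vertices or faces in the result is a quick tell that the export step failed before the import into a DCC tool even starts.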
-
@graemeniedermayer I added your YouTube video to the README, I hope you don't mind? I also linked to this thread. Absolutely awesome video by @AugmentedRealityCat, I would love to add it too, with your permission.
-
More examples from production pipeline prototypes I've been working on:
- model generation from drawing: Truck_drawing_v02.mp4
- model split into 2 separate meshes + closing mesh holes + manual inpainting + animation test: Hamster_Toy_v17.mp4
-
Hi! Is there a way to export the mesh and texture into a 3D application?
-
Yes! OBJ or FBX would be preferred =) (Sent from my iPhone.)
On Jan 29, 2023, at 2:23 AM, Bob Thiry wrote:
Again very impressive @AugmentedRealityCat , I've been away for a while, will be catching up asap.
Is Wavefront OBJ output still desirable for the inpainted mesh?
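For reference, Wavefront OBJ is a plain-text format simple enough that a minimal writer fits in a few lines. This is an illustrative sketch (vertex positions, 1-based face indices, optional UVs), not the extension's actual exporter:

```python
def write_obj(path, vertices, faces, uvs=None):
    # Minimal Wavefront OBJ writer. vertices: iterable of (x, y, z),
    # faces: iterable of tuples of 1-based vertex indices.
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for u, v in (uvs or []):          # texture coordinates, if any
            f.write(f"vt {u} {v}\n")
        for face in faces:
            f.write("f " + " ".join(str(i) for i in face) + "\n")
```

A real exporter would also emit normals and an .mtl material referencing the texture image, but the vertex/face core is this small.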
-
Hello! Could you please add an option for loading custom Z-depth images? Sometimes I edit the generated Z-depth, and it would be great to load an edited PNG. It would also be nice if you added the ability to export to a PNG sequence or some other lossless format; even JPEG at the lowest compression would be great. Thank you!
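As a stopgap until a lossless export option lands, a depth map can be written as a 16-bit grayscale PNG with nothing but the Python standard library. This is a hedged sketch (uncompressed filter-0 scanlines, not the extension's code):

```python
import struct, zlib

def write_png_gray16(path, rows):
    # Write a lossless 16-bit grayscale PNG from rows of ints in 0..65535.
    h, w = len(rows), len(rows[0])
    def chunk(tag, data):
        body = tag + data
        return struct.pack(">I", len(data)) + body + struct.pack(">I", zlib.crc32(body))
    # IHDR: width, height, bit depth 16, color type 0 (grayscale)
    ihdr = struct.pack(">IIBBBBB", w, h, 16, 0, 0, 0, 0)
    raw = b"".join(
        b"\x00" + b"".join(struct.pack(">H", v) for v in row)  # filter byte 0 per row
        for row in rows
    )
    with open(path, "wb") as f:
        f.write(b"\x89PNG\r\n\x1a\n")            # PNG signature
        f.write(chunk(b"IHDR", ihdr))
        f.write(chunk(b"IDAT", zlib.compress(raw)))
        f.write(chunk(b"IEND", b""))
```

16 bits per sample matters for depth: an 8-bit export quantizes to 256 levels and produces visible stair-stepping on smooth gradients.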
-
Here are my results: Mountains_AI_01_2.mp4
A still picture was generated in Midjourney, then I changed it a little using img2img in SD. I then made several upscaled pictures using SD Upscale and combined them in Photoshop with Topaz Gigapixel output to fix some bad areas. After that I fixed the overstretched edges in the last frame by adding some details with img2img, and then reprojected the fixed frame onto the whole video using EbSynth. That produced clean, good-looking footage on all frames. I combined all the outputs in After Effects, and to make it look better I added some digital compositing, like moving clouds and the shadows they cast. The final output is in 4K. Easy!
-
Hello, just wondering if there is any interest in integrating this?
I managed to get it working, and it might be a cool way to let people experience their creations in simulated 3D without any additional gear.
So far it only seems to work properly with MiDaS, and it makes a call out to boost. I've provided an example below.
If there's interest, I can create a fork of their repo with the changes needed to get it working (it had some OS-specific commands, and the requirements file was out of date).
Integrating it is likely just a matter of pointing to the right files, assuming the dependencies play nice.
statue_circle.mp4
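On the OS-specific-commands point: shelling out to things like `rm -rf` is what usually breaks a repo on Windows. A portable stdlib replacement might look like this (illustrative only, not the repo's actual code):

```python
import shutil
from pathlib import Path

def reset_dir(path):
    # Portable replacement for shell "rm -rf <dir> && mkdir -p <dir>",
    # which fails on Windows where 'rm' does not exist.
    p = Path(path)
    if p.exists():
        shutil.rmtree(p)     # recursive delete, cross-platform
    p.mkdir(parents=True)    # recreate, including missing parents
```

Swapping `os.system(...)` calls for `shutil`/`pathlib` like this is typically most of the work of making such a repo cross-platform.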