Replies: 6 comments
-
I'll not argue, as your point is totally valid and something to be worked on in the future, but let me reason about what led to this:

**Python Packaging**

Python tooling for multiple local dependent packages is terrible; I've been fighting against this for at least 2 years. The simplest solution, and the one that works best so far, was to just bundle everything into a single wheel, and it's too useful having a main library for common behavior. Rust does this VERY well, defining a main …

Poetry needed some over-engineered code to manage the multiple venvs per project and was very error prone. Rye solved 90% of the immediate problems, namely a single venv for all projects, managing a specific Python version, and bundling …

**Versioning**

I really wanted to have multiple separate packages for isolation and …, but I don't have to worry about this if the PyPI wheel always contains the "current of everything", and submodules are always up-to-date. That's why it's the most effective option currently. Logistics issues of this form will become less common as the libs/projects settle on stable naming and behavior.

**Users and Python Installation**

For that, I chose to manage the submodule hell on the development side. Nevertheless, "from source" mode is automated with a single script under https://brokensrc.dev/get/source, and I'll be uploading new wheels soon. But then, if I recommend pip install, it requires Python, which users might not add to PATH, or they might not have pip (hi Linux Mint), or 3.12/3.13 requires compilers for some packages, etc. It's painful to write instructions for every error possibility.

**Closing Thoughts**

Also, DepthFlow is really a ShaderFlow spin-off, which focuses partly on audio reactiveness. It's a tech demo / full application of the bigger project. I hope this clarifies it. Python really is simultaneously the best and worst programming language 😓
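For what it's worth, the monorepo approach described here maps onto Rye's workspace feature (a single venv shared by multiple local packages); a minimal sketch, where the member glob and the `virtual` flag are assumptions about the repo's layout:

```toml
# pyproject.toml at the monorepo root (illustrative, not the actual config)
[tool.rye]
managed = true
virtual = true              # the root itself is not an installable package

[tool.rye.workspace]
members = ["Projects/*"]    # each member project shares the single root .venv
```

With a workspace like this, `rye sync` at the root installs every member in editable mode into one venv, which is the "single venv for all projects" behavior mentioned above.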
-
I did not mean to come across as critical. I was/am very excited to use this, but I've spent half of yesterday and half of today fighting to get it running. I thought pip install would be easiest, but I was running into issues with cmake and samplerate (none of which are your fault). Then I was fighting pip to fix that before giving up and building from source. Building from source was a lot nicer than I expected, but I had issues with symlink and xvfb, since it isn't installed on my machine and I don't have admin rights (neither is your fault, but most AI-based projects tend to be headless-Linux first).

As I said, most of it isn't your fault, but I think the traction this has is substantially bigger than ShaderFlow, and in my mind (although I understand that you are the one with the vision of what you want this to evolve into), it deserves to work as a standalone project.

Maybe it helps to clarify my use-case: I want to add simple animation to static images to make them more eye-catching, and this does a perfect job of being unique and eye-catching. Now I can programmatically build a video slideshow from a bunch of images and make it a lot more interesting than standard transitions. Hell, if there was an API for this so I didn't need to host it myself, I would pay for that (hence my recommendation for you to stick this on Replicate 😉)
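The slideshow use-case above could be scripted around any per-image animation tool plus FFmpeg's concat demuxer. A rough sketch, where the `depthflow` command and its `--image`/`--output` flags are assumptions (adjust to the real CLI), while the concat-list format is standard FFmpeg:

```python
import pathlib
import subprocess

def concat_list(clips):
    """Build the text body of an FFmpeg concat-demuxer list file."""
    return "".join(f"file '{clip}'\n" for clip in clips)

def build_slideshow(images, workdir="clips", output="slideshow.mp4"):
    """Animate each still image into a clip, then concatenate them."""
    out = pathlib.Path(workdir)
    out.mkdir(exist_ok=True)
    clips = []
    for index, image in enumerate(images):
        clip = out / f"clip_{index:03d}.mp4"
        # Hypothetical invocation; the real flags may differ.
        subprocess.run(["depthflow", "--image", str(image),
                        "--output", str(clip)], check=True)
        clips.append(clip.name)
    listfile = out / "concat.txt"
    listfile.write_text(concat_list(clips))
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", str(listfile), "-c", "copy", output], check=True)
```

The `-c copy` step only remuxes, so it is fast, but it assumes every clip shares the same codec and resolution; re-encode instead if the inputs differ.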
-
Hey! Great project, but same issue here. I just can't figure out how to run the project on Linux. I tried Docker and a Python env, with no success. There is too much "guessing", and I'm not sure I even understand what we are supposed to install to run this project. At what point did we install …? Any help would be greatly appreciated.
-
Hey @RemyMachado, I'm assuming you just downloaded the DepthFlow repo "standalone"? This repository can't be used alone in development mode, as there's a monorepo structure involved; there are manual instructions here for what needs to happen in the main repo (or use the automated scripts on the same page — I've successfully deployed them this week, even on Windows). If you've followed those, maybe you didn't source the Python venv (do so with …), or prefer installing it as a regular Python package (though I gotta update it to v0.4.0).
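For reference, creating and sourcing the venv usually looks like the following; the paths are assumptions, and the monorepo's own automated scripts may lay things out differently:

```shell
# From the cloned monorepo root (directory layout assumed)
python3 -m venv .venv             # create the venv if it doesn't exist yet
source .venv/bin/activate         # Windows: .venv\Scripts\activate
python -m pip install --upgrade pip
```

Forgetting the `source` step is a common cause of "module not found" errors, since commands then resolve against the system Python instead of the project venv.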
-
Thanks for your prompt response. Indeed, I was trying to run it with the DepthFlow repo "standalone". I installed it with the manual instructions and it went flawlessly; it even installed …. I successfully ran a first render! 🎉

I see so much potential in this project. I'm certain that a clearer installation guide and usage documentation would help your project get the recognition it deserves.

Also, I have an efficiency-related question: I installed the PyTorch CPU flavor, but I'm only running the rendering with about ~20% of my CPU capacity. Do you know by any chance a way to increase the usage for faster computation?
-
Nice :) 👍🏻 💯
Ya, I will work on documentation after the presets system is implemented, as I've mostly been moving fast and breaking things in the past month or two, adding important features (changing upscalers, depth estimators, post FX) :)
If you mean that the realtime window, after estimating the image, is using only ~20% of a core, then that's a good sign, as the framerate is limited to 60 fps (you can hit TAB to change it in real time!). But if it's 20% CPU while estimating the depth, the faster way is to use the GPU, with CUDA or ROCm on PyTorch.

When exporting to a video file, the CPU will go crazy encoding the video with FFmpeg and should be near 100% all the time, or show low usage if you're rendering with GPU acceleration, namely NVENC (…).
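On the ~20% CPU question above: CPU-only PyTorch inference can underuse cores depending on its thread defaults, so explicitly pinning the thread counts sometimes helps. A sketch, assuming the bottleneck is intra-op parallelism (set the environment variables before heavy libraries are imported):

```python
import os

# Use all logical cores; fall back to 1 if the count is unknown.
threads = os.cpu_count() or 1

# These affect the OpenMP/MKL backends and must be set before import.
os.environ["OMP_NUM_THREADS"] = str(threads)
os.environ["MKL_NUM_THREADS"] = str(threads)

try:
    import torch
    torch.set_num_threads(threads)  # intra-op parallelism for CPU ops
except ImportError:
    pass  # torch not installed here; the env vars still apply to subprocesses
```

Whether this helps depends on the model: depth estimators are often memory-bandwidth bound on CPU, in which case a GPU build (CUDA/ROCm) is the real fix, as noted above.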
-
My man. This seems like a fantastic project and I've been trying to get it running for the last 2 days.
It's been a constant roadblock of errors everywhere. Admittedly, not all of it comes down to your package (I'm on a headless server without admin rights), but some of the installation process is quite ridiculous.
Why do I need to install broken-source & every package you've created? Why are there sound libraries in a project which doesn't need sound? Why does installation take over 30mins?
I feel this could be as simple as "git clone depthflow" and then "depthflow --image X", with only the dependencies that are required for this project.
Or, stick your model on huggingface spaces/replicate. I know you may have a lot on your plate but the exposure you'd get by doing this might encourage some open source contributors to help out.
This is the first model of its kind on GitHub (that I could find) -- make getting started with it easy and you'll be the main repo for this stuff