0.12.0 Release #4055
Conversation
✅ Deploy Preview for frigate-docs ready!
What about keeping YYYY-HH folders but using UTC instead to avoid DST issues?
That could work too.
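The DST-safe naming idea above can be sketched in Python: formatting folder names from a UTC-converted timestamp keeps them monotonic across DST transitions. The function name and the `YYYY-MM-DD/HH` layout here are illustrative, not Frigate's actual scheme:

```python
from datetime import datetime, timezone

def utc_folder(ts: float) -> str:
    # Convert the epoch timestamp to UTC before formatting, so folder
    # names never repeat or skip an hour when local DST shifts.
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d/%H")
```

With local time, the hour folder around a DST change would either collide or be skipped; with UTC it is always sequential.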
I like Frigate's feature where the camera's stream fills the computer screen when you click on the image of the camera's live view. This feature would really be great if it would switch over to the camera's stream identified as "rtmp" in the config file when it does this. For example, I use the low res stream for "detect" but the high res stream for "rtmp". I would like to see the high res stream when I click on the live image to fill the screen. Would this be possible given that you're planning to incorporate go2rtc?
Yes. The current live view requires decoding the video stream, which would require lots of CPU. go2rtc would allow a direct passthrough of the video to the frontend so the higher resolution can be used.
Given that go2rtc allows for direct passthrough of the video to the frontend, could the "camera" tab in the Frigate UI show low res live feeds instead of snapshots as they currently do?
That's technically already possible without go2rtc. There are existing feature requests for that.
Would this mean that all the files would be in one folder?
Personally I would much prefer this solution; unix timestamps are annoying to work with as a human.
Not necessarily.
This will probably be just as easy, but why do you need to interact with the segment data? I think of them like internal binary blob storage for the database, not intended to be used directly.
That's always the dream, but in the end we still rely on our laptop fans to discover errant processes 😛. In my case I concatenated segments to make a timelapse, which ended up being exceptionally easy to accomplish given how nicely organized everything in Frigate is.
With go2rtc being integrated directly, does this mean we can stop using the go2rtc add-on within Home Assistant? And if Frigate is running on a host other than HA, how would the WebRTC streams be exposed via HA when external to the network? Thanks for continually making Frigate better!!!
It depends what all you're using go2rtc for. In this initial implementation, not all go2rtc features will necessarily be available. There will also be some caveats with things like WebRTC, where you may need to run Frigate with host networking.
Basically the bare minimum, just to get fast streams viewable via Home Assistant without lagginess on startup. Streaming the RTMP feeds directly through HA from a separate Frigate host is slow. I planned to use go2rtc to provide a WebRTC stream that was much faster via the Frigate Lovelace card/integration.
Too bad it doesn't work well with UI generators like Dwain's Dashboard. On this last rebuild of HA I went with it just for ease of setup. Boy, I wish I hadn't and had just hard-coded the YAML. Granted, DD is nice and looks good, but it's very prone to bugs, breaks with updates, can't work with some outside plugins like the WebRTC card, and lacks any real follow-through development that would allow it to work with WebRTC. I tried adding it and it broke the setup to the point where I could not remove it except by manually editing the file.
Can't wait to see TensorRT. I would love to have Frigate on my 4 GB Jetson Nano with the Coral TPU attached, to be able to get the best out of the object detection while also having a GPU to encode and decode streams.
I'm confused, that setup wouldn't use TensorRT unless you mean using that and a Coral. What you're describing should be possible today.
Yeah, you should already be able to accomplish what you stated. TensorRT is for using the GPU for detection. Jetsons have an NVDEC chip in them, so you should be able to follow the NVIDIA hwaccel docs to get where you want.

If you have performance issues, apply the settings -threads 1 -surfaces 10 in this line: -c:v h264_cuvid -threads 1 -surfaces 10. This limits the decoding hardware to the minimum memory required for a successful decode. (According to NVIDIA docs, 8 surfaces is the minimum needed for a good decode, so you can probably get away with less if needed; play with it.)

I don't know if the one you have has DLA cores, but if there are multiple GPUs displayed when you run nvidia-smi, you need to add -gpus "corresponding GPU number". So your hwaccel should look like -c:v h264_cuvid -gpus "1" -threads 1 -surfaces 10. The -gpus setting is not needed if there is only a single GPU, or if the one you want to use is GPU 0.

If it doesn't work, let me know, as there is another way to engage NVIDIA hardware decoding with ffmpeg. It just consumes more memory on the GPU and isn't ideal unless all work stays on the GPU, which with Frigate it currently doesn't.

The other setting NVIDIA explicitly recommends when decoding with ffmpeg is -vsync 0 before the hwaccel args, to prevent NVDEC from accidentally duplicating frames when it shouldn't. I have not really seen much of a difference either way with that setting, but it is stated that it should always be used when decoding if possible.
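Applied to a Frigate config, the flags above would go into the camera's `hwaccel_args`. A sketch under the assumption of a single GPU (the camera name is a placeholder; verify the exact arg form against the hardware acceleration docs for your version):

```yaml
cameras:
  front:
    ffmpeg:
      hwaccel_args:
        # -vsync 0 first, per the NVIDIA recommendation above
        - -vsync
        - "0"
        - -c:v
        - h264_cuvid
        - -threads
        - "1"
        - -surfaces
        - "10"
        # add: -gpus "1" only if nvidia-smi lists multiple GPUs
```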
@user897943 all of these are potential future improvements, but as the release goals at the top show, the focus for 0.12 is to have Frigate be more stable. In this case, that means not crashing and continuing to record even when storage is almost full.
As was implemented in #3942, if Frigate detects that there is not enough space for 1 hour of recordings, then Frigate will delete recordings from oldest to newest until there is space for 2 hours of recordings, and continue this cycle. If a user fills their storage with unrelated files and Frigate has no more recordings to delete, then it will not crash on being unable to move recordings to the recordings drive.
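The hysteresis described above (trigger below one hour free, delete oldest-first until two hours are free, give up gracefully when nothing is left) can be sketched roughly as follows. The function name, the hours-as-bytes bookkeeping, and the data shape are illustrative, not Frigate's actual code:

```python
def enforce_retention(free_bytes, hourly_bytes, recordings):
    """Free up space for recordings, oldest first.

    recordings: list of (timestamp, size_bytes) tuples, oldest first.
    Returns the timestamps of deleted recordings.
    """
    deleted = []
    if free_bytes >= hourly_bytes:       # at least 1 hour free: nothing to do
        return deleted
    target = 2 * hourly_bytes            # hysteresis: free 2 hours' worth
    while free_bytes < target and recordings:
        ts, size = recordings.pop(0)     # delete oldest first
        free_bytes += size
        deleted.append(ts)
    # If recordings run out (e.g. the drive is filled with unrelated
    # files), we simply stop deleting rather than crashing.
    return deleted
```

The two-hour target prevents the cleanup from re-triggering on every new segment once the drive hovers near the one-hour threshold.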
Hi everyone, I see that it's planned for the next release to support GPU inferencing with TensorRT. If so, it will probably require different models per detector type (I presume no one will want a different model for different instances of the same detector type). So the config will probably need a model specified per entry under `detectors:`. Is that something being considered?
Yes. A mixed set of detectors is already supported. |
For a mixed set of detectors, I think the model configuration will be detector-framework-specific. Do we need to tweak the model config to account for this?
I think that will be necessary, yea. Perhaps the model config should now be nested under the detector.
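One possible shape for the detector-nested model config floated above. This is purely hypothetical; the final 0.12 schema, keys, and model paths may differ:

```yaml
detectors:
  coral:
    type: edgetpu
    device: usb
    model:
      path: /edgetpu_model.tflite   # placeholder path
  trt:
    type: tensorrt
    device: 0
    model:
      path: /trt_model.engine       # placeholder path
      width: 320
      height: 320
```

Nesting `model` under each detector lets two detector frameworks with incompatible model formats coexist in one config.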
Thank you for such a detailed explanation. I'm going to give it a try here soon (have to disassemble the case to get to the SD card). I'll start from scratch with the Jetson Nano SDK image and try to add from there. Mine is the 4 GB Nano that was out prior to the big chip shortage, and I know it has a buttload of CUDA cores. But I'm not sure about DLA cores; I'll have to check. I don't remember it showing multiple GPUs when running jtop.

Currently I've been having pretty good success just utilizing my 40-core PowerEdge R620 and the TPU attached to a passed-through USB adapter. I'm then leveraging DoubleTake and CompreFace on my Home Assistant install for recognition and final verification. So all I'm looking for Frigate to do is the heavy NVR load, RTMP, go2rtc, or decoding/encoding of stream feeds to Home Assistant and devices, and then utilize the TPU for recognition of person, face, or car and send those to DoubleTake via MQTT for processing.

Ultimately I would love to see an added ability, probably in DoubleTake, to take URLs and "scrape" images with the corresponding information and present that as well. Places such as the sex offender registry, Department of Corrections, and social media should allow us to detect who is at our door with as much information as possible before we open the door. I know networking and IT like the back of my hand, but I wouldn't consider myself a programmer by any means, although I'm trying to learn.
As per some of my feature requests, I GREATLY prefer self-documented files that don't depend on the application OR the DB to function. Having a logical / human readable / searchable file system means I could back up a day's worth of videos without needing Frigate to view them or know when they were made, and if the DB corrupts or something, I can still go back through historic videos with file system search and a video player.
Replied to you in another thread. Please share the outcome! I also have a Nano 4GB sitting waiting for good use, and GPU-accelerated object detection with HA would be the perfect use of it.
* Add ffprobe endpoint
* Get ffprobe for multiple inputs
* Copy ffprobe in output
* Fix bad if statement
* Return full output of ffprobe process
* Return full output of ffprobe process
* Make ffprobe button show dialog with output and option to copy
* Add driver names to consts
* Add driver env var name
* Setup general tracking for GPU stats
* Catch RPi args as well
* Add util to get radeontop results
* Add real amd GPU stats
* Fix missed arg
* pass config
* Use only the values
* Fix vram
* Add nvidia gpu stats
* Use nvidia stats
* Add chart for gpu stats
* Format AMD with space between percent
* Get correct nvidia %
* Start to add support for intel GPU stats
* Block out RPi as util is not currently available
* Formatting
* Fix mypy
* Strip for float conversion
* Strip for float conversion
* Fix percent formatting
* Remove name from gpu map
* Add tests and fix AMD formatting
* Add nvidia gpu stats test
* Formatting
* Add intel_gpu_top for testing
* Formatting
* Handle case where hwaccel is not setup
* Formatting
* Check to remove none
* Don't use set
* Cleanup and fix types
* Handle case where args is list
* Fix mypy
* Cast to str
* Fix type checking
* Return none instead of empty
* Fix organization
* Make keys consistent
* Make gpu match style
* Get support for vainfo
* Add vainfo endpoint
* Set vainfo output in error correctly
* Remove duplicate function
* Fix errors
* Do cpu & gpu work asynchronously
* Fix async
* Fix event loop
* Fix crash
* Fix naming
* Send empty data for gpu if error occurs
* Show error if gpu stats could not be retrieved
* Fix mypy
* Fix test
* Don't use json for vainfo
* Fix cross references
* Strip unicode still
* await vainfo response
* Add gpu deps
* Formatting
* remove comments
* Use empty string
* Add vainfo back in
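NVIDIA GPU stats of the kind this changelog adds are typically gathered by shelling out to `nvidia-smi` and parsing its CSV output. A minimal sketch of that general approach (not Frigate's actual implementation; note the graceful `None` on failure, mirroring the "send empty data for gpu if error occurs" commit above):

```python
import subprocess
from typing import Optional

def parse_nvidia_smi(line: str) -> dict:
    """Parse one line of `--format=csv,noheader,nounits` output."""
    gpu, mem = (v.strip() for v in line.split(",", 1))
    return {"gpu": f"{gpu} %", "mem": f"{mem} %"}

def nvidia_gpu_stats() -> Optional[dict]:
    """Query GPU/memory utilization; return None if nvidia-smi fails,
    so the caller can show an error instead of crashing."""
    try:
        out = subprocess.check_output(
            [
                "nvidia-smi",
                "--query-gpu=utilization.gpu,utilization.memory",
                "--format=csv,noheader,nounits",
            ],
            text=True,
        )
    except (OSError, subprocess.CalledProcessError):
        return None
    return parse_nvidia_smi(out.splitlines()[0])
```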
It supports the same entrypoints, given that tflite is a small cut-out of the big tensorflow picture. This patch was created for downstream usage in nixpkgs, where we don't have the tflite python package, but do have the full tensorflow package.
* Set end time for download event
* Set the value
* docs: adds note about dynamic config
* less technical verbiage
* removes `dynamic configuration` verbiage
* list all replaceable values
I believe that we should use the defined rtsp_cam_sub, not test_cam_sub.
I believe that it should be RTSP there
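For context on the two review notes above: a go2rtc restream config defines streams once by name, and the camera inputs then reference those names over RTSP. A sketch with placeholder credentials and host (the stream names match the ones discussed):

```yaml
go2rtc:
  streams:
    rtsp_cam:
      - rtsp://user:[email protected]:554/main   # placeholder URL
    rtsp_cam_sub:
      - rtsp://user:[email protected]:554/sub    # placeholder URL
cameras:
  back:
    ffmpeg:
      inputs:
        # reference the go2rtc streams by name, not the camera directly
        - path: rtsp://127.0.0.1:8554/rtsp_cam
          roles:
            - record
        - path: rtsp://127.0.0.1:8554/rtsp_cam_sub
          roles:
            - detect
```

The review comments point out that the input path must reference the stream name actually defined under `go2rtc.streams`, and via an RTSP URL.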
* Fix timezone issues with strftime
* Fix timezone adjustment
* Fix bug
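Timezone-safe `strftime` usage generally means attaching an explicit tzinfo before formatting, rather than formatting a naive timestamp. A minimal illustration of the pattern (not the actual patch; the function name is illustrative):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def local_strftime(ts: float, tz_name: str) -> str:
    # Interpret the epoch timestamp as UTC, then convert to the target
    # zone so DST offsets are applied correctly before formatting.
    utc = datetime.fromtimestamp(ts, tz=timezone.utc)
    return utc.astimezone(ZoneInfo(tz_name)).strftime("%Y-%m-%d %H:%M")
```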
* Make note that snapshots are required for Frigate+
* Fix spacing
* Fixed extension of config file: using frigate.yml as the config file for the HA addon gives a validation error; the same contents in frigate.yaml work.
* More accurate description of config file handling.
* Update docs/docs/configuration/index.md

Co-authored-by: Nicolas Mowen <[email protected]>
* Point to specific tag of go2rtc docs
* Point to go2rtc 1.2.0 docs
* Point to go2rtc 1.2.0 docs
* Update camera_specific.md
* Comment out timezone as it should not be set with None if copied
* Use "" for ffmpeg: so it does not appear as a comment
* Add example to timezone setting
I always forget that for the logs to appear there, they need to be sent to stdout, not stderr.
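As a small illustration of the stdout-vs-stderr point: Python's logging `StreamHandler` defaults to stderr, so the stream must be chosen explicitly. The function name here is illustrative:

```python
import logging
import sys

def make_stdout_logger(name: str) -> logging.Logger:
    logger = logging.getLogger(name)
    # StreamHandler() with no argument would write to stderr;
    # pass sys.stdout so log collectors watching stdout see the lines.
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```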
* Update Unifi specific configuration: provided more specific detail on what modifications are required to the Unifi camera rtsps links: change to rtspx to remove authentication, and remove the ?enableSrtp to function over TCP. Provided a sample configuration for a Unifi camera.
* Update docs/docs/configuration/camera_specific.md
* Update docs/docs/configuration/camera_specific.md

Co-authored-by: Nicolas Mowen <[email protected]>
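Following those notes, a UniFi camera entry might look like this sketch (host, port, and STREAMID are placeholders; check the current docs for the authoritative example):

```yaml
cameras:
  unifi_cam:
    ffmpeg:
      inputs:
        # rtspx:// (unauthenticated) instead of rtsps://,
        # with the ?enableSrtp query parameter removed
        - path: rtspx://192.168.1.1:7441/STREAMID
          roles:
            - record
            - detect
```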
For this release, the goals are to focus on stability, diagnostics, and device compatibility. No promises, but these are the guardrails.
Docs preview: https://deploy-preview-4055--frigate-docs.netlify.app/
Stability
Diagnostics/Troubleshooting
Device support
Known issues