single JPG or PNG pictures #87
I ran into the same problems, for example with the v4l2rtspserver and ffmpeg builds that are included in the test folder. I didn't get any of the V4L2 features to work. If you take a look at the original rtspserver sources, you'll soon notice lots of snx_* calls everywhere. Some are just wrappers around a couple of default V4L2 ioctl calls, others do all sorts of low-level stuff. It seems the drivers/API are loosely based on the V4L2 interfaces, but they're not really compliant.
Where is the source for the ported rtspserver? If I can look at that, maybe I can see how they pull out the picture and do the same thing.
Maybe this is the source? https://github.com/haoweilo/RTSP_stream_server
This link seems to have some more specific info around the interface: fritz-smh/yi-hack#118
Found this in the SDK which might do the trick, based on the description of what it does: package\app\example.tgz\example.tar\example\src\ipc_func\snapshot\. Going to try to build it with the SDK.
Ok, have it working. You need to specify -m, and there are a number of other options. For example:
snx_snapshot -m -q 10 -n 1
is JPEG quality 10 and it will take 1 picture when requested. By default it waits for you to touch /tmp/snapshot_en before taking the picture.
What I really want is for it to listen on MQTT for the request and then send the picture off to a remote server, but this is a good start, as it is the key component: capturing a single JPEG on request. I might be able to pull the code from the example and integrate it with fswebcam, but I don't see the point, as the example pretty much does what I want as-is in terms of taking a picture.
In case anybody else wants to build it, these are the steps that I ended up having to do:
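The actual step list didn't survive in this thread; a rough sketch of the general shape, assuming the SDK's arm cross-compiler, with the paths and make invocation below being guesses rather than verified steps (the archive path is from an earlier comment, the install path from a later one):

```sh
# rough sketch, not verified -- exact SDK layout and make invocation are assumptions
tar xzf package/app/example.tgz && tar xf example.tar
cd example/src/ipc_func/snapshot
make CROSS_COMPILE=arm-linux-    # assumes the example Makefile honors CROSS_COMPILE
scp snx_snapshot root@<camera-ip>:/media/mmcblk0p2/data/usr/bin/
```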
It may have failed because I was not sure which of the configurations to choose. I just guessed.
Great job, but does it work while snx_rtsp_server is running?
No, I had to disable snx_rtsp_server, as only one of the two can have the device open at the same time. We might be able to combine the code to allow a snapshot mid-stream, but I'm guessing that won't be trivial. I'm going to do some scripting work, and possibly try to compile in an MQTT client to handle the external request/response flow. Even if you have to switch between RTSP and snapshots, I'm thinking it should be useful. In my use case I want to start sending pictures at 5-second intervals after an alarm is triggered, as opposed to sending a video stream.
A simple script along these lines can trigger capture and scp to the target (I do know that binaries etc. should go somewhere other than .../data/bin, but that is where I'm experimenting):
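The script itself was lost in this thread; a minimal sketch of what it plausibly looked like, assuming the SDK behavior described later (a snapshot is triggered by touching /tmp/snapshot_en, and the timestamped jpg lands in /tmp/www), with host and paths as placeholders:

```sh
#!/bin/sh
# trigger a snapshot and copy the newest jpg to a remote host (a sketch, not the original)
touch /tmp/snapshot_en                      # snx_snapshot takes a picture when this is touched
sleep 2                                     # give it a moment to write the file
LATEST=$(ls -t /tmp/www/*.jpg | head -n 1)  # newest timestamped snapshot
scp "$LATEST" user@xxxxx:/srv/snapshots/    # xxxxx: placeholder host, as in the original post
```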
Have it working with https://github.com/mhdawson/PIWebcamServer.git (SN986 branch) to allow a request for a picture to be made through MQTT; when the request comes in, PIWC triggers "takepicture.sh". In this case I replaced the original content of takepicture.sh with what I showed in the last post (with xxxxx replaced with the proper host, of course). Something similar could be done through a POST to the webserver which is part of the hack, but MQTT is nice because it reaches out to the server, as opposed to you having to allow an incoming HTTP request through your router. @samtap, I'm probably at the point where I should do a bit of cleanup, and then I'd be interested to know if you think support for snx_snapshot can/should be added to the base hacks, and if so, in what form.
FYI you should be able to use v4l2copy and v4l2loopback to allow multiple applications to read from a video device.
@roger- I don't think so; I've looked into the source (briefly), and like all V4L2 code it uses raw read/write/ioctl on the device. Like the ffmpeg or v4l2rtspserver builds (in the test folder), I can't get them to work (SNX requires the middleware API they provide in snx_vc, snx_isp, etc.).
Ah, you might be right. I thought the SDK docs said it used the standard V4L2 interface, but maybe I was mistaken. v4l2wrapper looks like it was factored out of an old version of v4l2rtspserver (which the snx server is based on), so hopefully it won't be too hard :)
Can't you just take a single image using ffmpeg on the RTSP stream?
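For reference, grabbing a single frame from an RTSP stream with ffmpeg generally looks like this (the stream URL/path is a placeholder):

```sh
# pull one frame from the camera's RTSP stream and save it as a jpg
ffmpeg -rtsp_transport tcp -i rtsp://<camera-ip>:554/unicast -vframes 1 snapshot.jpg
```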
I tried that out, but it means you have to have the RTSP stream running all the time, and it seemed a bit flaky. From my experience it did not work nearly as nicely as what I have now with snx_snapshot and the MQTT request.
So, finally I got a JPG stream working! I modified the snx_snapshot example so that it captures an image every second, and also so that it always saves the latest image under the same name in the /tmp/www folder. So, for a quick how-to:
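The how-to steps were lost here; pieced together from later comments in this thread, they plausibly amount to something like the following (the -o flag and the two paths come from those comments, the rest is assumption):

```sh
# copy the modified snx_snapshot binary onto the camera
scp snx_snapshot root@<camera-ip>:/media/mmcblk0p2/data/usr/bin/
# make it executable (easy to forget, as noted below)
chmod +x /media/mmcblk0p2/data/usr/bin/snx_snapshot
# have it write the jpg into the hack's web root so it is reachable over HTTP
snx_snapshot -o /tmp/www &
```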
You can modify the options in the script to fit your needs:
I am impressed by what has been accomplished here! :) My question is this, however: couldn't this customization and creation of binaries be extended to also compile a version of 'mjpg_streamer' for this device? That would allow easy use of both snapshots and video streaming to web pages -- which I currently do with a variety of cams on my server. I have found compiling 'mjpg_streamer' to be easy in the past. For example, this thread contains instructions for getting/compiling 'mjpg-streamer' on the Raspberry Pi (and also my C.H.I.P. computer): https://bbs.nextthing.co/t/mjpeg-streamer-with-compatible-usb-webcam/6505
I think the problem would not be compiling it, but being able to run it without making the camera overheat. My guess is the camera is not powerful enough to run mjpg-streamer directly.
I had not considered the possibility that 'mjpg-streamer' could take more resources than the RTSP one. Hmmm...
MJPEG is an outdated format; however, I have a new RTSP server that allows snapshots to JPG. It will be released soon(ish)!
I don't know enough to debate which is best, but one thing is sure: MJPEG streams are better supported on the client side. Home Assistant (written in Python) cannot handle RTSP well enough. A JPG snapshot function integrated into the RTSP server is great news :-)
Having an RTSP server that can also provide JPG snapshots would be great! However, it would still be nice to have a way to provide a video stream to web pages :) BTW, I did some Googling the other day and got the impression that 'mjpg-streamer' works fine on devices (e.g. routers) that use the same chip as the Xiaomi -- so I think it would be powerful enough if that were an option...
I agree we need a good way to stream for the web. As far as I know you can only play RTSP with a Flash player; instead of MJPEG we could use HLS or WebRTC. The device can manage the H264 frames efficiently by using its dedicated hardware, so packaging them in various forms, e.g. RTSP, HLS chunks, or files written to the SD card, is relatively easy/cheap. Encoding to a different format (converting H264 frames to JPG) in software is very expensive (ffmpeg can do it, but it takes ~15 seconds to make a single JPG frame). Mjpg-streamer could work to package the JPG frames and provide a stream, but it would require the hardware-assisted MJPG encoding (so no simultaneous H264 stream possible) to grab them.
Hey guys, really nice work with the snapshot function. But is there any way to also implement a rotation feature? I need this when the camera is mounted on the ceiling.
Hi @Freshhat, I guess you'd better try to do that in your client. Which client are you using to view your jpg?
Great work everyone, I've been hanging out for a JPEG snapshot for use with HA for quite some time.
@RiRomain
Thanks to the advice above from @RiRomain and @mhdawson, I finally have a time-lapse solution working for me, after experimenting with all sorts of ways to capture time lapse from the RTSP stream -- which gave very disappointing image quality: lots and lots of broken frames.
I found that using @RiRomain's version of snx_snapshot above did not work well for me when using wget to grab snapshot.jpg from the web server, because the files I captured were often invalid. I suspect the snx_snapshot process was probably rewriting them as they were being copied...
Long story short: the original SDK version of snx_snapshot, which looks for /tmp/snapshot_en being touched before taking a snapshot and writes the time/date stamp into the jpg name, enabled me to run the following sh script. It just copies the latest snapshot to a nearby server's CIFS directory on a regular basis using smbclient (also in the SDK). I tried scp and scp-openssh but could not (easily) get them to authenticate... Note that some shell trickery is needed to run smbclient automagically. I resorted to this kludge because there is no smbmount and I failed to get busybox mount to work for remote CIFS. As you can see, I use the -o parameter to snx_snapshot to write the jpg to /tmp/www in the modified rtsp server start script. The updated version, which leaves snapshot.jpg available for the web site, looks like:
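The script body didn't make it into the thread; a sketch of the kind of loop described above, assuming the behaviors just mentioned (touch-triggered capture, timestamped jpgs in /tmp/www), with server, share, and credentials as placeholders:

```sh
#!/bin/sh
# periodically trigger a snapshot, keep a stable copy for the web UI, and push
# the timestamped jpg to a CIFS share via smbclient (a sketch, not the original)
while true; do
  touch /tmp/snapshot_en                           # snx_snapshot shoots when this is touched
  sleep 5
  LATEST=$(ls -t /tmp/www/*.jpg 2>/dev/null | head -n 1)
  if [ -n "$LATEST" ]; then
    cp "$LATEST" /tmp/www/snapshot.jpg             # stable name for the web site
    BASE=$(basename "$LATEST")
    smbclient //server/timelapse -U user%pass -c "put $LATEST $BASE"
  fi
  sleep 55
done
```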
These jpgs can then be joined in the usual way on the server hosting the daily directories of images, using mencoder, e.g.:
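The exact command didn't survive here; a typical mencoder invocation for joining a day's jpgs into a clip looks like this (the fps and codec settings are reasonable choices, not the original ones):

```sh
# join all jpgs in the current daily directory into one mpeg4 time-lapse clip
mencoder "mf://*.jpg" -mf fps=25:type=jpg -ovc lavc -lavcopts vcodec=mpeg4 -o timelapse.avi
```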
"Copy the file snx_snapshot on your cam into /media/mmcblk0p2/data/usr/bin/" And don't forget (like me) chmod+x |
@samtap How's the release of the new rtsp server coming? ;)
@samtap Also interested in the new RTSP server. It looks like there's a new build of snx_rtsp_server at https://github.com/haoweilo/RTSP_stream_server; not sure if that would fix the issue I'm having, but trying to use the included snx_rtsp_server with Milestone XProtect just results in a constant RTSP SETUP, RTSP PLAY, RTSP TEARDOWN loop every few seconds.
It's going very slowly, but holidays are coming up, so hopefully I'll be able to do a new release then.
Can I get a snapshot (JPEG picture) using ffmpeg on the camera?
Hi, thanks a lot @RiRomain, your snapshot app is exactly what I needed to integrate the cam into Home Assistant. I've used it for a few hours and it seemed to work fine, but the update rate slowed down overnight, so I tried to figure out what's causing this behavior. It used to save one frame per second but slowed down to >10 sec per frame (https://owncloud.illuminum.de/index.php/s/s62aOTgGceFhf8m).
The second issue that I've seen was this:
The third thing that I'd like to have is OSD: I know that I can activate the date OSD (see wiki), but I was looking for some extras like cam name and local time. So, long story short: I've modified your work a bit:
Result: the binary is here: https://github.com/KoljaWindeler/XN986/raw/master/snx_sdk/app/example/src/ipc_func/snapshot/snx_snapshot
Hope this helps others. Cheers.
Edit: new parameter,
e.g.: snx_snapshot -m -q 40 -n 1 -W 1920 -H 1080 -a Entrance -z 5 >$LOG 2>&1 &
Short question (likely not in the right place)... is there anyone who would be interested in motion detection in parallel to these jpg frames? I've added motion detection to the snap program and 2 external commands that one can set when starting the command. E.g. ./snx_snapshot -m -q 40 -n 1 -W 1920 -H 1080 -a Entrance -z 5 -b "echo go >> /tmp/log; date >> /tm
This seems to work surprisingly well. Now the next task is to find an MQTT client that will run on the camera and tell Home Assistant that we're seeing some motion. Home Assistant can then grab the frame and send it via e.g. Pushbullet. As of now I'm doing the same thing with camera+esp8266, but that seems very stupid :). So: 1) anyone interested? 2) is there a camera-compatible MQTT client? (@mhdawson) Kolja
FYI I also modified
Hi, that's a fancy approach to control the fps. I've tried your command from https://github.com/samtap/fang-hacks/wiki/WIP:-Motion-detection before, but it seems like it requires access to the video device, which isn't available while I run snx_snapshot. Is that correct? So I can only have one of the two, snapshots or motion detection, at a time? At least that was the reason why I integrated the motion detection into my version of snx_snapshot. Apart from that: thanks for mosquitto 👍
Right. I stopped using
I see, so how are you detecting motion now? As of now you're calling mosquitto_pub on every frame (at 4 fps), so you actually send every frame over wifi, right? I'd guess that this consumes quite a bit of wifi bandwidth, doesn't it? My plan is to send an MQTT message to Home Assistant whenever there is motion, and let Home Assistant grab the frames on the incoming MQTT message. Kolja
PS: love the two-way audio that you've integrated, will test tonight 👍
I retain 4 fps via
I run node-red for automation (my devices are all custom ones) and am very happy with it. Have fun! )
#305 (comment) -- a shell MJPEG streamer inside
@KoljaWindeler I do have an MQTT client running; I'm just back from holidays and catching up, but I'll try to dig up the details in the next few days.
Thanks, is it something in the direction of mosquitto_pub? I'm using the binaries that @dvv posted and they work perfectly. Currently I have added many more options to my branch, e.g.:
This way I send "warn" instantly, "alarm" after 2 frames, and "clear" after 5 no-motion frames. I've placed the camera next to one of my esp8266s, which is connected to a PIR to report motion via MQTT, and logged both (cam+pir). I've seen lots of false-positive triggers from the cam and therefore increased the threshold from 320 to 400 pixels. This seems to work quite well, but I'll leave it running for a week or so before I decide on the final values. Kolja
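As an illustration of how those motion hooks could be wired to the mosquitto tools mentioned in this thread: the -b hook is the one shown earlier, while the flag name for the second ("motion ended") command is a guess, and the broker/topic are placeholders:

```sh
# publish motion state changes to Home Assistant via mosquitto_pub
./snx_snapshot -m -q 40 -n 1 -W 1920 -H 1080 -a Entrance -z 5 \
  -b "mosquitto_pub -h 192.168.1.2 -t cam/entrance/motion -m warn" \
  -e "mosquitto_pub -h 192.168.1.2 -t cam/entrance/motion -m clear"  # -e is hypothetical
```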
@KoljaWindeler looks like I compiled libpaho-mqtt3cs.so and used that. The source is available here: https://github.com/eclipse/paho.mqtt.c and an example of how I used it is: https://github.com/mhdawson/PIWebcamServer/tree/SN986
Hi, just to round this thing up: setting 400 as the motion detection threshold works fairly well for me. My setup does the following things:
Currently I'm calling snx_snapshot like this:
m_cp.sh:
Here are the options (a few are new) for snx_snapshot:
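The option list itself was lost; reconstructed from the invocations quoted in this thread, the flags in play appear to be roughly the following (meanings inferred, not authoritative):

```sh
# flags seen in this thread's snx_snapshot examples (semantics inferred):
#   -m          capture mode used in every example here
#   -q <n>      JPEG quality
#   -n <n>      number of pictures taken per trigger
#   -W/-H <px>  capture width/height
#   -a <name>   OSD camera-name label (KoljaWindeler's branch)
#   -z <n>      present in the examples; meaning not stated in the thread
#   -o <dir>    output directory for the jpg
#   -b "<cmd>"  external command run on motion (KoljaWindeler's branch)
snx_snapshot -m -q 40 -n 1 -W 1920 -H 1080 -a Entrance -z 5 -o /tmp/www
```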
This works as well as a PIR, apart from the effect that it reports motion whenever I turn the lights off in the room (message "no motion" -> lights turn off -> message "warn" -> lights turn on...), but that's something I'll solve in Home Assistant. Kolja
Is it possible to interact with the camera over USB, or just WiFi?
WiFi only; the XiaoFang USB port is for attaching storage, AFAIK.
Is there a way to enable "Y only output", and also, what does the YUV rate divider do? Last one: I can't find the YUV files when I enable YUV output with -r. Where are these saved?
This teardown shows the SoC is the SN98660, and the Sonix site suggests this is a 402 MHz processor. So a better buildscript config might be sn98660_402mhz_sf_defconfig. Hopefully this helps.
I'm trying to change the work mode between RTSP server and snapshot based on an MQTT topic. My steps so far were (maybe they can help someone else ;-)):
vi 30-mqtt-listening
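The body of that script was lost; a minimal sketch of what a mode-switching listener could look like, assuming the mosquitto_sub binary mentioned earlier in the thread, with the broker, topic, and the way the two programs are stopped/started all being placeholders for however your setup manages them:

```sh
#!/bin/sh
# listen on an mqtt topic and switch between rtsp streaming and snapshot mode
# (a sketch under stated assumptions, not the original script)
mosquitto_sub -h 192.168.1.2 -t camera/mode | while read MODE; do
  case "$MODE" in
    snapshot)
      killall snx_rtsp_server 2>/dev/null   # only one process can hold the video device
      snx_snapshot -m -q 40 -n 1 -o /tmp/www &
      ;;
    rtsp)
      killall snx_snapshot 2>/dev/null
      snx_rtsp_server &                     # restart streaming
      ;;
  esac
done
```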
I've read a few comments about people wanting to have the camera capture single JPGs or PNGs and push them to a remote server, which I'd like as well.
I think what we'd want is this project: https://github.com/fsphil/fswebcam. I've used it on a Raspberry Pi and it does the capture that we want. It supports V4L- or V4L2-compatible devices, and from reading through the threads it seems the XiaoFang WiFi camera is V4L2, so I thought I'd give porting it over a try.
I have managed to manually hack the build scripts and the dependency libgd (and libfreetype2, libpng, libjpeg, libz, which libgd requires) to build using the arm cross-compiler on Ubuntu. I now have an executable which runs on the camera (build, scp to camera, run through an ssh shell on the camera).
Unfortunately I can't get it working yet. It gets as far as wanting to take the picture:
NOTE: before you can get this far you'll have to disable the RTSP server through the script management option in the hacks UI.
The docs for V4L2 seem to say that cameras with V4L2 support should handle one of the following I/O methods (read/write, or streaming I/O via memory-mapped, user-pointer, or DMA buffers):
- Read/write requires that the device report the V4L2_CAP_READWRITE capability, which it does not, and my experiments to read even 1 byte from the device with read() always returned -1.
- You are supposed to be able to tell which of the streaming methods are supported by requesting buffers with VIDIOC_REQBUFS for the corresponding streaming type, and it should only succeed if the method is supported. Requesting buffers for the DMA buffer type fails with 'Invalid argument'.
- Requests for buffers for memory mapping and user pointers work, but later calls to enqueue these buffers for capture fail: when I try to query the buffers with VIDIOC_QUERYBUF to configure the memory map, or enqueue with VIDIOC_QBUF, the ioctl just fails, telling me the ioctl is inappropriate for the device.
So I'm stuck, because it looks like none of the methods for getting the picture from the camera are working. On the other hand, I know it's possible, since the RTSP server does it.
My question is: does anybody who has worked with the RTSP server know which method it uses to get picture data from the camera? If so, can you point me to the code in the RTSP server that implements that part? That might help me figure out what I'm doing wrong.