
single JPG or PNG pictures #87

Open · mhdawson opened this issue Apr 4, 2017 · 52 comments

@mhdawson

mhdawson commented Apr 4, 2017

I've read a few comments about people wanting to have the camera capture single JPGs or PNGs and push them to a remote server, which I'd like as well.

I think what we'd want is this project: https://github.com/fsphil/fswebcam. I've used it on a Raspberry Pi and it does the capture that we want. It supports V4L or V4L2 compatible devices, and from reading through the threads it seems the XiaoFang WiFi Camera is V4L2, so I thought I'd give porting it over a try.

I have managed to manually hack the build scripts and the dependency libgd (and libfreetype2, libpng, libjpeg, libz, which libgd requires) to build using the arm cross compiler on Ubuntu. I now have an executable which runs on the camera (build, scp to camera, run through an ssh shell on the camera).

Unfortunately I can't get it working yet. It gets as far as wanting to take the picture:

/media/mmcblk0p2/data/bin # ./fswebcam5 --verbose --device /dev/video0
main,1609: gd has no fontconfig support
--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.
src_v4l2_get_capability,87: /dev/video0 information:
src_v4l2_get_capability,88: cap.driver: "snx_isp"
src_v4l2_get_capability,89: cap.card: "isp Camera"
src_v4l2_get_capability,90: cap.bus_info: ""
src_v4l2_get_capability,91: cap.capabilities=0x04000001
src_v4l2_get_capability,92: - VIDEO_CAPTURE
src_v4l2_get_capability,103: - STREAMING
No input was specified, using the first.
src_v4l2_set_input,181: /dev/video0: Input 0 information:
src_v4l2_set_input,182: name = "Camera"
src_v4l2_set_input,183: type = 00000002
src_v4l2_set_input,185: - CAMERA
src_v4l2_set_input,186: audioset = 00000000
src_v4l2_set_input,187: tuner = 00000000
src_v4l2_set_input,188: status = 00000000
src_v4l2_set_pix_format,541: Device offers the following V4L2 pixel formats:
src_v4l2_set_pix_format,554: 0: [0x30323453] 'S420' (S420)
src_v4l2_set_pix_format,554: 1: [0x30314742] 'BG10' (SBGGR10)
src_v4l2_set_pix_format,554: 2: [0x31384142] 'BA81' (SBGGR8)
Using palette BAYER
src_v4l2_set_mmap,693: mmap information:
src_v4l2_set_mmap,694: frames=4
--- Capturing frame...
VIDIOC_DQBUF: Inappropriate ioctl for device
No frames captured.

NOTE: before you can get this far you'll have to disable the RTSP server through the script management option in the hacks UI.

The docs for V4L2 seem to say that cameras with V4L2 support should handle one of the following:

  • The classic I/O (Read/Write())
  • Streaming I/O (Memory Mapping)
  • Streaming I/O (User Pointers)
  • Streaming I/O (DMA buffer importing)

Read/Write() requires that the device report the V4L2_CAP_READWRITE capability, which it does not, and my experiments to read even 1 byte from the device with read() always returned -1.

You are supposed to be able to tell which of the streaming methods are supported by requesting buffers with VIDIOC_REQBUFS for the corresponding streaming type; the request should only succeed if the method is supported. Requesting buffers for the DMA buffer type fails with 'Invalid argument'.

Requests for buffers for Memory Mapping and User Pointers work, but later calls to enqueue these buffers for capture fail. When I try to query the buffers with VIDIOC_QUERYBUF to configure for memory map, or enqueue with VIDIOC_QBUF, the ioctl just fails, telling me the ioctl is inappropriate for the device.
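
For anyone trying to reproduce this, the probe boils down to roughly the following (a minimal C sketch; error handling trimmed and /dev/video0 hard-coded):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Per the V4L2 spec, VIDIOC_REQBUFS should fail with EINVAL
   when the requested streaming method is unsupported. */
static int supports(int fd, enum v4l2_memory memory)
{
    struct v4l2_requestbuffers req;
    memset(&req, 0, sizeof(req));
    req.count  = 4;
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = memory;
    return ioctl(fd, VIDIOC_REQBUFS, &req) == 0;
}

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }
    printf("mmap=%d userptr=%d dmabuf=%d\n",
           supports(fd, V4L2_MEMORY_MMAP),
           supports(fd, V4L2_MEMORY_USERPTR),
           supports(fd, V4L2_MEMORY_DMABUF));
    return 0;
}

On this camera the mmap and userptr requests report success even though the later VIDIOC_QUERYBUF/VIDIOC_QBUF calls fail, which is exactly what makes the driver look non-compliant.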

So I'm stuck, because it looks like none of the methods for getting the picture from the camera are working. On the other hand, I know it's possible since the RTSP server does it.

My question is: does anybody who has worked with the RTSP server know which method it uses to get picture data from the camera, and if so, can they point me to the code in the RTSP server that implements that part? That might help me figure out what I'm doing wrong.

@samtap
Owner

samtap commented Apr 4, 2017

I ran into the same problems, for example with v4l2rtspserver and ffmpeg builds that are included in the test folder. I didn't get any of the V4L2 features to work.

If you take a look at the original rtspserver sources, you'll soon notice lots of snx_* calls everywhere. Some are just wrappers around a couple of default V4L2 ioctl calls; others do all sorts of low-level stuff. It seems the drivers/API are loosely based on V4L2 interfaces, but they're not really compliant.

@mhdawson
Author

mhdawson commented Apr 6, 2017

Where is the source for the ported rtspserver? If I can look at that maybe I can see how they are pulling out the picture and do the same thing.

@mhdawson
Author

Maybe this is the source? https://github.com/haoweilo/RTSP_stream_server

@mhdawson
Author

This link seems to have some more specific info around the interface: fritz-smh/yi-hack#118

@mhdawson
Author

Found this in the SDK which might do the trick, based on the description of what it does: package\app\example.tgz\example.tar\example\src\ipc_func\snapshot\. Going to try to see if I can build it with the SDK.

@mhdawson
Author

mhdawson commented Apr 16, 2017

OK, I have it working. You need to specify -m, and there are a number of other options. For example:

snx_snapshot -m -q 10 -n 1

uses JPEG quality 10 and takes 1 picture when requested.

By default it waits for you to touch /tmp/snapshot_en. When you do that, it grabs the requested picture(s), writes them to /tmp, and then adds the name of each snapshot to /tmp/snaplist.txt.
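
I haven't traced the internals, but the daemon side of that trigger protocol presumably looks something like this (a hypothetical C sketch only; the snapshot example source in the SDK is the real reference, and the file name below is made up):

#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>

/* Block until someone touches the trigger file, then consume it. */
static void wait_for_trigger(void)
{
    struct stat st;
    while (stat("/tmp/snapshot_en", &st) != 0)
        usleep(100000);                 /* poll every 100 ms */
    unlink("/tmp/snapshot_en");
}

/* Append the finished snapshot's name for pickup by scripts. */
static void log_snapshot(const char *path)
{
    FILE *f = fopen("/tmp/snaplist.txt", "a");
    if (f) { fprintf(f, "%s\n", path); fclose(f); }
}

int main(void)
{
    wait_for_trigger();
    /* ...capture and JPEG encoding would happen here... */
    log_snapshot("/tmp/2017_04_16_120000.jpg");
    return 0;
}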

@mhdawson
Author

What I really want is for it to listen on MQTT for the request and then send the picture off to a remote server, but this is a good start, as it is the key component: capturing a single JPEG on request. I might be able to pull the code from the example and integrate it with fswebcam, but I don't see the point, as the example pretty much does what I want as-is in terms of taking a picture.

@mhdawson
Author

mhdawson commented Apr 16, 2017

In case anybody else wants to build it, these are the steps that I ended up having to do:

  • install lubuntu 16.10

  • add 32 bit support and some key libraries as the SDK has 32 bit binaries

sudo dpkg --add-architecture i386
sudo apt-get update
sudo apt-get install libc6:i386 libncurses5:i386 libstdc++6:i386
  • add additional required 32 bit libraries
sudo apt-get install zlib1g:i386
  • Fix up /bin/sh symbolic link so it points to /bin/bash instead of /bin/dash

  • get the SDK from the location shown in fritz-smh/yi-hack/issues/118

  • unzip the SDK files

  • run sdk.unpack to expand the SDK

  • generate auto-configured files. This failed with an error but seemed to have gotten far
    enough along for me to work around it and get snx_snapshot to compile:

cd buildscripts
make sn98600_360mhz_sf_defconfig

It may have failed because I was not sure which configuration to choose; I just guessed
at which one to use.

  • move the existing shared libraries that are needed into the right directory (needed because the
    generation of the auto-generated files did not fully work, so other components were
    not properly installed)
cp   /home/user1/SN986/snx_sdk/middleware/video/middleware/lib/*  /home/user1/SN986/snx_sdk/middleware/_install/lib
cp /home/user1/SN986/snx_sdk/middleware/rate_ctl/middleware/lib/* /home/user1/SN986/snx_sdk/middleware/_install/lib
  • add the cross compiler toolchain to the path
export PATH=/home/user1/SN986/snx_sdk/toolchain/crosstool-4.5.2/bin:$PATH
  • add the required headers to the include path
export C_INCLUDE_PATH=/home/user1/SN986/snx_sdk/middleware/rate_ctl/middleware/include/snx_rc:/home/user1/SN986/snx_sdk/middleware/video/middleware/include/snx_vc
  • Make the binary
cd /home/user1/SN986/snx_sdk/app/example/src/ipc_func/snapshot
make 

@samtap
Owner

samtap commented Apr 17, 2017

Great job, but does it work while snx_rtsp_server is running?

@mhdawson
Author

mhdawson commented Apr 17, 2017

No, I had to disable snx_rtsp_server, as only one of the two can have the device open at the same time. We might be able to combine the code to allow a snapshot mid-stream, but I'm guessing that won't be trivial.

I'm going to do some scripting work, and possibly try to compile in an MQTT client to handle the external request/response flow. Even if you have to switch between RTSP and snapshots, I'm thinking it should be useful. In my use case I want to start sending pictures at 5 second intervals after an alarm is triggered, as opposed to sending a video stream.

@mhdawson
Author

A simple script along these lines can trigger a capture and scp it to a target (I do know that binaries etc. should go somewhere other than .../data/bin, but that is where I'm experimenting):

#!/bin/sh
export HOME=/media/mmcblk0p2/data/bin
# trigger snx_snapshot and wait for it to announce the new file
rm /tmp/snaplist.txt
touch /tmp/snapshot_en
while [ ! -f /tmp/snaplist.txt ]
  do
     usleep 100000
  done
NEWFILE=`cat /tmp/snaplist.txt`
mv $NEWFILE $1

scp -i ./bkey.txt -P 20022 $1 ubuntu@XXXXXX:pictures/$1
rm $1

@mhdawson
Author

Have it working with https://github.com/mhdawson/PIWebcamServer.git (SN986 branch) to allow a request for a picture to be made through MQTT; when the request comes in, PIWebcamServer triggers "takepicture.sh". In this case I replaced the original content of takepicture.sh with what I showed in the last post (with XXXXXX replaced with the proper host, of course).

Something similar could be done through a post to the webserver which is part of the hack, but MQTT is nice because the camera reaches out to the server, as opposed to you having to allow an incoming HTTP request through your router.

@samtap, I'm probably at the point where I should do a bit of cleanup, and then I'd be interested to know if you think support for snx_snapshot can/should be added to the base hacks, and if so, in what form.

@roger-

roger- commented Apr 19, 2017

FYI you should be able to use v4l2copy and v4l2loopback to allow multiple applications to read from a video device.

@samtap
Owner

samtap commented Apr 19, 2017

@roger- I don't think so. I've looked into the source (briefly) and, like all v4l2 code, it uses raw read/write/ioctl on the device. Like the ffmpeg or v4l2rtspserver builds (in the test folder), I can't get them to work (SNX requires the middleware API they provide in snx_vc, snx_isp etc).
However, I'm looking into modifying v4l2wrapper (used by v4l2copy, v4l2rtspserver). If it is possible to create an SNX version of the V4l2Device class, this stuff would likely work as-is and we'd have a pretty elegant method of sharing the device for different things like streaming, recording and even running iCamera.

@roger-

roger- commented Apr 19, 2017

Ah, you might be right. I thought the SDK docs said it used the standard V4L2 interface but maybe I was mistaken.

v4l2wrapper looks like it was factored out of an old version of v4l2rtspserver (which the snx server is based on), so hopefully it won't be too hard :)

@thanme

thanme commented May 12, 2017

Can't you just take a single image using ffmpeg on the RTSP stream?

@mhdawson
Author

I tried that out, but it means you have to have the RTSP stream running all the time, and it seemed a bit flaky. From my experience it did not work nearly as nicely as what I have now with snx_snapshot and the MQTT request.

@RiRomain

RiRomain commented Jul 26, 2017

So, finally I got a JPG stream working!

I modified the snx_snapshot example so that it captures an image every second, and also so that it always saves the latest image under the same name in the /tmp/www folder.
The latest JPG can then be accessed at http://$CAM_IP/snapshot.jpg

So, for a quick how-to:

  1. Download snx_snapshot here: https://drive.google.com/file/d/0BwhTA0oE8QeXZTU5bGFrWkZXcXc/view?usp=sharing
  2. Copy the file snx_snapshot on your cam into /media/mmcblk0p2/data/usr/bin/
  3. Make snx_snapshot executable: "chmod +x /media/mmcblk0p2/data/usr/bin/snx_snapshot"
  4. In /media/mmcblk0p2/data/etc/scripts/20-rtsp-server replace the line:
    "snx_rtsp_server -W 1920 -H 1080 -Q 10 -b 4096 -a >$LOG 2>&1 &"
    with this one:
    "snx_snapshot -m -q 40 -n 1 -W 1920 -H 1080 > $LOG 2>&1 &"
  5. and reboot your webcam (reboot now)
  6. You can now access the image at http://$CAM_IP/snapshot.jpg

You can modify the options in the script to fit your needs:
-W Capture Width (Default is 1280)
-H Capture Height (Default is 720)
-q JPEG QP (Default is 60)

@darethehair

darethehair commented Aug 8, 2017

I am impressed by what has been accomplished here! :)

My question is this, however: couldn't this customization and creation of binaries be extended to also compile a version of 'mjpg_streamer' for this device? That would allow easy use of both snapshots and video streaming to web pages, which I currently do with a variety of cams on my server. I have found compiling 'mjpg_streamer' to be easy in the past.

For example, this thread contains instructions for getting/compiling 'mjpg-streamer' on Raspberry Pi (and also my C.H.I.P. computer):

https://bbs.nextthing.co/t/mjpeg-streamer-with-compatible-usb-webcam/6505

@RiRomain

RiRomain commented Aug 9, 2017

I think the problem would not be compiling it, but being able to run it without making the camera overheat. My guess is that the camera is not powerful enough to run mjpg-streamer directly.

@darethehair

I had not considered the possibility that 'mjpg-streamer' could take more resources than the RTSP one. Hmmm...

@samtap
Owner

samtap commented Aug 11, 2017

MJPEG is an outdated format; however, I have a new rtsp server that allows snapshots to JPG. It will be released soon(ish)!

@RiRomain

I don't know enough to debate which is best, but one thing is sure: MJPEG streams are better supported on the client side. Home Assistant (written in Python) cannot handle RTSP well enough:
https://community.home-assistant.io/t/rtsp-stream-support-for-camera/586/81

Integrating a JPG snapshot function into the RTSP server is great news :-)

@darethehair

Having an RTSP server that can also provide JPG snapshots would be great! However, it would still be nice to have a way to provide a video stream to web pages :) BTW, I did some Googling the other day and got the impression that 'mjpg-streamer' works fine on devices (e.g. routers) that use the same chip as the Xiaomi, so I think it would be powerful enough if that were an option...

@samtap
Owner

samtap commented Aug 11, 2017

I agree we need a good way to stream for the web. As far as I know you can only play RTSP with a Flash player; instead of MJPEG we could use HLS or WebRTC.

The device can manage the H264 frames efficiently using its dedicated hardware, so packaging them in various formats, e.g. RTSP, HLS chunks, or files on the sd-card, is relatively easy/cheap. Encoding to a different format (converting H264 frames to JPG) in software is very expensive (ffmpeg can do it, but it takes ~15 seconds to make a single JPG frame). Mjpg-streamer could work to package the JPG frames and provide a stream, but it would require the hardware-assisted MJPG encoding (so no simultaneous H264 stream possible) to grab them.

@Freshhat

Hey guys, really nice work with the snapshot function. But is there any way to also implement a rotation feature? I need it when the camera is mounted on the ceiling.

@RiRomain

Hi @Freshhat, I guess you'd better try to do that in your client. Which client are you using to view your JPG?
Of course it's also possible in the snapshot function, but it would need to be implemented, and I know next to nothing about how to do that... and I'm not sure someone else with the knowledge will invest the time to implement this feature.

@halfluck

Great work everyone, I've been hanging out for a JPEG snapshot for use with HA for quite some time.

@Orbit4l

Orbit4l commented Aug 29, 2017

@RiRomain
Great work on snx_snapshot.
Is there any way to change the frequency of taking a snapshot from 1 sec to any value (having it as a new option)?

@fubar2

fubar2 commented Sep 10, 2017

Thanks to the advice above from @RiRomain and @mhdawson, I finally have a time lapse solution working for me, after experimenting with all sorts of ways to capture time lapse from the RTSP stream, which gave very disappointing image quality: lots and lots of broken frames.

I found that using @RiRomain's version of snx_snapshot above did not work well for me when using wget to grab snapshot.jpg from the web server, because the files I captured were often invalid. I suspect the snx_snapshot process was rewriting them as they were being copied.

Long story short: the original SDK version of snx_snapshot, which looks for /tmp/snapshot_en being touched before taking a snapshot and writes the time/date stamp into the JPG name, enabled me to run the following sh script. It just copies the latest snapshot to a nearby server's cifs directory on a regular basis using smbclient (also in the SDK). I tried scp and scp-openssh but could not (easily) get them to authenticate. Note that some shell trickery is needed to run smbclient automagically; I resorted to this kludge because there is no smbmount and I failed to get busybox mount to work for remote cifs. As you can see, in the modified rtsp server start script I use the -o parameter to snx_snapshot to write the JPGs to /tmp/www.

The updated version, which leaves snapshot.jpg available for the web server, looks like this:

#!/media/mmcblk0p2/data/bin/ash
# ross lazarus me fecit Sept 2017
# fanghack script to send an image to 
# remote NAS for later assembly into a
# time lapse movie
# renames latest so can be viewed at fanghacks web server as (in my case) http://192.168.1.107/snapshot.jpg
SNAPINT=87 # plus POSTINT = 90 secs
POSTINT=3
KEEPME="/tmp/www/snapshot.jpg"
cd /tmp
while true
do 
 rm /tmp/www/*.jpg
 rm /tmp/www/snaplist.txt
 touch /tmp/snapshot_en 
 # trigger snx_snapshot process to make a new snapshot
 sleep $POSTINT
# make sure the snapshot is done
 DIRE=`date +"tent_%Y_%m_%d"`
 smbclient  \\\\192.168.1.9\\private [password here] -U guest <<ENDIT
prompt
lcd www
cd tent
mkdir $DIRE
cd $DIRE
mput *.jpg
quit
ENDIT
 # rename latest snapshot
 fn=$(ls -c /tmp/www/*.jpg | head -n1)
 mv -f -- "$fn" $KEEPME
 chmod ugo+rx $KEEPME
 # fix permissions so can be viewed
 sleep $SNAPINT
done

These JPGs can then be joined in the usual way on the server hosting the daily directories of images using mencoder, e.g.:

#!/bin/bash
# join all frames
mencoder mf://*/*.jpg -mf fps=20:type=jpg:h=720:w=1280 -ovc lavc -lavcopts   vcodec=mpeg4:mbd=2:trell -oac copy -o test.avi 

@yapa69

yapa69 commented Oct 21, 2017

"Copy the file snx_snapshot on your cam into /media/mmcblk0p2/data/usr/bin/"

And don't forget (like me) to chmod +x it.

@mikkel75

@samtap How's the release of the new rtsp server coming? ;)

@Mazo

Mazo commented Dec 7, 2017

@samtap Also interested in the new RTSP server. It looks like there's a new build of snx_rtsp_server at https://github.com/haoweilo/RTSP_stream_server; not sure if that would fix the issue I'm having, but trying to use the included snx_rtsp_server with Milestone XProtect just results in a constant RTSP SETUP, RTSP PLAY, RTSP TEARDOWN loop every few seconds.

@samtap
Owner

samtap commented Dec 7, 2017

It's going very slowly, but holidays are coming up, so hopefully I'll be able to do a new release then.
@Mazo That sounds like a client issue. The new server will still be based on live555, so if your client doesn't cooperate with that, there's not much I can do.

@utya1988

Can I get a snapshot (JPEG picture) using ffmpeg on the camera?

@KoljaWindeler

KoljaWindeler commented Jan 23, 2018

Hi, thanks a lot @RiRomain, your snapshot app is exactly what I needed to integrate the cam into Home Assistant. I used it for a few hours and it seemed to work fine, but the update rate slowed down overnight, so I tried to figure out what was causing this behavior.

It used to save one frame per second but slowed down to >10 sec per frame (https://owncloud.illuminum.de/index.php/s/s62aOTgGceFhf8m).
I found that /tmp/snaplist.txt was huge, with one line in the file per saved frame (all showing the same filename). This file was dumped to the console (I saw thousands of lines when I connected via UART), so I wrote a little service to delete the file every five seconds, and then it ran stable and fast for some days.

The second issue that I've seen was this:
https://owncloud.illuminum.de/index.php/s/Oeg5NsgqnsvkP6O
https://owncloud.illuminum.de/index.php/s/babgecMguTwskK7
Sorry for scaling them down, but I guess you can see that the frames contain some artifacts. I suppose this is due to the fact that I requested the image at just the moment it was being overwritten. I admit this happens very rarely, but it was kind of annoying when I "streamed" the snaps.

The third thing that I'd like to have is OSD. I know that I can activate the date OSD (see wiki), but I was looking for some extras like cam name and localtime.

So, long story short, I've modified your work a bit:

  1. removed the system("cat /tmp/snaplist.txt") call to stop spamming the console
    KoljaWindeler/XN986@8551616#diff-c55cba8a8a5802c2e4a91f99ddf93155R468

  2. write to a temp file and rename it once the file is written completely (see the sketch after this list)
    KoljaWindeler/XN986@8551616#diff-c55cba8a8a5802c2e4a91f99ddf93155R458

  3. add overlay with custom text
    KoljaWindeler/XN986@7c3cea1
    stolen from some other code
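
The fix in 2. is just the classic write-then-rename trick. This is not the actual diff, only the shape of it (paths assumed):

#include <stdio.h>

/* Write the frame to a temp file first; rename() replaces the old file
   atomically, so readers of snapshot.jpg never see a half-written image. */
static int save_frame(const unsigned char *buf, size_t len)
{
    FILE *f = fopen("/tmp/www/snapshot.jpg.tmp", "wb");
    if (!f) return -1;
    fwrite(buf, 1, len, f);
    fclose(f);
    return rename("/tmp/www/snapshot.jpg.tmp", "/tmp/www/snapshot.jpg");
}

int main(void)
{
    const unsigned char dummy[] = { 0xff, 0xd8, 0xff, 0xd9 };  /* bare SOI/EOI marker pair */
    return save_frame(dummy, sizeof(dummy));
}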

result:
https://owncloud.illuminum.de/index.php/s/uAYmKiH9iNGgVtI

and the binary is here https://github.com/KoljaWindeler/XN986/raw/master/snx_sdk/app/example/src/ipc_func/snapshot/snx_snapshot

Hope this helps others.

Cheers,
Kolja

edit: new parameters

-a			add cam name to OSD
-e			overlay on/off (1/0) (default is 1)
-x			overlay x-position (default is -1 = center)
-z			overlay y-position (default is 0)

e.g.: snx_snapshot -m -q 40 -n 1 -W 1920 -H 1080 -a Entrance -z 5 >$LOG 2>&1 &

@KoljaWindeler

KoljaWindeler commented Jan 23, 2018

Short question (likely not in the right place): is there anyone who would be interested in motion detection in parallel to these JPG frames? I've added motion detection to the snap program and 2 external commands that one can set when starting the command.

E.g.: ./snx_snapshot -m -q 40 -n 1 -W 1920 -H 1080 -a Entrance -z 5 -b "echo go >> /tmp/log; date >> /tmp/log" -c "echo stop >> /tmp/log; date >> /tmp/log"

This seems to work surprisingly well. Now the next task is to find an MQTT client that will run on the camera and tell Home Assistant that we're seeing some motion. Home Assistant can then grab the frame and send it via e.g. Pushbullet. As of now I'm doing the same thing with camera+esp8266, but that seems very stupid :).

So: 1) anyone interested? 2) is there a camera-compatible MQTT client? (@mhdawson)

Kolja

@dvv

dvv commented Jan 24, 2018

  1. https://github.com/samtap/fang-hacks/wiki/WIP:-Motion-detection
  2. mosquitto_pub from "MQTT: mosquitto client binaries" (shadow-1/yi-hack-v3#130) should fit

FYI I also modified the snx_snapshot source so that it calls ./send.sh FILENAME on every picture taken. This allows one both to i) publish snapshots and ii) control the snapshot rate. E.g.:

#!/bin/sh
# send.sh: called by snx_snapshot with the frame filename as $1
mosquitto_pub ... -t ... -f "$1"
rm -f "$1"
exec usleep 250000 # allow circa 4 FPS

@KoljaWindeler

Hi, that's a fancy approach to control the fps.

I've tried your command from https://github.com/samtap/fang-hacks/wiki/WIP:-Motion-detection before, but it seems like it requires access to the video device, which isn't available while I run snx_snapshot. Is that correct? So I can only have one of the two, snapshots or motion detection, at a time? At least that was the reason why I integrated the motion detection into my version of snx_snapshot.

Apart from that: thanks for mosquitto 👍
Kolja

@dvv

dvv commented Jan 24, 2018

Right. I stopped using snx_isp_md because of this issue, and because it emits false positives in twilight when the picture becomes noisy. The solution would be to increase the threshold, but I'd rather use my own motion detector.
FYI, my version of snx_snapshot is in #305.

@KoljaWindeler

I see, so how are you detecting motion now?
Are you using the built-in snx motion detection in your snx_snapshot like I do at the moment, or have you integrated your motion detection elsewhere: in your own plain C code that runs on the camera, or even further out, like on a different PC?

As of now you're calling mosquitto_pub on every frame (at 4 fps), so you actually send every frame over wifi, right? I'd guess that consumes quite a bit of wifi bandwidth, doesn't it?

My plan is to send an MQTT message to Home Assistant whenever there is motion, and let Home Assistant grab the frames on the incoming MQTT message.

Kolja

PS: love the two-way audio that you've integrated. Will test tonight 👍

@dvv

dvv commented Jan 24, 2018

I retain 4 fps via mosquitto_pub and relay motion detection to a custom Python OpenCV app (the sources are hairy, so the code is private). I see no significant wifi load at all.

I run node-red for automation (my devices are all custom ones) and am very happy with it.

Have fun! )

@dvv

dvv commented Jan 29, 2018

#305 (comment) -- a shell MJPEG streamer inside

@mhdawson
Author

@KoljaWindeler I do have an MQTT client running. I'm just back from holidays and catching up, but I'll try to dig up the details in the next few days.

@KoljaWindeler

Thanks, is it something in the direction of mosquitto_pub? I'm using the binaries that @dvv posted and they work perfectly.

Currently I have added many more options to my branch, e.g.:
On the first motion frame, execute command A; if there are more than N frames with motion, execute command B. As soon as M frames without motion are captured, run command C.

This way I send "warn" instantly, "alarm" after 2 frames, and "clear" after 5 no-motion frames. I've placed the camera next to one of my esp8266s that is connected to a PIR to report motion via MQTT, and I log both (cam+pir). I've seen lots of false-positive triggers from the cam and therefore increased the threshold from 320 to 400 pixels. This seems to work quite well, but I'll leave it running for a week or so before I decide on the final values.
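
For anyone reading along, the warn/alarm/clear logic is just a small per-frame state machine. A hypothetical sketch, not my actual code (thresholds mirror the -k/-j options):

#include <stdio.h>

#define ALARM_FRAMES 2   /* -k: motion frames in a row before "alarm" */
#define CLEAR_FRAMES 5   /* -j: quiet frames in a row before "off" */

static int motion_run, quiet_run;

/* Called once per captured frame with the detector's verdict;
   the puts() calls stand in for the -b/-l/-c commands. */
static void on_frame(int motion)
{
    if (motion) {
        quiet_run = 0;
        if (++motion_run == 1)
            puts("warn");
        else if (motion_run == ALARM_FRAMES)
            puts("alarm");
    } else if (motion_run && ++quiet_run == CLEAR_FRAMES) {
        puts("off");
        motion_run = quiet_run = 0;
    }
}

int main(void)
{
    int demo[] = { 1, 1, 1, 0, 0, 0, 0, 0 };   /* prints warn, alarm, off */
    for (int i = 0; i < 8; i++)
        on_frame(demo[i]);
    return 0;
}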

Kolja

@mhdawson
Author

mhdawson commented Feb 5, 2018

@KoljaWindeler looks like I compiled libpaho-mqtt3cs.so and used that.

Source is available here:

https://github.com/eclipse/paho.mqtt.c

And example of how I used it is: https://github.com/mhdawson/PIWebcamServer/tree/SN986
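
For a minimal starting point with that library, a single publish with the paho synchronous C API looks roughly like this (a sketch only; the broker address, client id and topic below are placeholders, not what PIWebcamServer uses):

#include <stdio.h>
#include "MQTTClient.h"

int main(void)
{
    MQTTClient client;
    MQTTClient_connectOptions opts = MQTTClient_connectOptions_initializer;
    MQTTClient_message msg = MQTTClient_message_initializer;
    MQTTClient_deliveryToken token;

    MQTTClient_create(&client, "tcp://192.168.1.2:1883", "fang-cam",
                      MQTTCLIENT_PERSISTENCE_NONE, NULL);
    if (MQTTClient_connect(client, &opts) != MQTTCLIENT_SUCCESS) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }
    msg.payload    = "picture taken";
    msg.payloadlen = 13;
    msg.qos        = 1;                 /* at-least-once delivery */
    MQTTClient_publishMessage(client, "camera/events", &msg, &token);
    MQTTClient_waitForCompletion(client, token, 1000L);
    MQTTClient_disconnect(client, 1000);
    MQTTClient_destroy(&client);
    return 0;
}

Link against the libpaho-mqtt3cs.so mentioned above (the 's' build is the SSL-enabled variant).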

@KoljaWindeler

KoljaWindeler commented Feb 10, 2018

Hi, just to round this thing up: setting 400 as the motion detection threshold works fairly well for me.

My setup is doing the following things:

  1. Send an MQTT message as soon as there is motion: "warn"
  2. Send an MQTT message when there is motion in 2 frames in a row: "alarm"
  3. Send an MQTT message when there is no motion for 5 frames: "off"
  4. Store the last 1000 frames with motion on the local SD card
  5. Add an OSD with cam name and localtime, plus "M" if there is motion

Currently I'm calling snx_snapshot like this:

snx_snapshot -m -q 40 -n 1 -T 400 -W 1920 -H 1080 -N CAM3 -Y 5 -l "mosquitto_pub -h 192.168.2.84 -u MQTT_USER -P MQTT_PASSWORD -t cam3/r/motion -m ON" -b "mosquitto_pub -h 192.168.2.84 -u MQTT_USER -P MQTT_PASSWORD -t cam3/r/motion -m WARN" -c "mosquitto_pub -h 192.168.2.84 -u MQTT_USER -P MQTT_PASSWORD -t cam3/r/motion -m OFF" -M "/media/mmcblk0p2/data/opt/m_cp.sh" >$LOG 2>&1 &

m_cp.sh:

#!/bin/sh
DIR=/media/mmcblk0p2/data/opt/snaps;
mkdir -p $DIR >/dev/null 2>&1;
cd $DIR;
# keep only the 1000 newest snapshots
ls -A1t | sed -e '1,1000d' | xargs rm >/dev/null 2>&1;
cp /tmp/www/snapshot.jpg $(date +"%Y%m%d_%H%M%S").jpg

Here are the options (a few are new) for snx_snapshot:


Usage: snx_snapshot [options]
Version: V0.1.2
Options:
	-h		Print this message
	-m		m2m path enable (default is Capture Path)
	-o		outputPath (default is /tmp)
	-i		isp fps (Only in M2M path, default is 30)
	-f		codec fps (default is 30 fps, NOT more than M2M path)
	-W		Capture Width (Default is 1280, depends on M2M path)
	-H		Capture Height (Default is 720, depends on M2M path)
	-q		JPEG QP (Default is 60)
	-n		Num of Frames to capture (Default is 3)
	-s		scaling mode (default is 1,  1: 1, 2: 1/2, 4: 1/4 )
	-r		YUV data output enable
	-v		YUV capture rate divider (default is 5)
	-T		Motion detection threshold (default is 320)
	-j		Num of no motion frames before calling motion end command (default is 5)
	-k		Num of motion frames before calling motion start command (default is 2)
	-t		Command to execute on each frame (default is none)
	-b		Command to execute on motion instantly (default is none)
	-M		Command to execute on each motion frames (default is none)
	-l		Command to execute after '-k' motion frames (default is none)
	-c		Command to execute after '-j' no motion frames (default is none)
	-N		Cam name for OSD
	-e		Overlay on/off (1/0) (default is 1)
	-X		Overlay x-position (default is -1 = center)
	-Y		Overlay y-position (default is 0)
	-C		Overlay color (default is 0x00FF00)
	-u		Delay between snapshots [ms] (default is 1000)
	M2M Example:   snx_snapshot -m -i 30 -f 30 -q 120 /dev/video1
	capture Example:   snx_snapshot -n 1 -q 120 /dev/video1

This works as well as a PIR, apart from the effect that it reports motion whenever I turn the lights off in the room (message "no motion" -> lights turn off -> message "warn" -> lights turn on ...), but that's something that I'll solve in Home Assistant.

https://github.com/KoljaWindeler/XN986/blob/master/snx_sdk/app/example/src/ipc_func/snapshot/snx_snapshot

Kolja

@eberkund
Copy link

Is it possible to interact with the camera over USB, or just WiFi?

@fubar2

fubar2 commented Apr 14, 2018

Is it possible to interact with the camera over USB, or just WiFi?

WiFi only - the XiaoFang USB port is for attaching storage AFAIK.

@russellhq

Is there a way to enable "Y only output", and what does the YUV rate divider do? One last thing: I can't find the YUV files when I enable YUV output with -r. Where are they saved?

@russellhq

This teardown shows the SoC is the SN98660, and the SONiX site suggests this is a 402MHz processor, so a better buildscript config might be sn98660_402mhz_sf_defconfig.
SONiX site for SN98660 http://www.sonix.com.tw/article-en-958-25165
Teardown https://www.unifore.net/product-highlights/disassemble-cheapest-xiaomi-isc5-1080p-wi-fi-camera.html

Hopefully this helps

@anfss

anfss commented Jun 11, 2018

I'm trying to change the work mode between the RTSP server and Snapshot based on an MQTT topic.
To do that, I do a stop action on the current mode and then start the other.
However, when I trigger a change from RTSP to Snapshot and, after a few seconds, back from Snapshot to RTSP, it fails on this last step. I think it's related to the TIME_WAIT state of the old RTSP connection. How can I solve this situation?

My steps until now were (maybe they can help someone else ;-) ):

  • create a work directory
mkdir /media/mmcblk0p2/data/ctrl_fang
cd /media/mmcblk0p2/data/ctrl_fang
  • remove 20-rtsp-server script
    • create a backup and remove the script
cp  ../etc/scripts/20-rtsp-server  .
rm  ../etc/scripts/20-rtsp-server

  • create a new script 20-ctrl-fang
    • create/edit file

vi 20-ctrl-fang

    • copy&paste the following lines:
#!/bin/sh
PIDFILE="/var/run/ctrl_fang.pid"

status()
{
  pid="$(cat "$PIDFILE" 2>/dev/null)"
  if [ "$pid" ]; then
    kill -0 "$pid" >/dev/null && echo "PID: $pid" || return 1
  fi
}

start_rtsp()
{
  LOG=/dev/null
  echo "Starting RTSP server..."
  snx_rtsp_server -W 1920 -H 1080 -Q 10 -b 4096 -a >$LOG 2>&1 &
  echo "$!" > "$PIDFILE"
}

start_snapshot()
{
  LOG=/dev/null
  echo "Starting Snapshot server..."
  snx_snapshot -m -q 50 -n 1 -W 1920 -H 1080 -N MyCam >$LOG 2>&1 &
  echo "$!" > "$PIDFILE"
}

start()
{
  start_rtsp
}

stop()
{
  pid="$(cat "$PIDFILE" 2>/dev/null)"
  if [ "$pid" ]; then
     kill $pid ||  rm "$PIDFILE"
  fi
}

if [ $# -eq 0 ]; then
  start
else
  case $1 in start|start_rtsp|start_snapshot|stop|status)
    $1
    ;;
  esac
fi
    • change permissions and copy the new script to the scripts directory
chmod 755 20-ctrl-fang
cp ./20-ctrl-fang ../etc/scripts/

  • create a new script and a command to listen on the MQTT topic
    • create/edit the command file

vi mqtt_ctrl_mode

    • copy&paste the following lines:

#!/bin/sh
mosquitto_sub -h <broker_IP> -u <username> -P <password> -t "ctrl_fang" | while read mqttcmd;
do
  echo "Stop service"
  /media/mmcblk0p2/data/etc/scripts/20-ctrl-fang stop
  echo "Start service with $mqttcmd"
  /media/mmcblk0p2/data/etc/scripts/20-ctrl-fang $mqttcmd
done

    • change permissions and copy to usr/bin:
chmod 755 mqtt_ctrl_mode
cp mqtt_ctrl_mode ../usr/bin

    • create/edit the new script

vi 30-mqtt-listening

    • copy&paste the following lines:
#!/bin/sh
PIDFILE="/var/run/mqtt-listening.pid"

status()
{
  pid="$(cat "$PIDFILE" 2>/dev/null)"
  if [ "$pid" ]; then
    kill -0 "$pid" >/dev/null && echo "PID: $pid" || return 1
  fi
}

start()
{
  LOG=/dev/null
  echo "Starting MQTT Listening..."
  mqtt_ctrl_mode >$LOG 2>&1 &
  echo "$!" > "$PIDFILE"
}

stop()
{
  pid="$(cat "$PIDFILE" 2>/dev/null)"
  if [ "$pid" ]; then
     kill $pid ||  rm "$PIDFILE"
  fi
}

if [ $# -eq 0 ]; then
  start
else
  case $1 in start|stop|status)
    $1
    ;;
  esac
fi

    • change permissions and copy the new script to the scripts directory
chmod 755 30-mqtt-listening
cp ./30-mqtt-listening ../etc/scripts/

  • reboot
