This repository has been archived by the owner on Jul 8, 2023. It is now read-only.

GPMF Emulation #1

Closed
Cleric-K opened this issue Aug 10, 2020 · 36 comments


@Cleric-K

Hi,
I just wanted to let you know that I did something similar for myself some time ago.
I wanted to use ReelSteady Go, so I had to fabricate the GPMF track in the mp4 container.

I was successfully able to do this by converting the blackbox gyro data into GPMF format, and RSTG was fooled into accepting it.

I was thinking about packaging it all up as a project and publishing it, but I didn't have the time and motivation. In my case I was doing it with a bunch of standalone Python scripts that require a somewhat low-level understanding of how things work.
The main reason I don't feel strongly motivated to make it public is that even if it were packaged in a nice wizard-like interface, there's still considerable manual work to be done, which I guess will put off most average users.

Probably the trickiest part is getting the synchronization between the video and the blackbox data right. Of course, this is done with Blackbox Explorer, but in my case the clocks of the video and the FC seem to run at slightly different rates. For example, if I sync the video at the beginning of the BB log, by the end it has already drifted. And vice versa: if I sync at the end, the beginning is incorrect. That's why I had to sync at the beginning, write down some numbers, then sync at the end and write them down again. Both sync points are then used in the script to interpolate. It works, of course, but all this is quite laborious for a normal user. The syncing in BB Explorer must also be done quite accurately, which requires some skill too.
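The two-point sync described here amounts to linearly interpolating the offset between the two measured points. A minimal sketch of the idea (function and variable names are hypothetical, not taken from the actual scripts):

```python
import numpy as np

def fc_to_video_time(t, time1, offset1, time2, offset2):
    """Map flight-controller timestamps to video time.

    (time1, offset1) and (time2, offset2) are the two sync points read
    off in Blackbox Explorer: at FC time time1 the video is offset by
    offset1, at time2 by offset2. Interpolating the offset linearly
    between them compensates for the clock drift.
    All values in microseconds; t may be a scalar or a numpy array.
    """
    t = np.asarray(t, dtype=np.float64)
    drift_rate = (offset2 - offset1) / (time2 - time1)
    offset = offset1 + drift_rate * (t - time1)
    return t + offset

# Example: 1 ms of drift accumulated over a 60 s log.
t = np.array([0.0, 30e6, 60e6])
video_t = fc_to_video_time(t, 0.0, 500_000.0, 60e6, 501_000.0)
```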

In the end, everything works, but it is a hassle.

If you are interested I can share with you my work and you can use it in any way you like. Honestly, I don't know if I'm ever going to work on this on my own to make it public.

Here's an example. It was shot with a Runcam 3, the gyro data was integrated, and the file was run through RSTG. The results are plausible overall. I suspect it might never be quite perfect, because if the camera is soft-mounted, the micro-vibrations can quite easily be out of phase with those of the FC. For this reason I doubt that propwash can be properly eliminated, but I guess in some cases it will work fine.
https://www.youtube.com/watch?v=CYlGe_PhRLg

Best~

@ElvinC
Owner

ElvinC commented Aug 10, 2020

That's amazing work. Thanks for sharing, I was wondering if anyone else had tried something similar before. Your clip shows that there is some merit to the method.

Another issue (I think) with tricking ReelSteady Go with fake GPMF data is the lack of camera calibration/lens undistortion presets for non-GoPro cameras. It seems to work decently well in your example, but I doubt it will work well for cameras with a completely different FOV/distortion. I also wanted to try to make something from scratch, which is why I started this project.

Hopefully I can figure something out with automatic gyro sync by comparing waveforms programmatically. Small vibrations will probably be hard or impossible to stabilize out, as you mentioned, but for now the goal for non-GoPro footage will be to smooth out the user's stick input, which should give good results on a low-vibration quad with an ND filter. The newer Runcams have a built-in gyro for doing in-camera stabilization. Maybe, just maybe, it'll be possible to log that data with a new firmware. It could also be fun to design a PCB for external logging like the virtualGimbal project, but we'll see.

Oh, and I would love to take a look at your code. The gyro interpolation code could be useful for what I need to implement.

@Cleric-K
Author

Here are the scripts: gopro.zip.
They are very chaotic because it was all experimentation, just trying to make it work.
To be honest, I haven't looked at them in months, and now even I have trouble figuring out some of the parts :D

Some of the scripts were actually standalone tests and have no relation to the overall project, but I have left everything in.
In general, gp.py was supposed to be the main script. It takes a few command-line args:
1/ the blackbox log exported as CSV from Blackbox Explorer
2/ the input mp4/mov file
Here we have two cases. If we sync using only one point:
3/ the offset as seen in Blackbox Explorer
If using two points for sync, first we sync at the beginning of the log:
3/ offset
4/ the time at the cursor
Then we sync again near the end of the log:
5/ offset
6/ the time at the cursor

If both offsets are equal, it means the video and FC clocks are ticking in sync and there's no
need for 2-point sync.
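Put together, a hypothetical invocation might look like this (the argument order is illustrative, reconstructed from the description above, and not verified against the script):

```sh
# one-point sync: blackbox CSV, video, single offset
python gp.py flight01.csv clip.mp4 0.52

# two-point sync: offset and cursor time at the start of the log,
# then offset and cursor time near its end
python gp.py flight01.csv clip.mp4 0.52 3.1 0.61 95.4
```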

The general logic is as follows:

  1. The first step is to correct the FPS. GoPro produces files with a strange time base like
     30000/1001 or 60000/1001. RSTG expects a num/denom like that, so we have to switch the time
     base of our file if needed. This does not modify the video data itself, only the numbers in
     the header which basically tell how long a single frame of video should last. Obviously this
     will put the video and audio progressively out of sync, but that is not a problem because at
     the end we can restore the original num/denom.
  2. Then we read and parse the blackbox CSV. We are only interested in the time (microseconds)
     and the gyro columns. The time column measures time from the moment the FC boots (not from
     arming/BB start), which is why we subtract the first time value from every time value in
     order to get zero-based microsecond time. It is important to measure the camera tilt of the
     HD camera very accurately, because the gyro data has to be rotated.
  3. Using the offsets provided as args, we map the time column from FC time to video time
     (bbox.map_time). If using single-point sync, we simply add the offset to every value of the
     FC time (btw, it's using numpy, so the t in map_time is actually a numpy.array). If using
     two points, it is a matter of a simple linear mapping: we know what the offsets should be at
     time1 and time2, so we make a linear function that interpolates the offset from time1 to
     time2 and adds the interpolated offset at every point.
  4. Now we have vectors with timestamps corrected to video time and the corresponding gyro data.
     The next step is to get the gyro data at every GPMF frame we'll generate. The timestamps of
     the blackbox gyro data, although now in the correct time base, have a different sample rate
     than the needed GPMF samples, so they can't be used directly. For this we use numpy.interp:
     we create a linspace for the timestamps of the GPMF gyro frames and sample our
     timestamps/gyro data at those times. As a result we have equally spaced samples of the gyro
     data at the correct GPMF frame rate.
  5. Then we construct the actual GPMF track and build the MP4 container.
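Step 4 above (resampling the remapped gyro onto a uniform GPMF grid) could look roughly like this. The 400 Hz rate and all array contents are assumptions for illustration, not values from the actual scripts:

```python
import numpy as np

# Toy stand-ins for the real data: blackbox timestamps already mapped
# to video time (microseconds) and the matching (N, 3) gyro samples.
video_t = np.array([0.0, 400_000.0, 1_000_000.0])
gyro = np.array([[0.0, 0.0, 0.0],
                 [0.4, 0.8, 1.2],
                 [1.0, 2.0, 3.0]])

GPMF_RATE = 400.0  # target samples per second (assumed nominal rate)

n = int((video_t[-1] - video_t[0]) / 1e6 * GPMF_RATE)
target_t = np.linspace(video_t[0], video_t[-1], n)

# np.interp works on one 1-D signal at a time, so resample each axis.
resampled = np.column_stack(
    [np.interp(target_t, video_t, gyro[:, axis]) for axis in range(3)]
)
# resampled now holds equally spaced gyro samples at the GPMF rate.
```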

Besides the GPMF track, there's some additional metadata (at the beginning of the mdat section, if I remember correctly). RSTG parses the video width/height/fps from the mp4 container, but it reads something in this metadata to know in what mode the video was shot: basically it cares whether Superview was used, and the model ID of the camera. For this part I simply ripped that meta part from real GoPro Hero Session 5 files and put it into the output container.

Since RSTG is written in Unity it is quite easy to dig inside it using dnSpy and UnityAssetsBundleExtractor. After the basic video data is read, the correct preset is chosen among these:
[screenshot: list of RSTG camera presets]

If the source video is 1920x1080, for example, the wide metadata should be put inside the file. For 4:3, linear. The super preset could only be useful when working with an older GoPro (like the Hero 4) that has Superview but does not record gyro (gyro recording starts with the Hero 5).

For example, for my Runcam 3 footage I used the H5S.1920.1080.wide.60000 preset. You are quite correct that RSTG has a database of the lens geometry (in the different modes) of different models:
[screenshot: RSTG lens geometry database]

So yes, the closer our camera's lens is to that of a GoPro, the better. For example, if I process my Runcam 3 footage with the superview preset, the results are, as expected, very distorted. But the wide preset looks more or less OK.

gp.py dumps the meta at the end as a meta2 file. I used to manually convert that to meta.js using the bin2array.py script. Then with viz.html it was possible to do a basic check that the transformed gyro data makes sense. The HTML parses the metadata, extracts the GPMF data, and draws a basic symbol over the video to indicate the gyros.

I have written my own routines for mp4 and GPMF. It turned out that they are quite clear formats, so using external libs was not really necessary, and furthermore I had to do it myself in order to understand their structures better.

The virtualGimbal project is interesting. What format does it expect the gyro data in? Wouldn't it be possible to convert our blackbox data to their format?

If I have missed something, please write back and I'll try to explain further.

Best~

@ElvinC
Owner

ElvinC commented Aug 10, 2020

This is absolutely awesome. I'll take a look :)

@attilafustos

Hello @Cleric-K, @ElvinC
@Cleric-K I was looking for a solution like this (Reelsteady). I think this is excellent work.
@ElvinC I saw your results video on youtube and it is great work. I will definitely check it out.

@Cleric-K
I was trying the gp.py script but I get an error (typo maybe?):
[screenshots of the error]

@Cleric-K
Author

Cleric-K commented Oct 31, 2020

Yes, that line is not needed. I'm not sure if the script works as a whole; I don't remember in what state I left the work. Everything was in active development (commenting, adding, removing code), so I'm not sure the script works properly.

For example, the sys.exit() on line 45 is probably something I left in while testing and probably has to be removed.

@attilafustos

> Yes, that line is not needed. I'm not sure if the script works as a whole; I don't remember in what state I left the work. Everything was in active development (commenting, adding, removing code), so I'm not sure the script works properly.
>
> For example, the sys.exit() on line 45 is probably something I left in while testing and probably has to be removed.

Today I had some success with your code.
I had to resize the video from 4K to 1080p for it to be accepted by RS GO.
I am not sure, though, how to set the offset, and the video is not well stabilized like this.

@attilafustos

@Cleric-K
Can you please tell me how to modify the code so that I can use 4K video?

@soukron
Contributor

soukron commented Dec 18, 2020

Can you please tell me how to modify the code so that I can use 4K video?

Unless I have misunderstood the explanation, you would probably need to edit the metadata used to identify the camera preset. This seems to be configured in the templates.py file (mdat_gopro_meta), but I don't know how yet.

@Cleric-K
Author

I can't look into this right now, but my guess is that, as @soukron says, you'll need to get a real GoPro 4K movie file and take the meta from there.
Keep in mind that the information about the video mode (superview, wide, ...) is stored in this GoPro-specific metadata. I don't know what its format is. When I was working on the scripts I just took the meta from an existing Hero 5 Session mp4 shot in wide mode (not Superview). So I guess the same must be done for 4K.

There's one additional caveat. Normally the mp4 format uses 32-bit numbers for offsets, which means the file size cannot exceed ~4 GB (2^32 bytes). The format has been extended, of course, and it does support a special 64-bit kind of offset. Unfortunately, the scripts do not support this. So if your 4K clip exceeds 4 GB, it will most certainly not work.
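As a sanity check before building the container, one could test up front whether a clip will hit this limit (in the mp4 spec the 64-bit variant of the `stco` chunk-offset atom is `co64`; the function name here is hypothetical):

```python
import os

def fits_32bit_offsets(path):
    """True if every chunk offset in a file of this size can still be
    stored in a 32-bit stco atom. Conservative check: the file size is
    an upper bound for the largest chunk offset."""
    return os.path.getsize(path) < 2**32
```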

@attilafustos

@Cleric-K
I have some GP6 videos I captured with my naked GoPro in 4K wide. I will take a look and try to figure it out.
Thanks

@attilafustos

I extracted the udta binary information from the mp4, but I cannot figure out how to convert it to the escaped format used in the template file.

@Cleric-K
Author

There are different ways. One is to use Python. If you have the data in a file named meta:

```python
with open('meta', 'rb') as f:
    print(repr(f.read()))
```

@attilafustos

@Cleric-K
Thank you, I think it works. I used the 4K wide meta from a Session 5.
RS GO accepted the file for processing and I added sync points.
The only thing is that the output video seems to be in a much wider format than 16:9.
I will upload the result to the tubes when done.

@Cleric-K
Author

Do you get the same overly wide format when processing an original GP file?

@attilafustos

@Cleric-K
Here it is: https://www.youtube.com/watch?v=psUP377Jzx0&feature=youtu.be
YT still working on 4K version though.

@Cleric-K
Author

Great job! The side-by-side video actually shows the results to be pretty good, IMO.

@attilafustos

@Cleric-K
Well, it's your work, I am only testing. You did a great job indeed.
I added some code to detect the video resolution, so I can include one udta or another.
The only video I made with blackbox is too dark, so I cannot tell how good the overall result is.
In my case I used UART control to start/stop the camera from the arm switch.
This way I get the same length of gyro and video, and it is synchronised from the start (I used a 0:0 offset).
I really hope Runcam will listen and add a gyro to the new hybrid, because blackbox is problematic: some FCs have no flash, others have limited memory. You also need a UART, and some FCs lack UARTs.
I am not sure how much I will use this because I also have my GP Lite.
But I had this goal of succeeding with split-camera stabilisation, because it has great potential if done right.

@attilafustos

@Cleric-K
Do you have a github repository for the project?
I was thinking to add my own branch to it with my contributions.

@Cleric-K
Author

No, I don't. If you want, you can make your own project. If not, I can make a repo in my account and give you developer access.

@attilafustos

@Cleric-K
I think it would be best if you make a repo and I push there. I don't want to get credit for your work :)
master - would be your original code
myownbranch - modified code

@Cleric-K
Author

I've created https://github.com/Cleric-K/BlackboxToGPMF
Please check out the initial commit and rebase your project on it, so it looks like it builds on top of it (simply paste your scripts over and make a new commit). And you can commit directly to master, however you like.

@Fifi1717

@Cleric-K Hi, I wanted to ask: can you send me your email? Thanks.

@attilafustos

@Cleric-K
I pushed a new branch "gui" to the repository.
I also made a Facebook group for this: https://www.facebook.com/groups/fpvtools

@Cleric-K
Author

Great! Let's hope it will see development.

@Fifi1717 just write here

@ElvinC
Owner

ElvinC commented Dec 22, 2020

Hey @attilafustos, nice work on the UI. Taking some inspiration from it, I added a barebones stabilization UI to gyroflow as well to make testing easier. At the moment it only works (with some bugs) with GoPro files with internal metadata. I only did a few tests using blackbox data a while ago, and I had some difficulty with automatic sync (maybe because of the higher sample rate and slight orientation offsets). Would you be willing to send me a copy of the modified Runcam file to try out and to help with generating a lens correction profile for it? If ReelSteady can be tricked into thinking it's a GoPro file, then maybe gyroflow can be tricked as well.

@attilafustos

I need a way to read the video width/height without using an external library.
Any idea how to read this info with the mp4 class?

@Cleric-K
Author

At the moment I can't look into this, but you can use https://sourceforge.net/projects/mp4-inspector/
to see if there's width/height info somewhere in the metadata. If you locate it, you can use the mp4 class to find the atom.
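For what it's worth, the track dimensions are also stored in each trak's tkhd atom, as 16.16 fixed-point values in its last 8 bytes, so they can be read without any external library. A self-contained sketch (not using the thread's mp4 class; it takes the first trak it finds, assumes a version-0 tkhd, and ignores 64-bit largesize boxes):

```python
import struct

def find_box(data, path, start=0, end=None):
    """Depth-first search for a nested box, e.g. [b'moov', b'trak', b'tkhd'].
    Returns (payload_start, payload_end) of the first match, else None."""
    if end is None:
        end = len(data)
    pos = start
    while pos + 8 <= end:
        size, boxtype = struct.unpack('>I4s', data[pos:pos + 8])
        if size == 0:            # box extends to the end of the file
            size = end - pos
        if boxtype == path[0]:
            if len(path) == 1:
                return pos + 8, pos + size
            hit = find_box(data, path[1:], pos + 8, pos + size)
            if hit:
                return hit
        pos += size
    return None

def video_dimensions(data):
    span = find_box(data, [b'moov', b'trak', b'tkhd'])
    if span is None:
        return None
    payload = data[span[0]:span[1]]
    # width and height are the final 8 bytes, as 16.16 fixed point
    width, height = struct.unpack('>II', payload[-8:])
    return width >> 16, height >> 16
```

Note this returns the first trak's dimensions, which is usually but not always the video track; a robust version would check the track's handler type.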

@attilafustos

Cannot figure out where the mdat meta ends.
[screenshot]

@Cleric-K
Author

Yeah, you'll have to find that in a roundabout way. If we knew the exact structure of the GoPro meta we would know, but for now we have to use tricks.

Open the file in Mp4 Inspector, right-click on the file in the tree and expand all.
[screenshot: Mp4 Inspector atom tree]
Look for the stco atoms. These contain the offsets of the video/audio/other samples, relative to the beginning of the file. Go through all the stco atoms you see and find the lowest number. That offset holds a data sample, so it is definitely not GoPro meta. Now use a hex editor and locate that offset. The data between GPRO (after mdat) and that offset, exclusive, is the GoPro meta.
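The "lowest stco offset" trick can also be automated. A sketch using the same box-walking idea (standalone, not using the thread's mp4 class; co64 atoms and 64-bit largesize boxes are ignored):

```python
import struct

def min_stco_offset(data):
    """Scan every stco atom in an mp4 and return the smallest chunk
    offset. Everything between the GPRO marker (after mdat) and this
    offset is the opaque GoPro metadata blob."""
    offsets = []

    def walk(start, end):
        # Only these container boxes hold an stco somewhere inside.
        containers = {b'moov', b'trak', b'mdia', b'minf', b'stbl'}
        pos = start
        while pos + 8 <= end:
            size, boxtype = struct.unpack('>I4s', data[pos:pos + 8])
            if size == 0:        # box extends to the end of the file
                size = end - pos
            if boxtype == b'stco':
                # payload: version/flags (4) + entry_count (4) + entries
                count = struct.unpack('>I', data[pos + 12:pos + 16])[0]
                offsets.extend(
                    struct.unpack(f'>{count}I',
                                  data[pos + 16:pos + 16 + 4 * count])
                )
            elif boxtype in containers:
                walk(pos + 8, pos + size)
            pos += size

    walk(0, len(data))
    return min(offsets) if offsets else None
```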

@attilafustos

Thank you

@attilafustos

@Cleric-K
I have a new idea: try to fix Session 5 footage to work with RS GO, this time without blackbox.
I want to read the original GPMF gyro from the mp4, lowpass-filter it, then write the filtered GPMF back to the mp4.
Do you think it is possible? Can your mp4 class read the original GPMF?

@ElvinC
Owner

ElvinC commented Dec 30, 2020

Jaromeyer (who worked on the blackbox2gpmf project) has done something similar with rsgopatcher, where he modified the acceleration data from the original file for use with horizon lock: https://github.com/jaromeyer/rsgopatcher

@attilafustos

I got to the point where I read the original GPMF and write it back to the resulting mp4.
It works fine... but now I have to 'decode' the GPMF, do the filtering, and 'encode' it back with the filtered values.
I guess that's enough coding for today.
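The filtering step in the middle could be as simple as an exponential moving average per axis. A sketch (note that a causal IIR like this adds phase lag, which would shift the gyro relative to the video; a zero-phase filter such as scipy.signal.filtfilt may be a better fit for stabilization):

```python
import numpy as np

def lowpass(gyro, alpha=0.1):
    """Single-pole low-pass filter over an (N, 3) gyro array.
    alpha in (0, 1]: smaller values smooth more heavily."""
    out = np.empty_like(gyro, dtype=np.float64)
    out[0] = gyro[0]
    for i in range(1, len(gyro)):
        out[i] = alpha * gyro[i] + (1.0 - alpha) * out[i - 1]
    return out
```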

@attilafustos

@Cleric-K
One more question, which I get asked a lot in the groups: how do you use the offset1, offset2, time1, time2 parameters?
In my demos I only used the time1 parameter with 0:0, because the gyro length was the same as the video length.

@nivim
Contributor

nivim commented Jan 1, 2021

@Cleric-K +1 on @attilafustos's request. Any chance of more README explanation of the process? (Or a visual solution for Mac.) I really like the concept and want to try it with a Vista DVR video.

@Cleric-K
Author

Cleric-K commented Jan 2, 2021

I have created a Discussions section on the project page and propose that we move the talk there.
Here's the answer to the offset/time params: Cleric-K/BlackboxToGPMF#1 (comment)
