
Different number of output event frames than original input frames #57

Open
yuweics opened this issue Aug 21, 2023 · 12 comments

yuweics commented Aug 21, 2023

How should we configure the parameters so that the number of output event frames equals the number of original input frames? For example, if we set "input_frame_rate" to 10 and "dvs_exposure" to 0.1, the output always has two fewer frames than the input.

mohsij commented Aug 23, 2023

Would also like this answered. Thanks

tobidelbruck (Collaborator) commented Aug 23, 2023 via email

mohsij commented Aug 23, 2023

@tobidelbruck That makes sense to me if the events are generated from the difference between consecutive frames: generating log-intensity images on pairs of frames would produce two fewer frames by design. I suppose that to get the same number of frames as the input, one could use --hdr and supply our own log-intensity frames?
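
For intuition, here is a minimal sketch (not v2e's actual emulator code) of why pairwise differencing of consecutive frames yields fewer outputs than inputs; v2e's exposure binning of the resulting event stream may trim the count further:

```python
import numpy as np

# Illustration only (NOT v2e's pipeline): pairwise log-intensity
# differencing over N frames yields N-1 difference "frames" by construction.
rng = np.random.default_rng(0)
frames = [rng.random((480, 720)) for _ in range(10)]  # 10 synthetic input frames

eps = 1e-3  # avoid log(0) on fully dark pixels
log_frames = [np.log(f + eps) for f in frames]

# One difference per pair of consecutive frames:
diffs = [log_frames[i + 1] - log_frames[i] for i in range(len(log_frames) - 1)]

print(len(frames), "inputs ->", len(diffs), "difference frames")  # 10 -> 9
```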

tobidelbruck (Collaborator) commented Aug 23, 2023 via email

yuweics (Author) commented Aug 23, 2023

Yes, through debugging I found that the first and last frames lack corresponding output events. Why is there no corresponding event frame for those two frames?

jinzi98 commented Nov 6, 2023

> Yes, through debugging I found that the first and last frames lack corresponding output events. Why is there no corresponding event frame for those two frames?

Hi, I guess you used frames to generate events; that's what I want to do too. I read the v2e tutorials, but they only seem to show how to generate events from videos. If I want to generate events from frames, how should I do that?

tobidelbruck (Collaborator) commented Nov 6, 2023 via email

jinzi98 commented Nov 22, 2023

> I recall we had an option to read videos from a folder of frames. Please check if that still works correctly.
>
> Input file handling:
>   -i INPUT, --input INPUT
>     Input video file or an image folder; leave empty for a file chooser dialog. If the input is a folder, the folder should contain an ordered list of image files. In addition, the user has to set the frame rate manually.

I tried to use v2e_tutorial.ipynb on Colaboratory to get events from a list of frames. I uploaded a zip named "motion.zip" containing 20 consecutive frames and changed "video_path" to "/content/motion.zip". Running "final_v2e_command" failed. The error log is below:

"INFO:v2e:torch device is cuda
INFO:v2e:No module named 'gooey': Gooey GUI builder not available, will use command line arguments.
Install with 'pip install Gooey if you want a no-arg GUI to invoke v2e'. See README
INFO:v2e:name 'Gooey' is not defined: Gooey package GUI not available, using command line arguments.
You can try to install with "pip install Gooey"
INFO:v2ecore.v2e_utils:using output folder /content/v2e-output
INFO:v2e:output_in_place==False so made output_folder=/content/v2e-output
INFO:v2ecore.v2e_args:
*** arguments:
auto_timestamp_resolution: False
avi_frame_rate: 30
batch_size: 8
crop: None
cs_lambda_pixels: None
cs_tau_p_ms: None
cutoff_hz: 30.0
ddd_output: False
disable_slomo: True
dvs1024: False
dvs128: False
dvs240: False
dvs346: True
dvs640: False
dvs_aedat2: None
dvs_aedat4: None
dvs_emulator_seed: 0
dvs_exposure: ['duration', '.033']
dvs_h5: events.h5
dvs_params: None
dvs_text: None
dvs_vid: dvs-video.avi
dvs_vid_full_scale: 2
hdr: False
input: /content/motion_frames.zip
input_frame_rate: 30.0
input_slowmotion_factor: 1.0
label_signal_noise: False
leak_jitter_fraction: 0.1
leak_rate_hz: 0.1
neg_thres: 0.2
no_preview: True
noise_rate_cov_decades: 0.1
output_folder: /content/v2e-output
output_height: None
output_in_place: False
output_width: None
overwrite: True
photoreceptor_noise: False
pos_thres: 0.2
record_single_pixel_states: None
refractory_period: 0.0005
save_dvs_model_state: False
scidvs: False
shot_noise_rate_hz: 5.0
show_dvs_model_state: None
sigma_thres: 0.03
skip_video_output: False
slomo_model: /usr/local/lib/python3.10/dist-packages/input/SuperSloMo39.ckpt
slomo_stats_plot: False
start_time: None
stop_time: None
synthetic_input: None
timestamp_resolution: None
unique_output_folder: False
vid_orig: video_orig.avi
vid_slomo: video_slomo.avi

WARNING:v2ecore.v2e_args:
**** extra other arguments (please check if you are misspelling intended arguments):
--davis_output

INFO:v2ecore.v2e_args:DVS frame expsosure mode ExposureMode.DURATION: frame rate 30.3030303030303
INFO:v2e:opening video input file /content/motion_frames.zip
[ERROR:[email protected]] global cap.cpp:164 open VIDEOIO(CV_IMAGES): raised OpenCV exception:

OpenCV(4.8.0) /io/opencv/modules/videoio/src/cap_images.cpp:253: error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): /content/motion_frames.zip in function 'icvExtractPattern'

INFO:v2e:Input video frame rate 0.0Hz is overridden by command line argument --input_frame_rate=30.0
WARNING:v2e:num frames is less than 2, probably cannot be determined from cv2.CAP_PROP_FRAME_COUNT
WARNING:v2e:slomo interpolation disabled by command line option; output DVS timestamps will have source frame interval resolution
INFO:v2e:
events will have timestamp resolution 33.33ms,
WARNING:v2e:DVS video frame rate=30.3030303030303Hz is larger than the effective DVS frame rate of 30.0Hz; DVS video will have blank frames
INFO:v2e:Source video /content/motion_frames.zip has total 0 frames with total duration -33.33ms.
Source video is 30fps with slowmotion_factor 1 (frame interval 33.33ms),
Will convert 0 frames 0 to -1
(From 0.0s to -0.03333333333333333s, duration -0.03333333333333333s)
INFO:v2e:v2e DVS video will have constant-duration frames
at 30.30fps (accumulation time 33ms),
DVS video will have -2 frames with duration -66ms and playback duration -66.67ms

INFO:v2ecore.emulator:ON/OFF log_e temporal contrast thresholds: 0.2 / 0.2 +/- 0.03
INFO:v2ecore.emulator:opening event output dataset file /content/v2e-output/events.h5
WARNING:v2ecore.emulator:cannot get screen size for window placement: No enumerators available
INFO:v2e:processing frames 0 to -1 from video input
INFO:v2e:Input video /content/motion_frames.zip has W=0 x H=0 frames each with 3 channels
INFO:v2e:*** Stage 1/3: Resizing 0 input frames to output size (with possible RGB to luma conversion)
rgb2luma: 0fr [00:00, ?fr/s]
INFO:v2e:*** Stage 2/3:turning npy frame files to png from /tmp/tmpfn8yol40
npy2png: 0fr [00:00, ?fr/s]
Traceback (most recent call last):
File "/usr/local/bin/v2e", line 8, in
sys.exit(main())
File "/usr/local/bin/v2e.py", line 795, in main
np.max(interpTimes)-np.min(interpTimes))
File "/usr/local/lib/python3.10/dist-packages/numpy/core/fromnumeric.py", line 2810, in max
return _wrapreduction(a, np.maximum, 'max', axis, None, out,
File "/usr/local/lib/python3.10/dist-packages/numpy/core/fromnumeric.py", line 88, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation maximum which has no identity"

duguyue100 (Collaborator) commented:
@jinzi98 I see the input is a zip file; v2e doesn't read zip files. You need to unzip it first, or assemble your frames into a video yourself.
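
For reference, a hypothetical Colab invocation after unzipping might look like the following; the paths and values are placeholders, and the flags are the ones visible in the argument dump above:

```
unzip /content/motion.zip -d /content/motion_frames   # extract the images first
v2e -i /content/motion_frames \
    --input_frame_rate 30 \
    --dvs_exposure duration .033 \
    --disable_slomo \
    --dvs346 \
    --dvs_h5 events.h5 \
    --output_folder /content/v2e-output \
    --overwrite --no_preview
```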

jinzi98 commented Nov 22, 2023

Thanks a lot! That does help! I unzipped the frames and set "video_path" to the frames folder. It ran successfully, and I got an events.h5 file. The .h5 file's shape is (1760862, 4), and I guess it means 1760862 events with h, w, timestamp, and luminance change.

Here I need help. I will use the event information in my deep-learning task, so I want to convert the events into a format I can use, like frames, with the same resolution and number as the frames I put in. For example, if I put in 20 frames, each with shape (720, 480, 3), then after v2e I want to get 19 event frames of shape (720, 480, x). What do you think I should do? Hope for your reply.

xunhuann commented:
> Thanks a lot! That does help! I unzipped the frames and set "video_path" to the frames folder. It ran successfully, and I got an events.h5 file. The .h5 file's shape is (1760862, 4), and I guess it means 1760862 events with h, w, timestamp, and luminance change.
>
> Here I need help. I will use the event information in my deep-learning task, so I want to convert the events into a format I can use, like frames, with the same resolution and number as the frames I put in. For example, if I put in 20 frames, each with shape (720, 480, 3), then after v2e I want to get 19 event frames of shape (720, 480, x). What do you think I should do? Hope for your reply.

Please, did you solve the problem of the difference in the number of input and output frames?

jinzi98 commented Dec 1, 2023

> Thanks a lot! That does help! I unzipped the frames and set "video_path" to the frames folder. It ran successfully, and I got an events.h5 file. The .h5 file's shape is (1760862, 4), and I guess it means 1760862 events with h, w, timestamp, and luminance change.
>
> Here I need help. I will use the event information in my deep-learning task, so I want to convert the events into a format I can use, like frames, with the same resolution and number as the frames I put in. For example, if I put in 20 frames, each with shape (720, 480, 3), then after v2e I want to get 19 event frames of shape (720, 480, x). What do you think I should do? Hope for your reply.
>
> Please, did you solve the problem of the difference in the number of input and output frames?

That's not a problem. You can think of each output frame as sitting between two consecutive input frames, so there is one fewer output than there are inputs.
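
For readers with the same conversion question, below is a minimal sketch (not part of v2e) that bins the (N, 4) event array into one event frame per interval between input frames. The dataset name "events", the (timestamp, x, y, polarity) column order, and the resolution are assumptions to verify against your own file:

```python
import h5py
import numpy as np

# Sketch: bin v2e events into one event frame per inter-frame interval,
# so N input frames -> N-1 event frames. Assumptions (verify against your
# own file): the h5 dataset is named "events", the columns are
# (timestamp_seconds, x, y, polarity), and H, W match the v2e output
# resolution (e.g. 260x346 if you ran with --dvs346).
with h5py.File("/content/v2e-output/events.h5", "r") as f:
    events = f["events"][:]         # shape: (num_events, 4)

t = events[:, 0]
x = events[:, 1].astype(int)
y = events[:, 2].astype(int)
p = events[:, 3]

num_inputs = 20                     # frames fed to v2e
fps = 30.0                          # the --input_frame_rate you used
H, W = 260, 346                     # must match the event coordinate range

# Bin edges at the input-frame timestamps: N inputs -> N-1 intervals.
edges = np.arange(num_inputs) / fps
bins = np.clip(np.digitize(t, edges) - 1, 0, num_inputs - 2)

# Two channels per pixel: OFF and ON event counts.
event_frames = np.zeros((num_inputs - 1, H, W, 2), dtype=np.int32)
np.add.at(event_frames, (bins, y, x, (p > 0).astype(int)), 1)

print(event_frames.shape)           # (19, 260, 346, 2)
```

Counting ON and OFF events in separate channels keeps the polarity information; other representations (voxel grids, time surfaces) follow the same binning idea.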
