-
Hey folks, suppose I want to display some video in the background of a sketch... what do you suggest as a replacement for the Processing Foundation's Video library?
-
First, why replace the Processing Foundation's Video library? It works just fine with py5. You'll need to add the video jar to your classpath, which can be done by creating a "jars" subdirectory and putting it in there. Note that py5 also supports a user-defined location for those jars.

If you don't want to use Processing's Video library, you can also use OpenCV. Everything you know how to do with webcams would be applicable here.
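For what it's worth, here is a minimal sketch of the "jars" setup described above; the file and folder names are just assumptions for illustration:

```python
# Assumed project layout (names are illustrative only):
#
#   my_sketch/
#   |-- jars/          <- py5 adds any jar files found here to the Java classpath
#   |     video.jar       (plus the other jars that ship with the Video library)
#   |-- sketch.py
#
# With the jars on the classpath, the library's Java classes can be imported
# directly in sketch.py (import py5 first so the JVM is started):
import py5
from processing.video import Movie
```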
-
After downloading and installing the Processing Foundation's Video library from the Processing 3.5.4 IDE, I copied the contents of the library folder into the "jars" subdirectory. The Video library for Processing 3.5.4 (based on GStreamer 1.16.2) kind of worked, but froze after some time. The newest Video library for Processing 4.x is a bit different and I couldn't make it work at all.

```python
import py5
from processing.video import *


def setup():
    global movie
    py5.size(560, 406)
    py5.background(0)
    # Load and play the video in a loop
    this = py5.get_current_sketch()
    movie = Movie(this, 'launch2.mp4')
    movie.loop()


def draw():
    # Read a new frame when one is available, then draw the current frame
    if movie.available():
        movie.read()
    py5.image(movie, 0, 0, py5.width, py5.height)


py5.run_sketch()
```

Defining a […]

Using the Video library for Processing 4.2 I got this:
-
Maybe we should avoid Processing Video and use OpenCV instead... this works, adapted from my webcam capture code, with a few tweaks:

```python
import py5
import cv2
import numpy as np
from py5 import create_image_from_numpy

movie = cv2.VideoCapture('launch2.mp4')
movie_width = int(movie.get(cv2.CAP_PROP_FRAME_WIDTH))
movie_height = int(movie.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = movie.get(cv2.CAP_PROP_FPS)
py5_img = None


def setup():
    py5.size(movie_width, movie_height)
    py5.frame_rate(fps)


def draw():
    global py5_img
    success, frame = movie.read()  # frame is a numpy array
    if success:
        # Optional edge-detection overlay, left commented out:
        # gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # edges = cv2.Canny(gray, 100, 80)
        # edges_rgb_npa = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
        # blended_rgb_npa = cv2.addWeighted(frame, 0.6, edges_rgb_npa, 0.4, 0)
        # py5_img = create_image_from_numpy(blended_rgb_npa, 'RGB', dst=py5_img)
        py5_img = create_image_from_numpy(frame, 'RGB', dst=py5_img)
        # display the frame
        py5.image(py5_img, 0, 0)
    else:
        # If it can't read a frame, start the video again from the beginning
        movie.set(cv2.CAP_PROP_POS_FRAMES, 0)
    if py5.frame_count % 30 == 0:
        py5.window_title(f'FR: {py5.get_frame_rate():.1f}')


def exiting():
    print('over and out')
    movie.release()


py5.run_sketch()
```

More ideas at: https://vuamitom.github.io/2019/12/13/fast-iterate-through-video-frames.html
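Along the lines of that article, here is a hedged variation on the sketch above (not from this thread): if decoding every frame with read() is too slow, OpenCV's grab()/retrieve() pair lets you advance the stream and only decode the frames you actually draw. The file name and skip factor are assumptions for illustration:

```python
# Hypothetical frame-skipping variant: grab() advances the stream more cheaply
# than read(), and retrieve() decodes only the frame we intend to draw.
import py5
import cv2
from py5 import create_image_from_numpy

movie = cv2.VideoCapture('launch2.mp4')  # assumed file name
movie_width = int(movie.get(cv2.CAP_PROP_FRAME_WIDTH))
movie_height = int(movie.get(cv2.CAP_PROP_FRAME_HEIGHT))
SKIP = 2  # draw every 2nd frame (assumed skip factor)
py5_img = None


def setup():
    py5.size(movie_width, movie_height)


def draw():
    global py5_img
    # advance SKIP frames, decoding only the last one
    ok = False
    for _ in range(SKIP):
        ok = movie.grab()
    if not ok:
        movie.set(cv2.CAP_PROP_POS_FRAMES, 0)  # loop back to the start
        return
    ok, frame = movie.retrieve()
    if ok:
        py5_img = create_image_from_numpy(frame, 'RGB', dst=py5_img)
        py5.image(py5_img, 0, 0)


def exiting():
    movie.release()


py5.run_sketch()
```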