
I am doing some image operations on a NumPy array via OpenCV. The resulting images are currently written out as JPEG files and then fed into FFmpeg to make a video. However, baking each frame out to a file is not very efficient, so I would like to stream the frames directly into FFmpeg. In theory, it would look something like this:

from subprocess import PIPE, Popen

p = Popen(['/usr/local/bin/ffmpeg', '-s', '1920x1080',
           '-pix_fmt', 'yuvj420p',
           '-y',
           '-f', 'image2pipe',
           '-vcodec', 'mjpeg',
           '-r', str(self.fps),
           '-i', '-',
           '-r', str(self.fps),
           '-f', 'mp4',
           '-vcodec', 'libx264',
           '-preset', 'fast',
           # '-crf', '26',
           'output/{}.mp4'.format(self.animation_name)], stdin=PIPE)

image_resize = cv.resize(self.original_image, (0, 0), fx=zoom, fy=zoom)
M = np.float32([[1, 0, x_total], [0, 1, y_total]])
image_offset = cv.warpAffine(image_resize, M, (self.original_image_width, self.original_image_height))
image = image_offset[0:self.output_raster_height, 0:self.output_raster_width].copy()
cv.imwrite(p.stdin, image)  # this doesn't actually work, but that's the idea...

I've been able to achieve this with Pillow using this setup:

p = Popen(['/usr/local/bin/ffmpeg', '-s', '1920x1080',
           '-pix_fmt', 'yuvj420p',
           '-y',
           '-f', 'image2pipe',
           '-vcodec', 'mjpeg',
           '-r', str(self.fps),
           '-i', '-',
           '-r', str(self.fps),
           '-f', 'mp4',
           '-vcodec', 'libx264',
           '-preset', 'fast',
           # '-crf', '26',
           'output/{}.mp4'.format(self.animation_name)], stdin=PIPE)

image_resize = self.original_image.resize((resize_width, resize_height), resample=PIL.Image.BICUBIC)
image_offset = ImageChops.offset(image_resize, xoffset=int(x_total), yoffset=int(y_total))
image = image_offset.crop((0, 0, self.output_raster_width, self.output_raster_height))

image.save(p.stdin, 'JPEG')

So, my question is:

How would I write an OpenCV JPEG buffer to the p.stdin object, the way it's done in the Pillow version?

M Leonard

  • I don't see any opencv code. – zindarod Nov 05 '17 at 20:40
  • @zindarod fixed :-) – M Leonard Nov 05 '17 at 20:44
  • You can convert the opencv image to PIL image. https://stackoverflow.com/questions/10965417/how-to-convert-numpy-array-to-pil-image-applying-matplotlib-colormap – zindarod Nov 05 '17 at 21:01
  • @zindarod This almost worked, but the image turned out blue-ish. Did it drop a color channel? Do I have to specify the size of the array? – M Leonard Nov 05 '17 at 21:10
  • Opencv by default reads images as `bgr`, while PIL reads as `rgb`. If you're reading images with opencv then before conversion to PIL, do: `cv_img = cv2.cvtColor(cv_img,cv2.COLOR_BGR2RGB)`. – zindarod Nov 05 '17 at 21:35

0 Answers