
I am using GStreamer to capture video from a USB webcam (Logitech C920) in H.264, and I want to analyze the H.264 frames before potentially decoding them or streaming them to the network.

Based on different sources on the internet, I put together a Python 2.7 script that gets the frames into Python, essentially implementing this schematic gst-launch command:

gst-launch-1.0 v4l2src ! video/x-h264 ! h264parse ! appsink

However, I am stuck trying to interpret the received buffers. I have already spent quite some time trying to understand how Python GStreamer works in order to get the meta API attached to the buffer, but in vain so far. If I understand correctly, once I somehow get the meta API attached to the buffer, I will get a structure that allows me to access the different elements and the information about the frame encoding. How can I do this (without writing my own decoder for the frames)?
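For reference, the raw (still encoded) bytes can be copied out of a `Gst.Buffer` with the standard `map()`/`unmap()` calls; below is a minimal Python 3 sketch (the `looks_like_annexb` helper is my own name, not part of GStreamer) showing how the mapped bytes relate to the `stream-format=byte-stream` caps reported by h264parse:

```python
# Inside the "new-sample" callback, the raw (still encoded) H.264 bytes
# can be copied out of the Gst.Buffer like this:
#
#   ok, info = buf.map(Gst.MapFlags.READ)  # ok: bool, info: Gst.MapInfo
#   data = bytes(info.data)                # the Annex B byte stream
#   buf.unmap(info)
#
# With stream-format=byte-stream (as in the caps printed by the script),
# each buffer should begin with an Annex B start code, which can be checked:

def looks_like_annexb(data):
    """Rough sanity check that `data` is an Annex B H.264 byte stream."""
    return data[:3] == b"\x00\x00\x01" or data[:4] == b"\x00\x00\x00\x01"

print(looks_like_annexb(b"\x00\x00\x00\x01\x67\x64"))  # True (4-byte start code)
```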

Below is my current script, with some sample outputs:

from __future__ import absolute_import, division, print_function
import sys, os, pdb
from datetime import datetime
import gi

gi.require_version("Gst","1.0")
from gi.repository import Gst
Gst.init(None)

def appsink_new_buffer(sink, data):
    sample = sink.emit("pull-sample")
    buf = sample.get_buffer()
    caps = sample.get_caps()

    print("\nGot new buffer: {}  Sample Info: {}\n".format(datetime.now(),sample.get_info()))
    print("Buffer size: {} ".format(buf.get_size()))
    print("Buffer n_memory: {} Presentation TS (PTS): {:.3f} s Decoding DTS: {:.3f} s Duration: {:.1f} ms".format(
        buf.n_memory(), buf.pts/1e9, buf.dts/1e9, buf.duration/1e6))
    st = caps.get_structure(0)
    field_names = [st.nth_field_name(i) for i in range(st.n_fields())]
    print("Caps n_fields: {} name: {}, format: {}, height: {}, width: {}".format(
        st.n_fields(), st.get_name(), st.get_value("format"), st.get_value("height"), st.get_value("width")))
    print("     all fields: {}".format(" ".join(field_names)))
    for fname in field_names:
        if fname not in ['pixel-aspect-ratio','framerate']: # Gst.Fraction values raise an error in get_value
            print("  {:20}: ".format(fname), st.get_value(fname))

    #
    # somehow, here one needs to get the Meta API to understand the buffer content and to do further processing
    # of the encoded h264 frames.
    # Q: does one buffer after the h264parse represent exactly one frame?
    #

    return Gst.FlowReturn.OK

def appsink_webcam_h264():
    # adapted from https://gist.github.com/willpatera/7984486

    source = Gst.ElementFactory.make("v4l2src", "source")
    source.set_property("device", "/dev/video2")
    caps = Gst.caps_from_string("video/x-h264, width=640,height=480,framerate=10/1")
    capsfilter = Gst.ElementFactory.make("capsfilter", None)
    capsfilter.set_property("caps", caps)
    parse = Gst.ElementFactory.make("h264parse","h264parse")
    sink = Gst.ElementFactory.make("appsink", "sink")
    pipeline_elements = [source, capsfilter, parse, sink]

    sink.set_property("max-buffers", 20)     # prevent the appsink from consuming huge amounts of memory
    sink.set_property('emit-signals', True)  # tell the sink to emit the "new-sample" signal
    sink.set_property('sync', False)         # no sync, to make processing as fast as possible
    sink.connect("new-sample", appsink_new_buffer, sink)

    # Create an empty pipeline & add/link elements
    pipeline = Gst.Pipeline.new("test-pipeline")
    for elem in pipeline_elements:
        pipeline.add(elem)
    for i in range(len(pipeline_elements[:-1])):
        if not Gst.Element.link(pipeline_elements[i], pipeline_elements[i+1]):
            raise Exception("Elements {} and {} could not be linked.".format(
                pipeline_elements[i], pipeline_elements[i+1]))

    ret = pipeline.set_state(Gst.State.PLAYING)

    # Wait until error or EOS
    bus = pipeline.get_bus()

    # Parse message
    while True:
        message = bus.timed_pop_filtered(10 * Gst.MSECOND, Gst.MessageType.ANY)  # timeout is in nanoseconds
        if message:
            if message.type == Gst.MessageType.ERROR:
                err, debug = message.parse_error()
                print("Error received from element %s: %s" % (
                    message.src.get_name(), err))
                print("Debugging information: %s" % debug)
                break
            elif message.type == Gst.MessageType.EOS:
                print("End-Of-Stream reached.")
                break
            elif message.type == Gst.MessageType.STATE_CHANGED:
                if isinstance(message.src, Gst.Pipeline):
                    old_state, new_state, pending_state = message.parse_state_changed()
                    print("Pipeline state changed from %s to %s." %
                           (old_state.value_nick, new_state.value_nick))
            else:
                print("Unexpected message received: ", message, message.type)
    pipeline.set_state(Gst.State.NULL)



if __name__ == '__main__':
    appsink_webcam_h264()

and below is some sample output of this script:

...
Got new buffer: 2016-01-09 01:41:52.091462  Sample Info: None
Buffer size: 9409
Buffer n_memory: 1 Presentation TS (PTS): 0.390 s Decoding DTS: 0.000 s Duration: 100.0 ms
Caps 8 n_fields: 9 name: video/x-h264, format: None, height: 480, width: 640
     all fields: stream-format alignment width height pixel-aspect-ratio framerate parsed level profile
  stream-format       :  byte-stream
  alignment           :  au
  width               :  640
  height              :  480
  parsed              :  True
  level               :  4
  profile             :  constrained-baseline

Got new buffer: 2016-01-09 01:41:52.184990  Sample Info: None
Buffer size: 868
Buffer n_memory: 1 Presentation TS (PTS): 0.590 s Decoding DTS: 0.100 s Duration: 100.0 ms
Caps 8 n_fields: 9 name: video/x-h264, format: None, height: 480, width: 640
     all fields: stream-format alignment width height pixel-aspect-ratio framerate parsed level profile
  stream-format       :  byte-stream
  alignment           :  au
  width               :  640
  height              :  480
  parsed              :  True
  level               :  4
  profile             :  constrained-baseline

Got new buffer: 2016-01-09 01:41:52.285425  Sample Info: None
Buffer size: 3202
... 

I searched a lot but could not locate a single example of how one could map the meta API to buffers containing encoded video frames in Python, and I assume it shouldn't be that difficult, since the functionality seems to be provided.

Any suggestions?

user37544

1 Answer


Can you confirm you're not simply trying to access the raw H.264 data? That data is just in the buffer object. To perform further analysis on it, such as whether a buffer contains an I-frame or a P-frame, or SEI info, you'll need something like gsth264parser.c to parse the raw H.264 data.
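For the I-frame / P-frame / SEI part of the question, a full parser isn't strictly needed: the NAL unit type sits in the low 5 bits of the first byte after each Annex B start code. Here is a minimal Python 3 sketch (function names are my own; emulation-prevention bytes are not removed, which is fine when only the NAL header byte is inspected):

```python
# NAL unit types relevant here (ITU-T H.264, Table 7-1)
NAL_TYPES = {
    1: "non-IDR slice (P/B frame)",
    5: "IDR slice (I frame)",
    6: "SEI",
    7: "SPS",
    8: "PPS",
    9: "access unit delimiter",
}

def split_nal_units(data):
    """Split an Annex B byte stream into NAL unit payloads.

    Scans for 00 00 01 start codes (a 4-byte 00 00 00 01 start code is
    matched via its trailing three bytes); the leftover leading zero then
    shows up as a trailing zero of the previous unit and is stripped.
    Rough, but sufficient for inspecting the NAL header byte.
    """
    starts, i, n = [], 0, len(data)
    while i + 2 < n:
        if data[i] == 0 and data[i + 1] == 0 and data[i + 2] == 1:
            starts.append(i + 3)   # payload begins right after the start code
            i += 3
        else:
            i += 1
    ends = [s - 3 for s in starts[1:]] + [n]
    return [data[s:e].rstrip(b"\x00") for s, e in zip(starts, ends)]

def nal_unit_types(data):
    """Return the NAL unit type (low 5 bits of the header byte) per unit."""
    return [unit[0] & 0x1F for unit in split_nal_units(data)]

# Example with a synthetic SPS + PPS + IDR stream:
stream = (b"\x00\x00\x00\x01\x67\x42\x00\x1f"    # SPS (0x67 & 0x1F == 7)
          b"\x00\x00\x00\x01\x68\xce\x38\x80"    # PPS (0x68 & 0x1F == 8)
          b"\x00\x00\x01\x65\x88\x84\x00")       # IDR (0x65 & 0x1F == 5)
print(nal_unit_types(stream))  # [7, 8, 5]
```

With `alignment=au` (as in the caps above), one buffer should be one access unit, so a buffer whose units include type 5 is a keyframe.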

In order to get metadata you have to know which type of metadata you're looking for, e.g. GstMetaXImage. I'm not aware of any H.264-specific metadata types in GStreamer.

mpr
  • Yes, I wanted the formatted/parsed data. As of now, this seems to be the correct answer - there is just no metadata type for h264 in GStreamer yet. – user37544 Jan 28 '16 at 22:06
  • any update here? Did you successfully extract SEI info from a gstreamer pipeline? – matt Apr 23 '19 at 05:43
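Regarding the SEI follow-up: once a type-6 NAL unit has been isolated (see the sketch in the answer above for splitting the byte stream), its payload is a sequence of ff-coded (payloadType, payloadSize) pairs per the H.264 spec, section 7.3.2.3. A rough Python 3 sketch (my own function name; it assumes emulation-prevention bytes have already been removed and does no bounds checking on malformed input):

```python
def sei_payload_types(sei_rbsp):
    """List (payload_type, payload_size) pairs from a SEI NAL payload,
    i.e. the bytes following the 0x06 NAL header byte.

    Each field is coded as a run of 0xFF bytes (255 each) plus a final
    byte; parsing stops at the 0x80 rbsp trailing-bits byte.
    """
    out, i = [], 0
    while i < len(sei_rbsp) and sei_rbsp[i] != 0x80:
        ptype = 0
        while sei_rbsp[i] == 0xFF:   # ff-coded extension bytes
            ptype += 255
            i += 1
        ptype += sei_rbsp[i]
        i += 1
        psize = 0
        while sei_rbsp[i] == 0xFF:
            psize += 255
            i += 1
        psize += sei_rbsp[i]
        i += 1
        out.append((ptype, psize))
        i += psize                   # skip over the payload itself
    return out

# Example: one "user data unregistered" SEI message (type 5, 4 bytes)
print(sei_payload_types(bytes([0x05, 0x04, 0xAA, 0xBB, 0xCC, 0xDD, 0x80])))  # [(5, 4)]
```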