klv-decode-dynamic sample walkthrough

The klv-decode-dynamic sample application demonstrates how to build a GStreamer pipeline for STANAG 4609 file playback with MISB 601 metadata extraction and decoding using the klvdecode plugin. Though the same result can be achieved with less code by using bins, we'll take a manual approach that gives us more control.

Make sure you have the klvdecode plugin and the MisbCore library on your computer, and configure the environment variables so that GStreamer can find them. More on this can be found in the klvdecode plugin configuration guide.
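
As a quick, optional sanity check (not part of the sample itself), you can ask the GStreamer registry whether the element is visible after gst_init() has been called:

```c
/* Optional: verify that GStreamer can see the klvdecode element.
   If this fails, check your GST_PLUGIN_PATH environment variable. */
GstElementFactory *factory = gst_element_factory_find("klvdecode");
if (factory == NULL)
  g_printerr("klvdecode element not found.\n");
else
  gst_object_unref(factory);
```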

We will be manually creating a pipeline that resembles the following:

```
gst-launch-1.0 filesrc location=~/file.ts ! tsdemux name=demux demux. ! queue ! h264parse ! 'video/x-h264, stream-format=byte-stream' ! avdec_h264 ! autovideosink demux. ! queue ! 'meta/x-klv' ! appsink
```

The pipeline has a filesrc that reads a STANAG (.ts) file from disk. The tsdemux element separates video and metadata and exposes them through different output pads. This way, separate branches are created in the pipeline that deal with video and metadata independently: the video is played back, while the KLV metadata is decoded and sent to the console.

Building the pipeline

The code is pretty straightforward - we start by manually creating a pipeline that consists of the filesrc and tsdemux plugins plus two processing branches:

  • video presentation (the h264 parser, decoder and video sink)
  • metadata decoding and presentation (the klvdecode element and the app sink)

This implements a basic GStreamer concept known as "dynamic pipelines", where we build the pipeline "on the fly" as information becomes available. tsdemux emits a pad-added signal for every elementary stream it finds in the file, so we can connect the corresponding processing branch.

Below we'll show the essential code snippets - everything else is regular GStreamer code.
The complete source code can be found here.

First, we define a structure to hold the pointers to the elements:

```c
/* Structure to contain all our information, so we can pass it to callbacks */
typedef struct _CustomData
{
  GstElement *pipeline;
  GstElement *source;
  GstElement *tsDemux;
  GstElement *videoQueue;
  GstElement *dataQueue;
  GstElement *h264parse;
  GstElement *avdec;
  GstElement *klvdec;
  GstElement *videoSink;
  GstElement *dataSink;
} CustomData;
```

Next, we create these elements and the pipeline, and check that all of them were created successfully:

```c
  /* Initialize GStreamer */
  gst_init(&argc, &argv);

  /* Create the elements */
  data.source = gst_element_factory_make("filesrc", "source");
  data.tsDemux = gst_element_factory_make("tsdemux", "demux");
  data.videoQueue = gst_element_factory_make("queue", "videoQueue");
  data.dataQueue = gst_element_factory_make("queue", "dataQueue");
  data.h264parse = gst_element_factory_make("h264parse", "h264parse");
  data.avdec = gst_element_factory_make("avdec_h264", "avdec");
  data.klvdec = gst_element_factory_make("klvdecode", "klvdecode");
  data.videoSink = gst_element_factory_make("autovideosink", "videoSink");
  data.dataSink = gst_element_factory_make("appsink", "dataSink");

  /* Create the empty pipeline */
  data.pipeline = gst_pipeline_new("decode-pipeline");

  if (!data.pipeline || !data.source || !data.tsDemux || !data.videoQueue || !data.dataQueue || !data.h264parse || !data.avdec || !data.klvdec || !data.videoSink || !data.dataSink)
  {
    g_printerr("Not all elements could be created.\n");
    return -1;
  }
```
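
One detail the snippets don't show: filesrc needs to know which file to read. The complete sample takes the path from the command line; the path below is just a placeholder:

```c
  /* Point filesrc at the input file. In the complete sample the path
     comes from argv; "/path/to/file.ts" is a placeholder. */
  g_object_set(data.source, "location", "/path/to/file.ts", NULL);
```
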
Next, we add the elements to the pipeline:  

```c
  /* Build the pipeline. Note that we are NOT linking the source at this point. We will do it later. */
  gst_bin_add_many(GST_BIN(data.pipeline), data.source, data.tsDemux, data.videoQueue, data.klvdec, data.dataQueue, data.h264parse, data.avdec, data.videoSink, data.dataSink, NULL);
```

Now we'll link parts of the pipeline (but not all of them).

This part will be responsible for reading and demuxing the file:

    gst_element_link(data.source, data.tsDemux);

This part will be responsible for video decoding and playback:

    gst_element_link_many(data.videoQueue, data.h264parse, data.avdec, data.videoSink, NULL);

And last, but not least, this part will be responsible for KLV metadata decoding and output:

    gst_element_link_many(data.dataQueue, data.klvdec, data.dataSink, NULL);
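
Each of these link calls returns a boolean, and it's good practice to check it. A minimal sketch of such a check, reusing the error-handling style shown earlier:

```c
  /* Abort early if any of the static links failed. */
  if (!gst_element_link(data.source, data.tsDemux) ||
      !gst_element_link_many(data.videoQueue, data.h264parse, data.avdec, data.videoSink, NULL) ||
      !gst_element_link_many(data.dataQueue, data.klvdec, data.dataSink, NULL))
  {
    g_printerr("Elements could not be linked.\n");
    gst_object_unref(data.pipeline);
    return -1;
  }
```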

Next, we'll assign two callbacks:

The first one is called whenever a new pad is added (one for every elementary stream in the file). This allows us to discover the stream types and connect the pipeline branches created above:

    /* Connect to the pad-added signal */
    g_signal_connect(data.tsDemux, "pad-added", G_CALLBACK(pad_added_handler), &data);

The second callback will notify us that a new data sample has arrived:

    g_signal_connect(data.dataSink, "new-sample", G_CALLBACK(new_sample), &data);
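
Note that appsink does not emit the new-sample signal by default - its emit-signals property has to be enabled first (the complete sample is assumed to do this when configuring the sink):

```c
    /* appsink is silent by default; enable signal emission so that
       the new-sample callback actually fires. */
    g_object_set(data.dataSink, "emit-signals", TRUE, NULL);
```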

In the pad_added_handler, we check the type of the demultiplexed elementary stream and connect the video and data branches accordingly:

```c
static void pad_added_handler(GstElement *src, GstPad *new_pad, CustomData *data)
{
...

  /* Check the new pad's type */
  new_pad_caps = gst_pad_get_current_caps(new_pad);
  new_pad_struct = gst_caps_get_structure(new_pad_caps, 0);
  new_pad_type = gst_structure_get_name(new_pad_struct);

  g_print("Received new pad '%s' from '%s' of type '%s':\n", GST_PAD_NAME(new_pad), GST_ELEMENT_NAME(src), new_pad_type);

  if (g_str_has_prefix(new_pad_type, "video/x-h264"))
    sink_pad = gst_element_get_static_pad(data->videoQueue, "sink");
  else if (g_str_has_prefix(new_pad_type, "meta/x-klv"))
    sink_pad = gst_element_get_static_pad(data->dataQueue, "sink");
  else
  {
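    /* Any other stream type (audio, for example) is routed to a fakesink
       so that the demuxer pad is not left unlinked. */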
    sink = gst_element_factory_make("fakesink", NULL);
    gst_bin_add(GST_BIN(data->pipeline), sink);
    sink_pad = gst_element_get_static_pad(sink, "sink");
    gst_element_sync_state_with_parent(sink);
  }

  if (sink_pad == NULL || gst_pad_is_linked(sink_pad))
  {
    g_print("Pad unavailable or already linked. Ignoring.\n");
    goto exit;
  }

  /* Attempt the link */
  ret = gst_pad_link(new_pad, sink_pad);
  if (GST_PAD_LINK_FAILED(ret))
    g_print("Type is '%s' but link failed.\n", new_pad_type);
  else
    g_print("Link succeeded (type '%s').\n", new_pad_type);

exit:
  /* Unreference the new pad's caps, if we got them */
  if (new_pad_caps != NULL)
    gst_caps_unref(new_pad_caps);

  /* Unreference the sink pad */
  if (sink_pad != NULL)
    gst_object_unref(sink_pad);
}
```

In the new_sample callback, we get the decoded KLV metadata buffer and print it to stdout:

```c
static GstFlowReturn new_sample(GstElement *sink, CustomData *data)
{
  GstSample *sample;

  /* Retrieve the buffer */
  g_signal_emit_by_name(sink, "pull-sample", &sample);
  if (sample)
  {
    GstBuffer *gstBuffer = gst_sample_get_buffer(sample);

    if (gstBuffer)
    {
      GstMapInfo map;
      gst_buffer_map(gstBuffer, &map, GST_MAP_READ);

      /* The decoded KLV arrives as JSON text, but the buffer is not
         guaranteed to be NUL-terminated, so print exactly map.size bytes. */
      g_print("Klv packet: %.*s\n", (int)map.size, (char *)map.data);

      gst_buffer_unmap(gstBuffer, &map);
      gst_sample_unref(sample);
      return GST_FLOW_OK;
    }

    /* The sample must be released even if it carried no buffer. */
    gst_sample_unref(sample);
  }

  return GST_FLOW_ERROR;
}
```

Everything is ready. All we have to do now is start the playback:

    gst_element_set_state(data.pipeline, GST_STATE_PLAYING);
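
To keep the application alive until the file ends or an error occurs, we can block on the pipeline's bus. A minimal sketch (the complete sample's shutdown handling may differ):

```c
    /* Wait until error or EOS, then free resources. */
    GstBus *bus = gst_element_get_bus(data.pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg != NULL)
      gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(data.pipeline, GST_STATE_NULL);
    gst_object_unref(data.pipeline);
```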

The video will play in a pop-up window, and the KLV metadata, decoded into JSON packets, will be printed to the console.