GStreamer 1.14 Release Notes

GStreamer 1.14.0 was originally released on 19 March 2018.

The latest bug-fix release in the 1.14 series is 1.14.5 and was released on 29 May 2019.

1.14.5 will likely be the last release in the 1.14 release series which has now been superseded by the 1.16 release series.

See https://gstreamer.freedesktop.org/releases/1.14/ for the latest version of this document.

Last updated: Wednesday 29 May 2019, 12:00 UTC (log)

Introduction

The GStreamer team is proud to announce a new major feature release in the stable 1.x API series of your favourite cross-platform multimedia framework!

As always, this release is again packed with new features, bug fixes and other improvements.

Highlights

  • WebRTC support: real-time audio/video streaming to and from web browsers

  • Experimental support for the next-gen royalty-free AV1 video codec

  • Video4Linux: encoding support, stable element names and faster device probing

  • Support for the Secure Reliable Transport (SRT) video streaming protocol

  • RTP Forward Error Correction (FEC) support (ULPFEC)

  • RTSP 2.0 support in rtspsrc and gst-rtsp-server

  • ONVIF audio backchannel support in gst-rtsp-server and rtspsrc

  • playbin3 gapless playback and pre-buffering support

  • tee, our stream splitter/duplication element, now does allocation query aggregation which is important for efficient data handling and zero-copy

  • QuickTime muxer has a new prefill recording mode that allows file import in Adobe Premiere and FinalCut Pro while the file is still being written.

  • rtpjitterbuffer fast-start mode and timestamp offset adjustment smoothing

  • souphttpsrc connection sharing, which allows for connection reuse, cookie sharing, etc.

  • nvdec: new plugin for hardware-accelerated video decoding using the NVIDIA NVDEC API

  • Adaptive DASH trick play support

  • ipcpipeline: new plugin that allows splitting a pipeline across multiple processes

  • Major gobject-introspection annotation improvements for large parts of the library API

  • GStreamer C# bindings have been revived and seen many updates and fixes

  • The externally maintained GStreamer Rust bindings have seen many usability improvements and now cover most of the API. A new release of the bindings covering the 1.14 API additions is being made to coincide with the 1.14 release.

Major new features and changes

WebRTC support

There is now basic support for WebRTC in GStreamer in form of a new webrtcbin element and a webrtc support library. This allows you to build applications that set up connections with and stream to and from other WebRTC peers, whilst leveraging all of the usual GStreamer features such as hardware-accelerated encoding and decoding, OpenGL integration, zero-copy and embedded platform support. And it's easy to build and integrate into your application too!

WebRTC enables real-time communication of audio, video and data with web browsers and native apps, and it is supported, or about to be supported, by recent versions of all major browsers and operating systems.

GStreamer's new WebRTC implementation uses libnice for Interactive Connectivity Establishment (ICE) to figure out the best way to communicate with other peers, punch holes into firewalls, and traverse NATs.

The implementation is not complete, but all the basics are there, and the code sticks fairly close to the PeerConnection API. Where functionality is missing it should be fairly obvious where it needs to go.
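
To give a feel for how the pieces fit together, here is a minimal, illustrative C sketch of creating an SDP offer with webrtcbin and GstPromise. The pipeline string, the element name "sendrecv" and the omitted signalling transport are assumptions made for this example, not part of any fixed API; a complete send/receive example is available in the blog posts referenced below.

    /* Minimal sketch: creating an SDP offer with webrtcbin and GstPromise.
     * Error handling and the actual signalling transport are omitted. */
    #include <gst/gst.h>
    #include <gst/webrtc/webrtc.h>

    static void
    on_offer_created (GstPromise * promise, gpointer user_data)
    {
      GstElement *webrtc = user_data;
      GstWebRTCSessionDescription *offer = NULL;
      const GstStructure *reply = gst_promise_get_reply (promise);

      gst_structure_get (reply, "offer",
          GST_TYPE_WEBRTC_SESSION_DESCRIPTION, &offer, NULL);
      gst_promise_unref (promise);

      /* apply the offer locally, then send offer->sdp to the remote peer
       * over the application's own signalling channel */
      g_signal_emit_by_name (webrtc, "set-local-description", offer, NULL);
      gst_webrtc_session_description_free (offer);
    }

    static void
    on_negotiation_needed (GstElement * webrtc, gpointer user_data)
    {
      GstPromise *promise =
          gst_promise_new_with_change_func (on_offer_created, webrtc, NULL);
      g_signal_emit_by_name (webrtc, "create-offer", NULL, promise);
    }

    int
    main (int argc, char **argv)
    {
      gst_init (&argc, &argv);

      GstElement *pipeline = gst_parse_launch (
          "videotestsrc is-live=true ! vp8enc deadline=1 ! rtpvp8pay ! "
          "application/x-rtp,media=video,encoding-name=VP8,payload=96 ! "
          "webrtcbin name=sendrecv", NULL);
      GstElement *webrtc = gst_bin_get_by_name (GST_BIN (pipeline), "sendrecv");

      g_signal_connect (webrtc, "on-negotiation-needed",
          G_CALLBACK (on_negotiation_needed), NULL);

      gst_element_set_state (pipeline, GST_STATE_PLAYING);
      g_main_loop_run (g_main_loop_new (NULL, FALSE));
      return 0;
    }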

For more details, background and example code, check out Nirbheek's blog post GStreamer has grown a WebRTC implementation, as well as Matthew's GStreamer WebRTC talk from last year's GStreamer Conference in Prague.

New Elements

  • webrtcbin handles the transport aspects of webrtc connections (see WebRTC section above for more details)

  • New srtsink and srtsrc elements for the Secure Reliable Transport (SRT) video streaming protocol, which aims to be easy to use whilst striking a new balance between reliability and latency for low-latency video streaming use cases. More details about SRT and its GStreamer implementation can be found in Olivier's blog post SRT in GStreamer.

  • av1enc and av1dec elements providing experimental support for the next-generation royalty-free AV1 video codec, alongside Matroska support for it.

  • hlssink2 is a rewrite of the existing hlssink element, but unlike its predecessor hlssink2 takes elementary streams as input and handles the muxing to MPEG-TS internally. It also leverages splitmuxsink internally to do the splitting. This allows more control over the chunk splitting and sizing process and relies less on the co-operation of an upstream muxer. Unlike the old hlssink, it also works with pre-encoded streams and does not require close interaction with an upstream encoder element.

  • audiolatency is a new element for measuring audio latency end-to-end and is useful to measure roundtrip latency including both the GStreamer-internal latency as well as latency added by external components or circuits.

  • fakevideosink is basically a null sink for video data and very similar to fakesink, only that it will answer allocation queries and will advertise support for various video-specific things such as GstVideoMeta, GstVideoCropMeta and GstVideoOverlayCompositionMeta like a normal video sink would. This is useful for throughput testing and testing the zero-copy path when creating a new pipeline.

  • ipcpipeline: new plugin that allows the splitting of a pipeline into multiple processes. Usually a GStreamer pipeline runs in a single process and parallelism is achieved by distributing workloads using multiple threads. This means, however, that all elements in the pipeline have access to all the other elements' memory space, including that of any libraries used. For security reasons one might therefore want to put sensitive parts of a pipeline such as DRM and decryption handling into a separate process to isolate it from the rest of the pipeline. This can now be achieved with the new ipcpipeline plugin. Check out George's blog post ipcpipeline: Splitting a GStreamer pipeline into multiple processes or his lightning talk from last year's GStreamer Conference in Prague for all the gory details.

  • proxysink and proxysrc are new elements to pass data from one pipeline to another within the same process, very similar to the existing inter elements, but not limited to raw audio and video data (see the sketch after this list). These new proxy elements are very special in how they work under the hood, which makes them extremely powerful, but also dangerous if not used with care. The reason for this is that it's not just data that's passed from sink to src: these elements basically establish a two-way wormhole that passes through queries and events in both directions, which means caps negotiation and allocation-query-driven zero-copy can work through this wormhole. There are scheduling considerations as well: proxysink forwards everything into the proxysrc pipeline directly from the proxysink streaming thread. There is a queue element inside proxysrc to decouple the source thread from the sink thread, but that queue is not unlimited, so it is entirely possible that the proxysink pipeline thread gets stuck in the proxysrc pipeline, e.g. when that pipeline is paused or stops consuming data for some other reason. This means that one should always shut down the proxysrc pipeline before shutting down the proxysink pipeline, for example, or at least take care when shutting down pipelines. Usually this is not a problem though, especially not in live pipelines. For more information see Nirbheek's blog post Decoupling GStreamer Pipelines, and also check out the new ipcpipeline plugin for sending data from one process to another (see above).

  • lcms is a new LCMS-based ICC color profile correction element

  • openmptdec is a new OpenMPT-based decoder for module music formats, such as S3M, MOD, XM, IT. It is built on top of a new GstNonstreamAudioDecoder base class which aims to unify handling of files that do not follow a streaming model. The wildmidi plugin has also been revived and is also implemented on top of this new base class.

  • The curl plugin has gained a new curlhttpsrc element, which is useful for testing HTTP protocol version 2.0 amongst other things.

  • The msdk plugin has gained an MPEG-2 video decoder (msdkmpeg2dec), a VP8 decoder (msdkvp8dec) and a VC1/WMV decoder (msdkvc1dec).
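
As a rough illustration of the proxy elements mentioned above, the following sketch wires a producer pipeline into a consumer pipeline via proxysink/proxysrc. The pipeline descriptions and element names are arbitrary choices for this example; only the "proxysink" property used to connect the pair is taken from the feature description above.

    /* Sketch: passing data between two pipelines in the same process with
     * proxysink/proxysrc. The pipelines keep their own state, clock and bus,
     * but buffers, events and queries flow through the proxy pair. */
    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      gst_init (&argc, &argv);

      GstElement *producer = gst_parse_launch (
          "videotestsrc is-live=true ! proxysink name=psink", NULL);
      GstElement *consumer = gst_parse_launch (
          "proxysrc name=psrc ! videoconvert ! autovideosink", NULL);

      GstElement *psink = gst_bin_get_by_name (GST_BIN (producer), "psink");
      GstElement *psrc = gst_bin_get_by_name (GST_BIN (consumer), "psrc");

      /* connect the pair: proxysrc gets its data from the given proxysink */
      g_object_set (psrc, "proxysink", psink, NULL);

      gst_element_set_state (consumer, GST_STATE_PLAYING);
      gst_element_set_state (producer, GST_STATE_PLAYING);

      g_usleep (3 * G_USEC_PER_SEC);  /* run for a few seconds */

      /* shut down the proxysrc pipeline before the proxysink pipeline */
      gst_element_set_state (consumer, GST_STATE_NULL);
      gst_element_set_state (producer, GST_STATE_NULL);
      return 0;
    }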

Noteworthy new API

  • GstPromise provides future/promise-like functionality. This is used in the GStreamer WebRTC implementation.

  • GstReferenceTimestampMeta is a new meta that allows you to attach additional reference timestamps to a buffer. These timestamps don't have to relate to the pipeline clock in any way. Examples of this could be an NTP timestamp when the media was captured, a frame counter on the capture side or the (local) UNIX timestamp when the media was captured. The decklink elements make use of this.

  • GstVideoRegionOfInterestMeta: it's now possible to attach generic free-form element-specific parameters to a region of interest meta, for example to tell a downstream encoder to use certain codec parameters for a certain region.

  • gst_bus_get_pollfd can be used to obtain a file descriptor for the bus that can be poll()-ed on for new messages. This is useful for integration with non-GLib event loops (a sketch follows after this list).

  • gst_get_main_executable_path() can be used by wrapper plugins that need to find things in the directory where the application executable is located. In the same vein, GST_PLUGIN_DEPENDENCY_FLAG_PATHS_ARE_RELATIVE_TO_EXE can be used to signal that plugin dependency paths are relative to the main executable.

  • pad templates can now be told the GType of their pad subclass via the newly-added GstPadTemplate API or the gst_element_class_add_static_pad_template_with_gtype() convenience function. gst-inspect-1.0 will use this information to print pad properties.

  • new convenience functions to iterate over element pads without using the GstIterator API: gst_element_foreach_pad(), gst_element_foreach_src_pad(), and gst_element_foreach_sink_pad().

  • GstBaseSrc and appsrc have gained support for buffer lists: GstBaseSrc subclasses can use gst_base_src_submit_buffer_list(), and applications can use gst_app_src_push_buffer_list() to push a buffer list into appsrc.

  • The GstHarness unit test harness has a couple of new convenience functions to retrieve all pending data in the harness in form of a single chunk of memory.

  • GstAudioStreamAlign is a new helper object for audio elements that handles discontinuity detection and sample alignment. It will align samples after the previous buffer's samples, but keep track of the divergence between buffer timestamps and sample position (jitter). If it exceeds a configurable threshold the alignment will be reset. This simply factors out code that was duplicated in a number of elements into a common helper API.

  • The GstVideoEncoder base class now implements Quality of Service (QoS). This is disabled by default and must be opted into by setting the "qos" property, which makes the base class gather statistics about the real-time performance of the pipeline from downstream elements (usually sinks that sync to the clock). Subclasses can then make use of this by checking whether input frames are already late using gst_video_encoder_get_max_encode_time(). If a frame is late, they can simply drop it and skip encoding in the hope that the pipeline will catch up.

  • The GstVideoOverlay interface gained a few helper functions for installing and handling a "render-rectangle" property on elements that implement this interface, so that this functionality can also be used from the command line for testing and debugging purposes. The property wasn't added to the interface itself as that would require all implementors to provide it which would not be backwards-compatible.

  • A new base class, GstNonstreamAudioDecoder for non-stream audio decoders was added to gst-plugins-bad. This base-class is meant to be used for audio decoders that require the whole stream to be loaded first before decoding can start. Examples of this are module formats (MOD/S3M/XM/IT/etc), C64 SID tunes, video console music files (GYM/VGM/etc), MIDI files and others. The new openmptdec element is based on this.

  • Full list of API new in 1.14:
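
As a small illustration of the new bus API mentioned above, here is a sketch of how gst_bus_get_pollfd() might be combined with a plain poll() loop. It assumes a POSIX system, and the test pipeline is an arbitrary example.

    /* Sketch: waiting for bus messages with poll() instead of a GLib main
     * loop. Assumes a POSIX system; the pipeline is just an example. */
    #include <gst/gst.h>
    #include <poll.h>

    int
    main (int argc, char **argv)
    {
      gst_init (&argc, &argv);

      GstElement *pipeline =
          gst_parse_launch ("videotestsrc num-buffers=100 ! fakesink", NULL);
      GstBus *bus = gst_element_get_bus (pipeline);

      GPollFD gpfd;
      gst_bus_get_pollfd (bus, &gpfd);

      gst_element_set_state (pipeline, GST_STATE_PLAYING);

      struct pollfd pfd = { .fd = gpfd.fd, .events = POLLIN };
      gboolean done = FALSE;
      while (!done && poll (&pfd, 1, -1) > 0) {
        GstMessage *msg;
        while ((msg = gst_bus_pop (bus)) != NULL) {   /* drain pending messages */
          if (GST_MESSAGE_TYPE (msg) & (GST_MESSAGE_EOS | GST_MESSAGE_ERROR))
            done = TRUE;
          gst_message_unref (msg);
        }
      }

      gst_element_set_state (pipeline, GST_STATE_NULL);
      gst_object_unref (bus);
      gst_object_unref (pipeline);
      return 0;
    }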

New RTP features and improvements

  • rtpulpfecenc and rtpulpfecdec are new elements that implement Generic Forward Error Correction (FEC) using Uneven Level Protection (ULP) as described in RFC 5109. This can be used to protect against certain types of (non-bursty) packet loss, and important packets such as those containing codec configuration data or key frames can be protected with higher redundancy. Equally, packets that are not particularly important can be given low priority or not be protected at all. If packets are lost, the receiver can then hopefully restore the lost packet(s) from the surrounding packets which were received. This is an alternative to, or rather complementary to, dealing with packet loss using retransmission (rtx). GStreamer has had retransmission support for a long time, but Forward Error Correction allows for different trade-offs: The advantage of Forward Error Correction is that it doesn't add latency, whereas retransmission requires at least one more roundtrip to request and hopefully receive lost packets; Forward Error Correction increases the required bandwidth however, even in situations where there is no packet loss at all, so one will typically want to fine-tune the overhead and mechanisms used based on the characteristics of the link at the time.

  • New Redundant Audio Data (RED) encoders and decoders for RTP as per RFC 2198 are also provided (rtpredenc and rtpreddec), mostly for Chrome WebRTC compatibility, as Chrome will wrap ULPFEC-protected streams in RED packets, and such streams need to be wrapped and unwrapped in order to use ULPFEC with Chrome.

  • A few new buffer flags for FEC support: GST_BUFFER_FLAG_NON_DROPPABLE can be used to mark important buffers, e.g. to flag RTP packets carrying keyframes or codec setup data for RTP Forward Error Correction purposes, or to prevent still video frames from being dropped by elements due to QoS. (There already is a GST_BUFFER_FLAG_DROPPABLE.) GST_RTP_BUFFER_FLAG_REDUNDANT signals internally that a packet represents a redundant RTP packet and is used in rtpstorage to hold back the packet and use it only for recovery from packet loss. Further work is still needed in payloaders to make use of these.

  • rtpbin now has an option for increasing timestamp offsets gradually: Sudden large changes to the internal ts_offset may cause timestamps to move backwards and may also cause visible glitches in media playback. The new "max-ts-offset-adjustment" and "max-ts-offset" properties let the application control the rate to apply changes to ts_offset. There have also been some EOS/BYE handling improvements in rtpbin.

  • rtpjitterbuffer has a new fast-start mode: in many scenarios the jitter buffer will have to wait for the full configured latency before it can start outputting packets. The reason is that it often can't know what the sequence number of the first expected RTP packet is, so it can't know whether a packet earlier than the earliest packet received so far will still arrive in the future. This behaviour can now be bypassed by setting the "faststart-min-packets" property to the number of consecutive packets needed to start, and the jitter buffer will start outputting packets as soon as it has that many consecutive packets queued internally. This is particularly useful to get a first video frame decoded and rendered as quickly as possible (see the sketch after this list).

  • rtpL8pay and rtpL8depay provide RTP payloading and depayloading for 8-bit raw audio
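
For applications that use rtpbin, the fast-start mode mentioned above could be enabled roughly as sketched below. The sketch assumes the jitter buffers are created internally by rtpbin and reached via its "new-jitterbuffer" signal; the threshold of 4 packets is an arbitrary example value.

    /* Sketch: enabling jitter buffer fast-start from an rtpbin-based
     * application; the threshold of 4 packets is an arbitrary example. */
    #include <gst/gst.h>

    static void
    on_new_jitterbuffer (GstElement * rtpbin, GstElement * jitterbuffer,
        guint session, guint ssrc, gpointer user_data)
    {
      /* start pushing packets out once 4 consecutive packets are queued,
       * instead of always waiting for the full configured latency */
      g_object_set (jitterbuffer, "faststart-min-packets", 4, NULL);
    }

    static void
    setup_rtpbin (GstElement * rtpbin)
    {
      g_signal_connect (rtpbin, "new-jitterbuffer",
          G_CALLBACK (on_new_jitterbuffer), NULL);
    }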

New element features

  • playbin3 has gained support for gapless playback via the "about-to-finish" signal, where users can set the uri of the next item to play (a minimal code sketch follows at the end of this list). For non-live streams this will be emitted as soon as the first uri has finished downloading, so with sufficiently large buffers it is now possible to pre-buffer the next item well ahead of time (unlike playbin, where there would not be a lot of time between the "about-to-finish" emission and the end of the stream). If the stream format of the next stream is the same as that of the previous stream, the data will be concatenated via the concat element. Whether this results in true gaplessness depends on the container format and codecs used; there might still be codec-related gaps between streams with some codecs.

  • tee now does allocation query aggregation, which is important for zero-copy and efficient data handling, especially for video. Those who want to drop allocation queries on purpose can use the identity element's new "drop-allocation" property for that instead.

  • audioconvert now has a "mix-matrix" property, which obsoletes the audiomixmatrix element. There's also mix matrix support in the audio conversion and channel mixing API.

  • x264enc: new "insert-vui" property to disable VUI (Video Usability Information) parameter insertion into the stream, which allows creation of streams that are compatible with certain legacy hardware decoders that will refuse to decode in certain combinations of resolution and VUI parameters; the max. allowed number of B-frames was also increased from 4 to 16.

  • dvdlpcmdec has gained support for Blu-ray audio LPCM.

  • appsrc has gained support for buffer lists (see above) and also seen some other performance improvements.

  • flvmux has been ported to the GstAggregator base class which means it can work in defined-latency mode with live input sources and continue streaming if one of the inputs stops producing data.

  • jpegenc has gained a "snapshot" property just like pngenc to make it easier to output just a single encoded frame.

  • jpegdec will now handle interlaced MJPEG streams properly and also handles frames without an End of Image marker better.

  • v4l2: There are now video encoders for VP8, VP9, MPEG4, and H263. The v4l2 video decoder handles dynamic resolution changes, and the video4linux device provider now does much faster device probing. The plugin also no longer uses the libv4l2 library by default, as it prevented a lot of interesting use cases such as CREATE_BUFS, DMABuf and TRY_FMT usage. As the libv4l2 library is totally inactive and not really maintained, we decided to disable it. This might affect a small number of cheap/old webcams with custom vendor formats for which we do not provide conversion in GStreamer. It is possible to re-enable libv4l2 support at run-time, however, by setting the environment variable GST_V4L2_USE_LIBV4L2=1.

  • rtspsrc now has support for RTSP protocol version 2.0 as well as ONVIF audio backchannels (see below for more details). It also sports a new "accept-certificate" signal for "manually" checking a TLS certificate for validity. It now also prints RTSP/SDP messages to the gstreamer debug log instead of stdout.

  • shout2send now uses non-blocking I/O and has a configurable network operations timeout.

  • splitmuxsink has gained a "split-now" action signal and new "alignment-threshold" and "use-robust-muxing" properties. If robust muxing is enabled, it will check and set the muxer's reserved space properties if present. This is primarily for use with mp4mux's robust muxing mode.

  • qtmux has a new prefill recording mode which sets up a moov header with the correct sample positions beforehand, which then allows software like Adobe Premiere and FinalCut Pro to import the files while they are still being written to. This only works with constant framerate I-frame only streams, and for now only support for ProRes video and raw audio is implemented. Adding support for additional codecs is just a matter of defining appropriate maximum frame sizes though.

  • qtmux also supports writing of svmi atoms with stereoscopic video information now. Trak timescales can be configured on a per-stream basis using the "trak-timescale" property on the sink pads. Various new formats can be muxed: MPEG layer 1 and 2, AC3 and Opus, as well as PNG and VP9.

  • souphttpsrc now does connection sharing by default: it shares its SoupSession with other elements in the same pipeline via a GstContext if possible (session-wide settings are all the defaults). This allows for connection reuse, cookie sharing, etc. Applications can also force a context to use. In other news, HTTP headers received from the server are posted as element messages on the bus now for easier diagnostics, and it's also possible now to use other types of proxy servers such as SOCKS4 or SOCKS5 proxies, support for which is implemented directly in gio. Before only HTTP proxies were allowed.

  • qtmux, mp4mux and matroskamux will now refuse caps changes of input streams at runtime. This isn't really supported with these containers (or would have to be implemented differently, with considerable effort) and doesn't produce valid, spec-compliant files that will play everywhere. So if you can't guarantee that the input caps won't change, use a container format that does support on-the-fly caps changes for a stream, such as MPEG-TS, or use splitmuxsink, which can start a new file when the caps change. What would happen before is that e.g. rtph264depay or rtph265depay would simply send new SPS/PPS inband even for AVC format, which would then get muxed into the container as if nothing had changed. Some decoders will handle this just fine, but that's often more luck than design. In any case, it's not right, so we disallow it now.

  • matroskamux has Table of Contents (TOC) support now (chapters etc.) and matroskademux TOC support has been improved. matroskademux has also seen improvements in seeking to the right cluster and position.

  • videocrop now uses GstVideoCropMeta if downstream supports it, which means cropping can be handled more efficiently without any copying.

  • compositor now has support for crossfade blending, which can be used via the new "crossfade-ratio" property on the sink pads.

  • The avwait element has a new "end-timecode" property and posts "avwait-status" element messages now whenever avwait starts or stops passing through data (e.g. because target-timecode and end-timecode respectively have been reached).

  • The alsamidisrc element had been broken for many years and has now been fixed, allowing live capture from MIDI hardware.

  • h264parse and h265parse will try harder to make upstream output the same caps as downstream requires or prefers, thus avoiding unnecessary conversion. The parsers also expose chroma format and bit depth in the caps now.

  • The dtls elements no longer rely on or require the application to run a GLib main loop that iterates the default main context (GStreamer plugins should never rely on the application running a GLib main loop).

  • openh264enc now allows changing the encoding bitrate dynamically at runtime

  • nvdec is a new plugin for hardware-accelerated video decoding using the NVIDIA NVDEC API (which replaces the old VDPAU API which is no longer supported by NVIDIA)

  • The NVIDIA NVENC hardware-accelerated video encoders now support dynamic bitrate and preset reconfiguration and support the I420 4:2:0 video format. It's also possible to configure the gop size via the new "gop-size" property.

  • The MPEG-TS muxer and demuxer (tsmux, tsdemux) now have support for JPEG2000

  • openjpegdec and jpeg2000parse support 2-component images now (gray with alpha), and jpeg2000parse has gained limited support for conversion between JPEG2000 stream formats (JP2, J2C, JPC) and also extracts more details such as colorimetry, interlace-mode, field-order, multiview-mode and chroma siting.

  • The decklink plugin for Blackmagic capture and playback cards has seen numerous improvements:

    • decklinkaudiosrc and decklinkvideosrc now put hardware reference timestamps on buffers in the form of GstReferenceTimestampMeta. This can be useful on multi-channel cards to know which frames from different channels were captured at the same time.

    • decklinkvideosink has gained support for Decklink hardware keying with two new properties ("keyer-mode" and "keyer-level") to control the built-in hardware keyer of Decklink cards.

    • decklinkaudiosink has been re-implemented around GstBaseSink instead of the GstAudioBaseSink base class, since the Decklink APIs don't fit very well with the GstAudioBaseSink APIs, which used to cause various problems due to inaccuracies in the clock calculations, such as audio drop-outs and A/V sync going wrong after pausing or seeking.

    • support for more than 16 devices, without any artificial limit

  • Work continued on the msdk plugin for Intel's Media SDK, which enables hardware-accelerated video encoding and decoding on Intel graphics hardware on Windows or Linux. Video memory, buffer pool, and context/session sharing support was added, which helps improve performance and resource utilization. Render-node support is in place, which avoids the constraint of having a running graphics server as DRM master. Encoders now expose a number of rate control algorithms. More encoder tuning options such as trellis quantization (h264), slice size control (h264), B-pyramid prediction (h264), MB-level bitrate control, frame partitioning and adaptive I/B frame insertion were added, and more pixel formats and video codecs are supported now. The encoder now also handles force-key-unit events and can insert frame-packing SEIs for side-by-side and top-bottom stereoscopic 3D video.

  • dashdemux can now do adaptive trick play of certain types of DASH streams, meaning it can do fast-forward/fast-rewind of normal (non-I frame only) streams even at high speeds without saturating network bandwidth or exceeding decoder capabilities. It will keep statistics and skip keyframes or fragments as needed. See Sebastian's blog post DASH trick-mode playback in GStreamer for more details. It also supports webvtt subtitle streams now and has seen improvements when seeking in live streams.

  • kmssink has seen lots of fixes and improvements in this cycle, including:

    • Raspberry Pi (vc4) and Xilinx DRM driver support

    • new "render-rectangle" property that can be used from the command line as well as "display-width" and "display-height", and "can-scale" properties

    • GstVideoCropMeta support
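
As promised in the playbin3 item above, here is a minimal sketch of gapless playback via the "about-to-finish" signal. The URIs are placeholders made up for this example; a real application would of course manage a playlist.

    /* Sketch: gapless playback with playbin3 via "about-to-finish".
     * The URIs are placeholders; a real application would keep a playlist. */
    #include <gst/gst.h>

    static void
    on_about_to_finish (GstElement * playbin, gpointer user_data)
    {
      /* queue the next item; with playbin3 and non-live streams this fires
       * as soon as the current URI has finished downloading, leaving time
       * to pre-buffer the next one */
      g_object_set (playbin, "uri", "file:///path/to/next-track.ogg", NULL);
    }

    int
    main (int argc, char **argv)
    {
      gst_init (&argc, &argv);

      GstElement *playbin = gst_element_factory_make ("playbin3", NULL);
      g_object_set (playbin, "uri", "file:///path/to/first-track.ogg", NULL);
      g_signal_connect (playbin, "about-to-finish",
          G_CALLBACK (on_about_to_finish), NULL);

      gst_element_set_state (playbin, GST_STATE_PLAYING);
      g_main_loop_run (g_main_loop_new (NULL, FALSE));
      return 0;
    }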

Plugin and library moves

MPEG-1 audio (mp1, mp2, mp3) decoders and encoders moved to -good

Following the expiration of the last remaining mp3 patents in most jurisdictions, and the termination of the mp3 licensing program, as well as the decision by certain distros to officially start shipping full mp3 decoding and encoding support, these plugins should now no longer be problematic for most distributors and have therefore been moved from -ugly and -bad to gst-plugins-good. Distributors can still disable these plugins if desired.

In particular these are the mpg123-based audio decoder plugin and the LAME and TwoLAME based audio encoder plugins.

GstAggregator moved from -bad to core

GstAggregator has been moved from gst-plugins-bad to the base library in GStreamer and is now stable API.

GstAggregator is a new base class for mixers and muxers that have to handle multiple input pads and aggregate streams into one output stream. It improves upon the existing GstCollectPads API in that it is a proper base class which was also designed with live streaming in mind. GstAggregator subclasses will operate in a mode with defined latency if any of the inputs are live streams. This ensures that the pipeline won't stall if any of the inputs stop producing data, and that the configured maximum latency is never exceeded.

GstAudioAggregator, audiomixer and audiointerleave moved from -bad to -base

GstAudioAggregator is a new base class for raw audio mixers and muxers and is based on GstAggregator (see above). It provides defined-latency mixing of raw audio inputs and ensures that the pipeline won't stall even if one of the input streams stops producing data.

As part of the move to stabilise the API there were some last-minute API changes and clean-ups, but those should mostly affect internal elements.

It is used by the audiomixer element, which is a replacement for 'adder', which did not handle live inputs very well and did not align input streams according to running time. audiomixer should behave much better in that respect and generally behave as one would expect in most scenarios.

Similarly, audiointerleave replaces the 'interleave' element which did not handle live inputs or non-aligned inputs very robustly.

GstAudioAggregator and its subclasses have gained support for input format conversion, which does not include sample rate conversion though, as that would add additional latency. Furthermore, GAP events are now handled correctly.

We hope to move the video equivalents (GstVideoAggregator and compositor) to -base in the next cycle, i.e. for 1.16.

GStreamer OpenGL integration library and plugin moved from -bad to -base

The GStreamer OpenGL integration library and opengl plugin have moved from gst-plugins-bad to -base and are now part of the stable API canon. Not all OpenGL elements have been moved; a few had to be left behind in gst-plugins-bad in the new openglmixers plugin, because they depend on the GstVideoAggregator base class which we were not able to move in this cycle. We hope to reunite these elements with the rest of their family for 1.16 though.

This is quite a milestone, thanks to everyone who worked to make this happen!

Qt QML and GTK plugins moved from -bad to -good

The Qt QML-based qmlgl plugin has moved to -good and provides a qmlglsink video sink element as well as a qmlglsrc element. qmlglsink renders video into a QQuickItem, and qmlglsrc captures a window from a QML view and feeds it as video into a pipeline for further processing. Both elements leverage GStreamer's OpenGL integration. In addition to the move to -good the following features were added:

  • A proxy object is now used for thread-safe access to the QML widget which prevents crashes in corner case scenarios: QML can destroy the video widget at any time, so without this we might be left with a dangling pointer.

  • EGL is now supported with the X11 backend, which works e.g. on Freescale imx6

The GTK+ plugin has also moved from -bad to -good. It includes gtksink and gtkglsink, which both render video into a GtkWidget. gtksink uses Cairo for rendering the video, which will work everywhere in all scenarios but involves an extra memory copy, whereas gtkglsink fully leverages GStreamer's OpenGL integration but might not work properly in all scenarios, e.g. where the OpenGL driver does not properly support multiple sharing contexts in different threads. On Linux, Nouveau is known to be broken in this respect, whilst NVIDIA's proprietary drivers and most other drivers generally work fine; the experience with Intel's driver seems to be mixed, some proprietary embedded Linux drivers don't work, and macOS works.
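
For illustration, here is a minimal sketch of embedding the widget provided by gtksink into a GTK+ window. The test pipeline is an arbitrary example and error handling is omitted.

    /* Sketch: embedding the video widget provided by gtksink in a window.
     * gtkglsink could be substituted where the GL path is known to work. */
    #include <gst/gst.h>
    #include <gtk/gtk.h>

    int
    main (int argc, char **argv)
    {
      gst_init (&argc, &argv);
      gtk_init (&argc, &argv);

      GstElement *pipeline = gst_parse_launch (
          "videotestsrc ! videoconvert ! gtksink name=sink", NULL);
      GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");

      /* the sink exposes the GtkWidget it renders into as a property */
      GtkWidget *video_widget;
      g_object_get (sink, "widget", &video_widget, NULL);

      GtkWidget *window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
      gtk_container_add (GTK_CONTAINER (window), video_widget);
      g_object_unref (video_widget);     /* container holds its own reference */
      gtk_widget_show_all (window);

      gst_element_set_state (pipeline, GST_STATE_PLAYING);
      gtk_main ();
      return 0;
    }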

GstPhysMemoryAllocator interface moved from -bad to -base

GstPhysMemoryAllocator is a marker interface for allocators with physical address backed memory.

Plugin removals

  • the sunaudio plugin was removed, since it couldn't ever have been built or used with GStreamer 1.0, but no one even noticed in all these years.

  • the schroedinger-based Dirac encoder/decoder plugin has been removed, as there is no longer any upstream or anyone else maintaining it. Seeing that it's quite a fringe codec it seemed best to simply remove it.

API removals

  • some MPEG video parser API in the API-unstable codecutils library in gst-plugins-bad was removed after having been deprecated for 5 years.

Miscellaneous changes

  • The video support library has gained support for a few new pixel formats:

    • NV16_10LE32: 10-bit variant of NV16, packed into 32bit words (plus 2 bits padding)
    • NV12_10LE32: 10-bit variant of NV12, packed into 32bit words (plus 2 bits padding)
    • GRAY10_LE32: 10-bit grayscale, packed in 32bit words (plus 2 bits padding)
  • decodebin, playbin and GstDiscoverer have seen stability improvements in corner cases such as shutdown while still starting up or shutdown in error cases (hat tip to the oss-fuzz project).

  • floating reference handling was inconsistent and has been cleaned up across the board, including annotations. This solves various long-standing memory leaks in language bindings, which e.g. often caused elements and pads to be leaked.

  • major gobject-introspection annotation improvements for large parts of the library API, including nullability of return types and function parameters, correct types (e.g. strings vs. filenames), ownership transfer, array length parameters, etc. This allows bigger parts of the GStreamer API to be safely used from dynamic language bindings (e.g. Python, JavaScript) and allows static bindings (e.g. C#, Rust, Vala) to auto-generate more API bindings without manual intervention.

OpenGL integration

  • The GStreamer OpenGL integration library has moved to gst-plugins-base and is now part of our stable API.

  • new Mesa3D GBM backend. On devices with working libdrm support, it is possible to use Mesa3D's GBM library to set up an EGL context directly on top of KMS. This makes it possible to use the GStreamer OpenGL elements without a windowing system if a libdrm- and Mesa3D-supported GPU is present.

  • Prefer the Wayland display over X11: as most Wayland compositors support XWayland, the X11 backend would previously get selected even when running on Wayland, so the Wayland display is now preferred.

  • gldownload can export dmabufs now, and glupload will advertise dmabuf as caps feature.

Tracing framework and debugging improvements

  • New memory ringbuffer based debug logger, useful for long-running applications or to retrieve diagnostics when encountering an error. The GStreamer debug logging system provides in-depth debug logging about what is going on inside a pipeline. When enabled, debug logs are usually written into a file, printed to the terminal, or handed off to a log handler installed by the application. However, at higher debug levels the volume of debug output quickly becomes unmanageable, which poses a problem in disk-space or bandwidth restricted environments or with long-running pipelines where a problem might only manifest itself after multiple days. In those situations, developers are usually only interested in the most recent debug log output. The new in-memory ringbuffer logger makes this easy: just install it with gst_debug_add_ring_buffer_logger() and retrieve logs with gst_debug_ring_buffer_logger_get_logs() when needed (see the sketch after this list). It is possible to limit the memory usage per thread and to set a timeout that determines how long messages are kept around. It was always possible to implement this in the application with a custom log handler of course; this just provides the functionality as part of GStreamer.

  • fakevideosink is a null sink for video data that advertises video-specific metas and behaves like a video sink. See above for more details.

  • gst_util_dump_buffer() prints the content of a buffer to stdout.

  • gst_pad_link_get_name() and gst_state_change_get_name() print pad link return values and state change transition values as strings.

  • The latency tracer has seen a few improvements: trace records now contain timestamps which is useful to plot things over time, and downstream synchronisation time is now excluded from the measured values.

  • Miniobject refcount tracing and logging was not entirely thread-safe; there were duplicates or missing entries at times. This has now been made reliable.

  • The netsim element, which can be used to simulate network jitter, packet reordering and packet loss, received new features and improvements: it can now also simulate network congestion using a token bucket algorithm. This can be enabled via the "max-kbps" property. Packet reordering can be disabled now via the "allow-reordering" property: Reordering of packets is not very common in networks, and the delay functions will always introduce reordering if delay > packet-spacing, so by setting "allow-reordering" to FALSE you guarantee that the packets are in order, while at the same time introducing delay/jitter to them. By using the new "delay-distribution" property the user can control how the delay applied to delayed packets is distributed: This is either the uniform distribution (as before) or the normal distribution; in addition there is also the gamma distribution which simulates the delay on wifi networks better.
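
A small sketch of how the ring buffer logger mentioned above might be used; the 1 MB per-thread limit and the debug threshold are arbitrary example values.

    /* Sketch: keep the most recent debug output in memory and dump it on
     * demand. The 1 MB per-thread limit and debug level are arbitrary. */
    #include <gst/gst.h>

    static void
    setup_ring_buffer_logging (void)
    {
      /* keep up to 1 MB of log output per thread, never expire by age */
      gst_debug_add_ring_buffer_logger (1024 * 1024, 0);
      gst_debug_set_default_threshold (GST_LEVEL_DEBUG);
    }

    static void
    dump_recent_logs (void)
    {
      gchar **logs = gst_debug_ring_buffer_logger_get_logs ();
      for (gchar **l = logs; l != NULL && *l != NULL; l++)
        g_printerr ("%s\n", *l);
      g_strfreev (logs);
    }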

Tools

  • gst-inspect-1.0 now prints pad properties for elements that have pad subclasses with special properties, such as compositor or audiomixer. This only works for elements that use the newly-added GstPadTemplate API or the gst_element_class_add_static_pad_template_with_gtype() convenience function to tell GStreamer about the special pad subclass.

  • gst-launch-1.0 now generates a GStreamer pipeline diagram (.dot file) whenever SIGHUP is sent to it on Linux/*nix systems.

  • gst-discoverer-1.0 can now analyse live streams such as rtsp:// URIs

GStreamer RTSP server

  • Initial support for RTSP protocol version 2.0 was added, which is to the best of our knowledge the first RTSP 2.0 implementation ever!

  • ONVIF audio backchannel support. This is an extension specified by ONVIF that allows RTSP clients (e.g. a control room operator) to send audio back to the RTSP server (e.g. an IP camera). Theoretically this could have been done also by using the RECORD method of the RTSP protocol, but ONVIF chose not to do that, so the backchannel is set up alongside the other streams. Format negotiation needs to be done out of band, if needed. Use the new ONVIF-specific subclasses GstRTSPOnvifServer and GstRTSPOnvifMediaFactory to enable this functionality.

  • The internal server streaming pipeline is now dynamically reconfigured on PLAY based on the transports needed. This means that the server no longer adds the pipeline plumbing for all possible transports from the start, but only as needed. This improves performance and memory footprint.

  • rtspclientsink has gained an "accept-certificate" signal for manually checking a TLS certificate for validity.

  • Fix keep-alive/timeout issue for certain clients using TCP interleave as transport who don't do keep-alive via some other method such as periodic RTSP OPTIONS requests. We now put net address metas on the packets from the TCP-interleaved stream, so the server can map RTCP packets to the right stream and handle them properly.

  • Language bindings improvements: in general there were quite a few improvements in the gobject-introspection annotations, but we also extended the permissions API which was not usable from bindings before.

  • Fix corner case issue where the wrong mount point was found when there were multiple mount points with a common prefix.

GStreamer VAAPI

  • Improved DMABuf usage, both upstream and downstream; the memory:DMABuf caps feature is also negotiated when the dmabuf-based buffer cannot be mapped onto user-space.

  • VA initialization on headless systems was fixed.

  • VA display sharing through GstContext among the pipeline has been improved, adding the possibility for the application to share its own VA display (external display) via the gst.vaapi.app.Display context.

  • VA display cache was removed.

  • libva's log messages are now redirected into the GStreamer log handler.

  • Decoders improved their upstream re-negotiation by avoiding re-instantiation of the internal decoder if the new stream caps are compatible with the previous ones.

  • When downstream doesn't support GstVideoMeta and the decoded frames don't have standard strides, they are copied onto system memory-based buffers.

  • The H.264 decoder has a low-latency property for live streams that don't conform to the H.264 specification but still require frames to be pushed downstream as soon as possible.

  • As part of the Google Summer of Code 2017, the H.264 decoder drops MVC and SVC frames when the base-only property is enabled.

  • Added support for libva-2.0 (VA-API 1.0).

  • H.264 and H.265 encoders handle Region-Of-Interest metas by adding a delta-qp for every rectangle within the frame specified by those metas.

  • Encoders for H.264 and H.265 set the media profile from the downstream caps.

  • H.264 encoder inserts an AU delimiter for each encoded frame when aud property is enabled (it is only available for certain drivers and platforms).

  • The H.264 encoder supports hierarchical P and B prediction modes.

  • All encoders handle a quality-level property, which is a number from 1 to 8, where a lower number means higher quality but slower processing, and vice versa.

  • VP8 and VP9 encoders support constant bit-rate mode (CBR).

  • VP8, VP9 and H.265 encoders support variable bit-rate mode (VBR).

  • Resurrected GstGLUploadTextureMeta handling for EGL backends.

  • H.265 encoder can configure its number of reference frames via the refs property.

  • Add H.264 encoder mbbrc property, which controls the macro-block bitrate as auto, on or off.

  • Add H.264 encoder temporal-levels property, to select the number of temporal levels to be included.

  • Add to H.264 and H.265 encoders the properties qp-ip and qp-ib, to handle the QP (quality parameter) difference between the I and P frames, and the I and B frames, respectively.

  • vaapisink was demoted to marginal rank on Wayland because COGL cannot display YUV surfaces.

More details in Víctor's blog post GStreamer VA-API 1.14: what’s new?.

GStreamer Editing Services and NLE

  • Handle crossfade in complex scenarios by using the new compositorpad::crossfade-ratio property

  • Added API to stop using proxies for clips in the timeline

  • Allow management of non-square pixel aspect ratios by letting applications deal with them in the way they want

  • Misc fixes around the timeline editing API

GStreamer validate

  • Handle running scenarios on live pipelines (in the "content sense", not the GStreamer one)

  • Implement RTSP support with a basic server based on gst-rtsp-server, and add RTSP 1.0 and 2.0 integration tests

  • Implement a plugin that allows users to implement configurable tests. It can currently check whether a particular element is added a configurable number of times in the pipeline. In the future this plugin should allow us to implement specific tests of any kind in a descriptive way

  • Add a verbosity configuration which behaves in a similar way to the gst-launch-1.0 verbose flag, allowing the same information to be output for any running pipeline when GstValidate is enabled.

  • Misc optimizations in the launcher, making the tests run much faster.

GStreamer C# bindings

  • Port to the meson build system, autotools support has been removed

  • Use a new GlibSharp version, set as a meson subproject

  • Update wrapped API to GStreamer 1.14

  • Removed the need for "glue" code

  • Provide a NuGet package

  • Misc API fixes

Build and Dependencies

  • the new WebRTC support in gst-plugins-bad depends on the GStreamer elements that ship as part of libnice, and libnice version 0.1.14 is required. The dtls and srtp plugins are also required.

  • gst-plugins-bad no longer depends on the libschroedinger Dirac codec library.

  • The srtp plugin can now also be built against libsrtp2.

  • some plugins and libraries have moved between modules, see the Plugin and library moves section above, and their respective dependencies have moved with them of course, e.g. the GStreamer OpenGL integration support library and plugin is now in gst-plugins-base, and mpg123, LAME and twoLAME based audio decoder and encoder plugins are now in gst-plugins-good.

  • Unify static and dynamic plugin interface and remove plugin specific static build option: Static and dynamic plugins now have the same interface. The standard --enable-static/--enable-shared toggle is sufficient. This allows building static and shared plugins from the same object files, instead of having to build everything twice.

  • The default plugin entry point has changed. This will only affect plugins that are recompiled against new GStreamer headers. Binary plugins using the old entry point will continue to work. However, plugins that are recompiled must have matching plugin names in GST_PLUGIN_DEFINE and filenames, as the plugin entry point for shared plugins is now deduced from the plugin filename. This means you can no longer have a plugin called foo living in a file called libfoobar.so or such; the plugin filename needs to match. This might cause problems with some external third party plugin modules when they get rebuilt against GStreamer 1.14. A minimal sketch of the naming convention follows below.
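
The sketch below illustrates the naming convention: a plugin registered as foo must live in a file whose name matches (e.g. libgstfoo.so). The plugin name, description, version, package and origin strings, as well as the PACKAGE define, are made up for this example.

    /* Sketch: with the new entry point, a plugin registered as "foo" has to
     * live in a file whose name matches, e.g. libgstfoo.so. */
    #include <gst/gst.h>

    /* GST_PLUGIN_DEFINE also expects a PACKAGE macro, normally provided by
     * the build system via config.h */
    #define PACKAGE "foo-example"

    static gboolean
    plugin_init (GstPlugin * plugin)
    {
      /* register elements here, e.g. with gst_element_register () */
      return TRUE;
    }

    GST_PLUGIN_DEFINE (GST_VERSION_MAJOR, GST_VERSION_MINOR,
        foo,                     /* plugin name: must match the filename */
        "An example plugin called foo",
        plugin_init, "1.14.0", "LGPL", "foo-example", "https://example.org")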

Note to packagers and distributors

A number of libraries, APIs and plugins moved between modules and/or libraries in different modules between version 1.12.x and 1.14.x, see the Plugin and library moves section above. Some APIs have seen minor ABI changes in the course of moving them into the stable APIs section.

This means that you should try to ensure that all major GStreamer modules are synced to the same major version (1.12 or 1.13/1.14) and can only be upgraded in lockstep, so that your users never end up with a mix of major versions on their system at the same time, as this may cause breakages.

Also, plugins compiled against >= 1.14 headers will not load with GStreamer <= 1.12 owing to a new plugin entry point (but plugin binaries built against older GStreamer versions will continue to load with newer versions of GStreamer of course).

There is also a small structure size related ABI breakage introduced in the gst-plugins-bad codecparsers library between version 1.13.90 and 1.13.91. This should "only" affect gstreamer-vaapi, so anyone who ships the release candidates is advised to upgrade those two modules at the same time.

Platform-specific improvements

Android

  • ahcsrc (Android camera source) does autofocus now

macOS and iOS

  • no major changes in macOS and iOS support, only bugfixes

Windows

  • The GStreamer wasapi plugin was rewritten and should not only be usable now, but in top shape and suitable for low-latency use cases. The Windows Audio Session API (WASAPI) is Microsoft's most modern method for talking with audio devices, and now that the wasapi plugin is up to scratch it is preferred over the directsound plugin. The ranks of the wasapisink and wasapisrc elements have been updated to reflect this. Further improvements include:

    • support for more than 2 channels

    • a new "low-latency" property to enable low-latency operation (which should always be safe to enable)

    • support for the AudioClient3 API which is only available on Windows 10: in wasapisink this will be used automatically if available; in wasapisrc it will have to be enabled explicitly via the "use-audioclient3" property, as capturing audio with low latency and without glitches seems to require setting the realtime priority of the entire pipeline to "critical", which cannot be done from inside the element, but has to be done in the application.

    • set realtime thread priority to avoid glitches

    • allow opening devices in exclusive mode, which provides much lower latency compared to shared mode where WASAPI's engine period is 10ms. This can be activated via the "exclusive" property.

    • Also see Nirbheek's blog post Low Latency Audio on Windows with GStreamer.

  • There are now GstDeviceProvider implementations for the wasapi and directsound plugins, so it's now possible to discover both audio sources and audio sinks on Windows via the GstDeviceMonitor API

  • debug log timestamps now have higher granularity, owing to g_get_monotonic_time() now being used as fallback in gst_util_get_timestamp(). Before, there would sometimes be 10-20 lines of debug log output sporting the same timestamp.

Contributors

Aaron Boxer, Adrián Pardini, Adrien SCH, Akinobu Mita, Alban Bedel, Alessandro Decina, Alex Ashley, Alicia Boya García, Alistair Buxton, Alvaro Margulis, Anders Jonsson, Andreas Frisch, Andrejs Vasiljevs, Andrew Bott, Antoine Jacoutot, Antonio Ospite, Antoni Silvestre, Anton Obzhirov, Anuj Jaiswal, Arjen Veenhuizen, Arnaud Bonatti, Arun Raghavan, Ashish Kumar, Aurélien Zanelli, Ayaka, Branislav Katreniak, Branko Subasic, Brion Vibber, Carlos Rafael Giani, Cassandra Rommel, Chris Bass, Chris Paulson-Ellis, Christoph Reiter, Claudio Saavedra, Clemens Lang, Cyril Lashkevich, Daniel van Vugt, Dave Craig, Dave Johnstone, David Evans, David Schleef, Deepak Srivastava, Dimitrios Katsaros, Dmitry Zhadinets, Dongil Park, Dustin Spicuzza, Eduard Sinelnikov, Edward Hervey, Enrico Jorns, Eunhae Choi, Ezequiel Garcia, fengalin, Filippo Argiolas, Florent Thiéry, Florian Zwoch, Francisco Velazquez, François Laignel, fvanzile, George Kiagiadakis, Georg Lippitsch, Graham Leggett, Guillaume Desmottes, Gurkirpal Singh, Gwang Yoon Hwang, Gwenole Beauchesne, Haakon Sporsheim, Haihua Hu, Håvard Graff, Heekyoung Seo, Heinrich Fink, Holger Kaelberer, Hoonhee Lee, Hosang Lee, Hyunjun Ko, Ian Jamison, James Stevenson, Jan Alexander Steffens (heftig), Jan Schmidt, Jason Lin, Jens Georg, Jeremy Hiatt, Jérôme Laheurte, Jimmy Ohn, Jochen Henneberg, John Ludwig, John Nikolaides, Jonathan Karlsson, Josep Torra, Juan Navarro, Juan Pablo Ugarte, Julien Isorce, Jun Xie, Jussi Kukkonen, Justin Kim, Lasse Laursen, Lubosz Sarnecki, Luc Deschenaux, Luis de Bethencourt, Marcin Lewandowski, Mario Alfredo Carrillo Arevalo, Mark Nauwelaerts, Martin Kelly, Matej Knopp, Mathieu Duponchelle, Matteo Valdina, Matt Fischer, Matthew Waters, Matthieu Bouron, Matthieu Crapet, Matt Staples, Michael Catanzaro, Michael Olbrich, Michael Shigorin, Michael Tretter, Michał Dębski, Michał Górny, Michele Dionisio, Miguel París, Mikhail Fludkov, Munez, Nael Ouedraogo, Neos3452, Nicholas Panayis, Nick Kallen, Nicola Murino, Nicolas Dechesne, Nicolas Dufresne, Nirbheek Chauhan, Ognyan Tonchev, Ole André Vadla Ravnås, Oleksij Rempel, Olivier Crête, Omar Akkila, Orestis Floros, Patricia Muscalu, Patrick Radizi, Paul Kim, Per-Erik Brodin, Peter Seiderer, Philip Craig, Philippe Normand, Philippe Renon, Philipp Zabel, Pierre Pouzol, Piotr Drąg, Ponnam Srinivas, Pratheesh Gangadhar, Raimo Järvi, Ramprakash Jelari, Ravi Kiran K N, Reynaldo H. Verdejo Pinochet, Rico Tzschichholz, Robert Rosengren, Roland Peffer, Руслан Ижбулатов, Sam Hurst, Sam Thursfield, Sangkyu Park, Sanjay NM, Satya Prakash Gupta, Scott D Phillips, Sean DuBois, Sebastian Cote, Sebastian Dröge, Sebastian Rasmussen, Sejun Park, Sergey Borovkov, Seungha Yang, Shakin Chou, Shinya Saito, Simon Himmelbauer, Sky Juan, Song Bing, Sreerenj Balachandran, Stefan Kost, Stefan Popa, Stefan Sauer, Stian Selnes, Thiago Santos, Thibault Saunier, Thijs Vermeir, Tim Allen, Tim-Philipp Müller, Ting-Wei Lan, Tomas Rataj, Tom Bailey, Tonu Jaansoo, U. Artie Eoff, Umang Jain, Ursula Maplehurst, VaL Doroshchuk, Vasilis Liaskovitis, Víctor Manuel Jáquez Leal, vijay, Vincent Penquerc'h, Vineeth T M, Vivia Nikolaidou, Wang Xin-yu (王昕宇), Wei Feng, Wim Taymans, Wonchul Lee, Xabier Rodriguez Calvar, Xavier Claessens, XuGuangxin, Yasushi SHOJI, Yi A Wang, Youness Alaoui,

... and many others who have contributed bug reports, translations, sent suggestions or helped testing.

Bugs fixed in 1.14

More than 800 bugs have been fixed during the development of 1.14.

This list does not include issues that have been cherry-picked into the stable 1.12 branch and fixed there as well; all fixes that ended up in the 1.12 branch are also included in 1.14.

This list also does not include issues that have been fixed without a bug report in bugzilla, so the actual number of fixes is much higher.

Stable 1.14 branch

After the 1.14.0 release there will be several 1.14.x bug-fix releases, which will contain bug fixes that have been deemed suitable for a stable branch; usually no new features or intrusive changes will be added in a bug-fix release. The 1.14.x bug-fix releases will be made from the git 1.14 branch, which is a stable branch.

1.14.0

1.14.0 was released on 19 March 2018.

1.14.1

The first 1.14 bug-fix release (1.14.1) was released on 17 May 2018.

This release only contains bugfixes and it should be safe to update from 1.14.0.

Noteworthy bugfixes in 1.14.1

  • GstPad: Fix race condition causing the same probe to be called multiple times
  • Fix occasional deadlocks on windows when outputting debug logging
  • Fix debug levels being applied in the wrong order
  • GIR annotation fixes for bindings
  • audiomixer, audioaggregator: fix some negotiation issues
  • gst-play-1.0: fix leaving stdin in non-blocking mode after exit
  • flvmux: wait for caps on all input pads before writing header even if source is live
  • flvmux: don't wake up the muxer unless there is data, fixes busy looping if there's no input data
  • flvmux: fix major leak of input buffers
  • rtspsrc, rtsp-server: revert to RTSP RFC handling of sendonly/recvonly attributes
  • rtpvrawpay: fix payloading with very large mtu sizes where everything fits into a single RTP packet
  • v4l2: Fix hard-coded enabled v4l2 probe on Linux/ARM
  • v4l2: Disable DMABuf for emulated formats when using libv4l2
  • v4l2: Always set colorimetry in S_FMT
  • asfdemux: Set stream-format field for H264 streams and handle H.264 in bytestream format
  • x265enc: Fix tagging of keyframes on output buffers
  • ladspa: Fix critical during plugin load on Windows
  • decklink: Fix COM initialisation on Windows
  • h264parse: fix re-use across pipeline stop/restart
  • mpegtsmux: fix force-keyframe event handling and PCR/PMT changes that would confuse some players with generated HLS streams
  • adaptivedemux: Support period change in live playlist
  • rfbsrc: Fix support for applevncserver and support NULL pool in decide_allocation
  • jpegparse: Fix APP1 marker segment parsing
  • h265parse: Make caps writable before modifying them, fixes criticals
  • fakevideosink: request an extra buffer if enable-last-sample is enabled
  • wasapisrc: Don't provide a clock based on WASAPI's clock
  • wasapi: Only use audioclient3 when low-latency, as it might otherwise glitch with slow CPUs or VMs
  • wasapi: Don't derive device period from latency time, should make it more robust against glitches
  • audiolatency: Fix wave detection in buffers and avoid bogus pts values while starting
  • msdk: fix plugin load on implementations with only HW support
  • msdk: dec: set framerate to the driver only if provided, not in 0/1 case
  • msdk: Don't set extended coding options for JPEG encode
  • rtponviftimestamp: fix state change function init/reset causing races/crashes on shutdown
  • decklink: fix initialization failure in windows binary
  • ladspa: Fix critical warnings during plugin load on Windows and fix dependencies in meson build
  • gl: fix cross-compilation error with viv-fb
  • qmlglsink: make work with eglfs_kms
  • rtspclientsink: Don't deadlock in preroll on early close
  • rtspclientsink: Fix client ports for the RTCP backchannel
  • rtsp-server: Fix session timeout when streaming data to client over TCP
  • vaapiencode: h264: find best profile in those available, fixing negotiation errors
  • vaapi: remove custom GstGL context handling, use GstGL instead. Fixes GL Context sharing with WebkitGtk on wayland
  • gst-editing-services: various fixes
  • gst-python: bump pygobject req to 3.8; fix GstPad.set_query_function(); dist autogen.sh and configure.ac in tarball
  • g-i: pick up GstVideo-1.0.gir from local build directory in GstGL build
  • g-i: update constant values for bindings
  • avoid duplicate symbols in plugins across modules in static builds
  • ... and many, many more!

Cerbero build tool and packaging changes in 1.14.1

Toolchain updates on iOS and Android necessitated a fairly large number of changes in our cerbero build tool used to create our binary packages for the various platforms we support:

  • Add support for Ubuntu 18.04 in cerbero
  • Fix generation of fat shared libraries on macOS
  • gnutls: also rename assembly functions on macOS/iOS to fix link errors
  • gnutls: fix assembly symbol names for Windows x86
  • openssl: fix linking on Android/armv7
  • openssl: fix linker issue with the Android NDK r16 binutils
  • ffmpeg: disable asm for Android x86 to fix issues when linking with apps
  • x264: disable asm for Android x86 to fix issues when linking with apps
  • gnutls: rename private symbols for armv8 and x86 so they don't conflict with openssl
  • mpg123: disable assembly on Android x86 to fix linker problems with relocations
  • Check built version while loading recipe and rebuild if needed
  • Fix packaging of libgcc_s_sjlj which was missing in Windows packages
  • Make libraries that cannot be found during the library search a fatal error, so we don't accidentally ship broken packages
  • Ship the proxy plugin, which was new in 1.14
  • Fix git commands accidentally pulling in locally built libraries and failing

Contributors to 1.14.1

Antonio Ospite, Aurélien Zanelli, Brendan Shanks, Carlos Rafael Giani, Edward Hervey, Emilio Pozuelo Monfort, Enrique Ocaña González, Garima Gaur, Georg Lippitsch, Guillaume Desmottes, Havard Graff, Hoonhee Lee, Hyunjun Ko, James Stevenson, Jan Alexander Steffens (heftig), Jan Schmidt, Joakim Johansson, Jun Xie, Kai Kang, Kirill Marinushkin, Mark Nauwelaerts, Matej Knopp, Mathieu Duponchelle, Matthew Waters, Matthias Fend, Michael Olbrich, Mikhail Fludkov, Nicolas Dufresne, Nirbheek Chauhan, Olivier Crête, Omar Akkila, Patrik Nilsson, Philippe Normand, Pierre Labastie, Sebastian Dröge, Seungha Yang, Sreerenj Balachandran, Stian Selnes, Takeshi Sato, Thibault Saunier, Tim-Philipp Müller, U. Artie Eoff, Víctor Manuel Jáquez Leal, Vivia Nikolaidou, Whoopie, Xabier Rodriguez Calvar, Xavier Claessens, Zeeshan Ali, and countless others.

List of bugs fixed in 1.14.1

For a full list of bugfixes see Bugzilla. Note that this is not the full list of changes. For the full list of changes please refer to the GIT logs or ChangeLogs of the particular modules.

1.14.2

The second 1.14 bug-fix release (1.14.2) was released on 20 July 2018.

This release only contains bugfixes and it should be safe to update from 1.14.x.

Noteworthy bugfixes in 1.14.2

  • asfdemux: Only send flush-stop event for flushing seeks
  • glcolorbalance: Support OES textures for input/passthrough, which avoids a possibly unnecessary extra texture copy on Android in the default GL path inside glimagesink
  • parsebin: Don't try to continue autoplugging a parser if we got raw caps
  • audiobasesrc: Round down segsize to an integer number of samples
  • scaletempo: Mark as Audio in classification
  • souphttpsrc: thread-safety fixes
  • v4l2bufferpool: Validate that capture buffers were queued, to detect when buffer import was refused by the driver
  • v4l2bufferpool: Only return EOS for M2M devices, not v4l2src, when a buggy driver sends an empty buffer
  • v4l2allocator: Fix userptr import
  • v4l2src: Try to avoid TRY_FMT when the camera is streaming, as some drivers don't like it
  • v4l2videoenc: Only renegotiate with upstream, which fixes use in a GstRtspServer pipeline
  • v4l2: many other fixes
  • pitch: fix latency reporting, and various other things
  • dvb: fix wrong (GPL) license headers in camconditionalaccess code
  • webrtc: Fix transportsendbin to fix spurious shut-down failures in webrtcbin if DTLS negotiation hasn't completed yet.
  • webrtc: Don't deadlock on blocked pads on shutdown
  • webrtcbin: copy sticky events on our ghostpads so users can use gst_pad_get_current_caps() to determine what to do with newly-added pads (see the sketch after this list)
  • webrtcbin: fix rtpstorage configuration on 32-bit systems
  • webrtcbin: implement support for FEC and RTX
  • gstplayer: Fix duration-changed CRITICAL warning if duration did not actually change
  • gstplayer: Avoid trying to join the player thread from itself
  • codecparsers: mpeg2 parsing fixes for zero-sized packets
  • wasapisink: fix a rounding error when calculating the buffer frame count
  • wasapisink: fix missing unlock in case IAudioClient_Start fails
  • wasapi: fix potential crash with MinGW
  • rtsp-server: fix race during udpsrc setup, avoiding pushing data on unlinked udpsrc pad
  • rtsp-server: fix waiting for multiple streams in rtspclientsink
  • gst-editing-services: group: Fix handling clips that are added to a layer
  • gst-editing-services: python binding fixes
  • gst-validate launcher: Allow retrieving coredumps from within flatpak
  • gst-validate launcher: Fix the --forever switch which was not stopping on error
  • vaapi: h264 encoder negotiation fixes
  • vaapi: fix issues with native EGL display
  • more GIR annotations fixes, especially for arrays
  • gstreamer-sharp bindings were updated for g-i annotation fixes in other modules
  • fuzzing fixes
  • memory leak fixes
  • build fixes:
    • build fixes for MSVC compiler
    • meson: Fix detection of glib-mkenums under MSYS2, plus other meson build fixes
    • Fix static build symbol redefinition errors (xvimage, gst-libav)
    • qmlgl: build fixes for conflicting declaration of type GLsync for non-android
    • gl: build fixes for missing EGLuint64KHR typedef
  • ... and many more!
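
The webrtcbin sticky-events change listed above means that newly added pads already carry their current caps. A minimal sketch of inspecting them from a "pad-added" handler (the handler name and the surrounding application code are hypothetical, not part of this release):

    #include <gst/gst.h>

    /* Hypothetical "pad-added" handler: because webrtcbin now copies sticky
     * events onto its ghostpads, the current caps are already available when
     * the new pad is announced. */
    static void
    on_pad_added (GstElement * webrtcbin, GstPad * pad, gpointer user_data)
    {
      GstCaps *caps = gst_pad_get_current_caps (pad);

      if (caps != NULL) {
        gchar *s = gst_caps_to_string (caps);
        g_print ("new pad %s with caps %s\n", GST_PAD_NAME (pad), s);
        g_free (s);
        gst_caps_unref (caps);
        /* ... plug a depayloader/decoder here based on the caps ... */
      } else {
        g_print ("new pad %s, caps not known yet\n", GST_PAD_NAME (pad));
      }
    }

    /* Usage (assuming 'webrtc' was created elsewhere, e.g. with
     * gst_element_factory_make ("webrtcbin", NULL)):
     *
     *   g_signal_connect (webrtc, "pad-added",
     *       G_CALLBACK (on_pad_added), NULL);
     */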

Contributors to 1.14.2

Alessandro Decina, Antoine Jacoutot, Brendan Shanks, Carlos Rafael Giani, Christoph Reiter, Edward Hervey, Göran Jönsson, Guillaume Desmottes, Hyunjun Ko, Iñigo Huguet, Jan Schmidt, Johan Bjäreholt, Louis-Francis Ratté-Boulianne, Lyon Wang, Marian Mihailescu, Mark Nauwelaerts, Mathieu Duponchelle, Matthew Waters, Michael Tretter, Nicolas Dufresne, Nirbheek Chauhan, Philipp Zabel, Roland Jon, Sebastian Dröge, Seungha Yang, Sreerenj Balachandran, Suhas Nayak, Thibault Saunier, Tim-Philipp Müller, Víctor Manuel Jáquez Leal, Vivia Nikolaidou, wangzq, and many others. Thank you all.

List of bugs fixed in 1.14.2

For a full list of bugfixes see Bugzilla. Note that this is not the full list of changes. For the full list of changes please refer to the GIT logs or ChangeLogs of the particular modules.

1.14.3

The third 1.14 bug-fix release (1.14.3) was released on 16 September 2018.

This release only contains bugfixes and it should be safe to update from 1.14.x.

Highlighted bugfixes in 1.14.3

  • opusenc: fix crash on 32-bit platforms
  • compositor: fix major buffer leak when doing crossfading on some but not all pads
  • wasapi: various fixes for wasapisrc and wasapisink regressions
  • x264enc: Set bit depth to fix "This build of x264 requires 8-bit depth. Rebuild to..." runtime errors with x264 version ≥ 153
  • audioaggregator, audiomixer: caps negotiation fixes
  • input-selector: latency handling fixes
  • playbin, playsink: audio visualization support fixes
  • dashdemux: fix possible crash if the stream uses neither the isobmff nor the isoff_ondemand profile
  • opencv: Fix build for opencv >= 3.4.2
  • h265parse: miscellaneous fixes backported from h264parse
  • pads: fix changing of pad offsets from inside pad probes
  • pads: ensure that pads are blocked for IDLE probes even when the probe callback is called directly from the streaming thread (see the sketch after this list)
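
As background for the IDLE-probe fix above: an IDLE probe callback may run immediately from the thread that adds the probe if the pad is already idle, or later from the streaming thread once it becomes idle, and the fix makes sure the pad is properly blocked in the latter case too. A minimal sketch of the usual pattern (callback and variable names are hypothetical):

    #include <gst/gst.h>

    /* Hypothetical IDLE probe callback: invoked from the caller's thread if
     * the pad is already idle, otherwise from the streaming thread once it
     * becomes idle. It is safe to unlink/relink elements here. */
    static GstPadProbeReturn
    relink_when_idle (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
    {
      /* ... perform the dynamic pipeline change ... */
      return GST_PAD_PROBE_REMOVE;      /* one-shot: remove the probe */
    }

    /* Usage:
     *   gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_IDLE,
     *       relink_when_idle, NULL, NULL);
     */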

Other noteworthy bugfixes in 1.14.3

  • queries: Set default values for position and duration query results
  • segment: make gst_segment_position_from_running_time_full() handle positions before the segment properly
  • aggregator: annotate GstAggregatorClass::update_src_caps for bindings
  • aggregator: Don't leak peer pad of inactive pads when (not) forwarding QoS events to them
  • baseparse: avg_bitrate calculation critical warning fix
  • typefind: improved flow return handling in pull mode, flushing is not an error
  • gl: Don't steal callers reference when setting non-floating elements via properties
  • gl: Also don't leak floating references to elements set via properties
  • tagdemux: Properly propagate gst_pad_pull_range() errors
  • aacparse: fix codec_data buffer leak
  • rtpgstpay: Add support for force-keyunit events
  • rtpL8pay: don't try to modify a read-only structure
  • rtpvp8pay, rtpvp9pay, rtpopuspay: Fix VP8/VP9/OPUS dual encoding name handling
  • rtp payloaders: Use running_time instead of PTS for config-interval calculations
  • qtdemux: Don't assert in prefill mode if a track has no samples at all
  • qmlgl: Ensure GL headers are included
  • v4l2src: fix the first input used always being used on subsequent runs
  • v4l2object: Only offer MMAP/DMABUF pool
  • v4l2object: stop V4L2 from zeroing extended colorimetry for non-mplane
  • v4l2object: improve colorspace handling for JPEG sources
  • splitmuxsink: fix handling of repeated timestamps and a leak if sink pads are not released explicitly
  • player: Set default position and duration value to GST_CLOCK_TIME_NONE
  • videoaggregator: Make sure to hold object lock while iterating sink pads
  • audiobuffersplit: improve resync handling and compensate better for accumulated errors
  • kmssink: add support for Xilinx DRM Driver, mxsfb-drm driver and the Allwinner DRM driver (sun4i-drm)
  • rsvg: Also accept </svg:svg> as an ending tag
  • ges: project: Compute relocation URIs in missing-uri signal
  • ges: formatter: Serialize Transition border and invert properties
  • ges: clip: Resync priorities when removing an effect

Contributors to 1.14.3

Christoph Reiter, Devarsh Thakkar, Edward Hervey, Gary Bisson, Iñigo Huguet, Jan Alexander Steffens (heftig), Jan Schmidt, Jerome Laheurte, Marcos Kintschner, Mathieu Duponchelle, Matthew Waters, Michael Olbrich, Nicolas Dufresne, Nirbheek Chauhan, Paul Kocialkowski, Philippe Normand, Philipp Zabel, Roland Jon, Sebastian Dröge, Seungha Yang, Thibault Saunier, Tim-Philipp Müller, Yuji Kuwabara, and many others. Thank you all.

List of bugs fixed in 1.14.3

For a full list of bugfixes see Bugzilla. Note that this is not the full list of changes. For the full list of changes please refer to the GIT logs or ChangeLogs of the particular modules.

1.14.4

The fourth 1.14 bug-fix release (1.14.4) was released on 2 October 2018.

This release only contains bugfixes and it should be safe to update from 1.14.x.

Highlighted bugfixes in 1.14.4

  • glviewconvert: wait and set the gl sync meta on buffers
  • glviewconvert: Copy composition meta from the primary buffer to both outputs
  • glcolorconvert: Don't copy overlay composition meta over to NULL outbufs
  • matroskademux: add functionality needed for MSE use case fixing youtube playback in epiphany/webkit-gtk
  • msdk: fix build on windows
  • opusenc: fix another crash on 32-bit x86 on windows (alignment issue in SSE optimisations)
  • osxaudio: add support for parsing more channel layouts
  • tagdemux: Use upstream GST_EVENT_STREAM_START (and stream-id) if present
  • vorbisdec: fix header handling regression: init decoder immediately once we have headers
  • wasapisink: recover from low buffer levels in shared mode
  • fix GstSegment unit test which would fail on some 32-bit x86 CPUs

Contributors to 1.14.4

Alicia Boya García, Christoph Reiter, Edward Hervey, Jan Schmidt, Matthew Waters, Nicola Murino, Nicolas Dufresne, Sebastian Dröge, Tim-Philipp Müller, Wangfei, and many others. Thank you all.

List of bugs fixed in 1.14.4

For a full list of bugfixes see Bugzilla. Note that this is not the full list of changes. For the full list of changes please refer to the GIT logs or ChangeLogs of the particular modules.

1.14.5

The fifth and likely last 1.14 bug-fix release (1.14.5) was released on 29 May 2019.

This release only contains bugfixes and it should be safe to update from 1.14.x.

Highlighted bugfixes in 1.14.5

GStreamer core
  • aggregator: take the pad lock around queue gap event removal
  • aggregator: don't leak gap buffer when out of segment
  • buffer: fix possible memory corruption in gst_buffer_foreach_meta() when removing metas (a minimal sketch of the removal pattern follows the bugfix list below)
  • bus: Make removing of signal/bus watches thread-safe
  • bus: Don't allow removing signal watches with gst_bus_remove_watch()
  • controlbinding: Check if the weak pointer was cleared before explicitly removing it
  • ptp clock: Wait for ANNOUNCE before selecting a master; increase tolerance for late follow-up and delay-resp
  • segment: Allow stop == -1 in gst_segment_to_running_time() and negative rate
  • g-i: annotations fixes
gst-plugins-base
  • audioconvert: fix endianness conversion for unpacked formats (e.g. S24_32BE)
  • audioringbuffer: Fix wrong memcpy address when reordering channels
  • decodebin2: Make sure to remove pad probes when freeing GstDecodeGroup
  • glviewconvert: fix output when a transformation matrix is used
  • glupload: prevent segfault when updating caps
  • gl/egl: Determine correct format on dmabuf import
  • glupload: dmabuf: be explicit about gl formats used
  • id3tag: validate the year from v1 tags before passing to GstDateTime
  • rtpbasepayload: fix sequence numbers when using buffer lists
  • rtspconnection: fix security issue, potential heap overflow (CVE-2019-9928)
  • rtspconnection: fix GError set over the top of a previous GError
  • rtspconnection: do not duplicate authentication headers
  • subparse: don't assert when failing to parse subrip timestamp
  • video: various convert sample frame fixes
  • video-converter: fix conversion from I420_10LE/BE, I420_12LE/BE, A420_10LE/BE to BGRA/RGBA which created corrupted output
  • video-format: Fix GBRA_10/12 alpha channel pixel strides
gst-plugins-good
  • flv: Use 8kHz sample rate for alaw/mulaw audio
  • flvdemux: Do not error out if the first added and chained pad is not linked
  • flvmux: try harder to make sure timestamps are always increasing
  • gdkpixbufdec: output a TIME segment which is what's expected for raw video
  • matroskademux: fix handling of MS ACM audio
  • matroska: fix handling of FlagInterlaced
  • pulsesink: Deal with not being able to convert a format to caps
  • rtph265depay, rtph264depay: aggregation packet marker handling fixes
  • rtpmp4gdepay: detect broken senders who send AAC with ADTS frames
  • rtprawdepay: keep buffer pool around when flushing/seeking
  • rtpssrcdemux: Forward serialized events to all pads
  • qmlglsink: Handle OPENGL header guard changes
  • qtdemux: fix track language code parsing; ignore corrupted CTTS box
  • qtmux: Correctly set tkhd width/height to the display size
  • splitmuxsink: various timecode meta handling fixes
  • splitmuxsink: make it work with audio-only encoders acting as muxers, e.g. wavenc
  • v4l2sink: fix pool-less allocation query handling
  • v4l2dec/enc: fix use after free when handling events
  • vpx: Fix build against libvpx 1.8
  • webmmux: allow resolutions above 4096
gst-plugins-ugly
  • sid: Fix cross-compilation by using AC_TRY_LINK instead of AC_TRY_RUN
  • x264: Only enable dynamic loading code for x264 before v253
gst-plugins-bad
  • assrender: fix disappearing subtitles when seeking back in time
  • decklinkvideosink: fix segfault when audiosink is closed before videosink
  • decklinkvideosrc: respect pixel format property even if mode is set to auto
  • d3dvideosink: Fix calculating buffer size of packed format; don't leak thread object
  • dtls: Don't abort on non-fatal issues, make work with newer OpenSSL versions
  • msdk: more robust error handling; fix Intel SDK libdir path
  • nvenc: Ensure drain all frames on finish; fix element reuse and clean up properly
  • openh264dec: Fix handling of errors when doing EOS
  • shmsrc: fix a crash due to a race condition when is-live is true
  • shmsink: fix possible (racy) deadlock on shutdown
  • siren: Fix invalid floating point operation
  • tsdemux: Skew correction improvements: use upstream DTS if set
  • wasapi: number of segments was always 2 (the absolute minimum) by accident
  • wasapi: Fix infinite loop when the device disappears
gst-libav
  • libav: Update internal snapshot to ffmpeg n3.4.6
  • avdemux: fix negative PTS if start_time is bigger than the timestamp
gst-rtsp-server
  • rtsp-client: Fix crash in close handler and remove timeout GSource on cleanup
  • rtsp-stream: Use cached address when allocating sockets
  • rtsp-media: Handle set state when preparing
  • rtsp-media: Fix race condition in finish_unprepare
  • rtsp-stream: Use seqnum-offset for rtpinfo
  • rtsp-stream: add source elements to the pipeline before activation for stream-status create message
gst-editing-services
  • Fix compilation with latest GLib
  • layer: Resort clips before syncing priorities
  • timeline: Better handle loading inconsistent timelines
gstreamer-vaapi
  • thread-safety and memory leak fixes
  • improve caps negotiation if downstream takes ANY caps
  • fix build with -DG_DISABLE_ASSERT
gst-omx
  • fix caps leak
cerbero
  • Add support for macOS 10.14, iOS 12.1, Fedora 29/30, and Linux Mint 19 (Tara)
  • Miscellaneous tarball download / error handling improvements
  • disable parallel builds by default on Windows
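
As background for the gst_buffer_foreach_meta() fix listed under GStreamer core above: metas can be removed from within the foreach callback by clearing the meta pointer. A minimal sketch of that pattern (the callback name is hypothetical, and the buffer is assumed to be writable):

    #include <gst/gst.h>

    /* Hypothetical callback that drops every meta of a given API type.
     * Setting *meta to NULL removes the current item from the buffer;
     * returning TRUE continues the iteration. */
    static gboolean
    drop_meta (GstBuffer * buffer, GstMeta ** meta, gpointer user_data)
    {
      GType api = (GType) GPOINTER_TO_SIZE (user_data);

      if ((*meta)->info->api == api)
        *meta = NULL;

      return TRUE;
    }

    /* Usage (buf must be writable):
     *   gst_buffer_foreach_meta (buf, drop_meta,
     *       GSIZE_TO_POINTER (some_meta_api_type));
     */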

Contributors to 1.14.5

Aaron Boxer, Adam Jackson, Aleix Conchillo Flaqué, Alexandru Băluț, Alicia Boya García, Andreas Frisch, Antonio Ospite, Arun Raghavan, Benjamin Berg, Brad Reitmeyer, Christopher Snowhill, Daniel Drake, Daniel Stone, Dardo D Kleiner, David Ing, Denis Nagorny, Edward Hervey, Erlend Eriksen, Florent Thiéry, Freyr666, Göran Jönsson, Guillaume Desmottes, Haihao Xiang, Haihua Hu, Havard Graff, He Junyan, Helmut Grohne, Ilya Smelykh, Jacek Tomaszewski, James Cowgill, Jan Alexander Steffens (heftig), Jan Schmidt, Johan Bjäreholt, Jordan Petridis, Josep Torra, Joshua M. Doe, Justin Kim, Kristofer Bjorkstrom, Lars Petter Endresen, Lars Wiréen, Linus Svensson, Lucas Stach, Maciej Wolny, Marc-André Lureau, Marc Leeman, Marcos Kintschner, Marco Trevisan (Treviño), Marouen Ghodhbane, Matej Knopp, Mathieu Duponchelle, Matthew Waters, Michael Olbrich, Michael Tretter, mrk501, Naveen Cherukuri, Nicola Murino, Nicolas Dufresne, Niels De Graef, Nirbheek Chauhan, okuoku, Olivier Crête, Patricia Muscalu, Per Forlin, Peter Körner, Philippe Normand, Philipp Zabel, Roland Jon, Russel Winder, Santiago Carot-Nemesio, Sebastian Dröge, Seungha Yang, Sjoerd Simons, Thiago Santos, Thibault Saunier, Tim-Philipp Müller, Tobias Ronge, Tomislav Tustonić, U. Artie Eoff, Víctor Manuel Jáquez Leal, Vincenzo Bono, Vivia Nikolaidou, Wangfei, Wim Taymans, Xabier Rodriguez Calvar, Xavier Claessens, Xiang, Haihao, Yeongjin Jeong, and many others. Thank you all!

List of bugs fixed in 1.14.5

For a full list of bugfixes see Bugzilla. Note that this is not the full list of changes. For the full list of changes please refer to the GIT logs or ChangeLogs of the particular modules.

During the release cycle, issue and patch tracking moved from Bugzilla to GitLab, so information about this release may be found in either of those two trackers.

MRs with milestone 1.14.5: https://gitlab.freedesktop.org/groups/gstreamer/-/merge_requests?scope=all&utf8=%E2%9C%93&state=all&milestone_title=1.14.5

Known Issues

  • The webrtcdsp element (which is unrelated to the newly-landed GStreamer webrtc support) is currently not shipped as part of the Windows binary packages due to a build system issue.

  • The gst-libav module in 1.14 will only build against older ffmpeg 3.x versions and won't build against the newly-released ffmpeg 4.0 (as in RPM Fusion for Fedora 28) due to API changes. Use the internal ffmpeg copy instead if you build using autotools. This is fixed in git master and the upcoming 1.16 release, but won't be backported to the 1.14 branch, as the change is rather intrusive and it is difficult to support both old and new APIs at the same time.

Schedule for 1.16

Our next major feature release will be 1.16, and 1.15 will be the unstable development version leading up to the stable 1.16 release. The development of 1.15/1.16 will happen in the git master branch.

1.16.0 was released on 19 April 2019 and is backwards-compatible to the stable 1.14, 1.12, 1.10, 1.8, 1.6, 1.4, 1.2 and 1.0 release series.


These release notes have been prepared by Tim-Philipp Müller with contributions from Sebastian Dröge, Sreerenj Balachandran, Thibault Saunier and Víctor Manuel Jáquez Leal.

License: CC BY-SA 4.0

