May 09, 2017

Víctor Jáquez: GStreamer Spring Hackfest 2017 & GStreamer 1.12

Greetings earthlings!

Two things:

One

GStreamer 1.12 is out! And with it, gstreamer-vaapi. Among other new features and improvements we have:

  • GstVaapiDisplay now inherits from GstObject, so the VA display logging messages are better and tracing the context sharing is more readable.
  • When uploading raw images into VA surfaces, vaDeriveImage is now tried first, improving upload performance when possible.
  • The decoders and the post-processor can now push dmabuf-based buffers downstream under certain conditions. For example:
    GST_GL_PLATFORM=egl gst-play-1.0 video-sample.mkv --videosink=glimagesink
  • Refactored the wrapping of VA surfaces into GStreamer memory, adding locking when mapping and unmapping, and many other fixes.
  • Now vaapidecodebin loads vaapipostproc dynamically. It is possible to avoid its usage with the environment variable GST_VAAPI_DISABLE_VPP=1.
  • Regarding encoders: they have primary rank again, since they can discover, at run-time, the color formats they can use for upstream raw buffers, and caps renegotiation is now possible. Also, the encoders push encoding info downstream via tags.
  • About specific encoders: a constant bit-rate encoding mode was added for VP8, and the H265 encoder handles the P010_10LE color format.
  • Regarding decoders: the flush operation has been improved, and the internal VA decoder is no longer recreated at each flush. There are also several improvements in the handling of H264 and H265 streams.
  • VAAPI plugins try to create their own GstGL context (when available) if they cannot find it in the pipeline, to figure out what type of VA Display they should create.
  • Regarding vaapisink for X11: if the backend reports that it is unable to render the current color format correctly, an internal VA post-processor is instantiated (if available) to convert the color format.

And

Two

GStreamer Spring Hackfest 2017 is in less than two weeks!

It is going to be held at the Igalia premises in A Coruña. Keep an eye on it 😉

by vjaquez at May 09, 2017 11:14 AM

May 08, 2017

Zeeshan Ali: Rust Memory Management

(Zeeshan Ali)

In light of my latest fascination with the Rust programming language, I've started giving small presentations about Rust at my office, since I'm not the only one at our company who is interested in it. My first presentation, in February, was a very general introduction to the language, but at that time I had not yet really used the language for anything real, so I was a complete novice and didn't have a very good idea of how memory management really works. While working on my gps-share project in my limited spare time, I came across quite a few issues related to memory management, but I overcame all of them with help from the kind folks at the #rust-beginners IRC channel and the small but awesome Rust-GNOME community.

Having learnt some essentials of memory management, I thought I'd share my knowledge and experience with folks at the office. The talk was not well attended due to conflicts with other meetings at the office, but the few folks who attended were very interested and asked some interesting and difficult questions (i.e. the perfect audience). One of the questions was whether I could put this up as a blog post, so here I am. :)

Basics


Let's start with some basics: In Rust,

  1. Stack allocation is preferred over heap allocation, and that's where everything is allocated by default.
  2. There are strict ownership semantics involved, so each value can have one and only one owner at any particular time.
  3. When you pass a value to a function, you move the ownership of that value to the function argument; similarly, when you return a value from a function, you pass the ownership of the return value to the caller.

Now these rules make Rust very safe, but at the same time, if you had no way to allocate on the heap or to share data between different parts of your code and/or threads, you couldn't get very far with Rust. So we're provided with mechanisms to (kinda) work around these very strict rules, without compromising the safety these rules provide. Let's start with a simple piece of code that would work fine in many other languages:

fn add_first_element(v1: Vec<i32>, v2: Vec<i32>) -> i32 {
    return v1[0] + v2[0];
}

fn main() {
    let v1 = vec![1, 2, 3];
    let v2 = vec![1, 2, 3];

    let answer = add_first_element(v1, v2);

    // We expect to still be able to use `v1` and `v2` here!
    println!("{} + {} = {}", v1[0], v2[0], answer);
}

This gives us an error from rustc:

error[E0382]: use of moved value: `v1`
--> sample1.rs:13:30
|
10 | let answer = add_first_element(v1, v2);
| -- value moved here
...
13 | println!("{} + {} = {}", v1[0], v2[0], answer);
| ^^ value used here after move
|
= note: move occurs because `v1` has type `std::vec::Vec<i32>`, which does not implement the `Copy` trait

error[E0382]: use of moved value: `v2`
--> sample1.rs:13:37
|
10 | let answer = add_first_element(v1, v2);
| -- value moved here
...
13 | println!("{} + {} = {}", v1[0], v2[0], answer);
| ^^ value used here after move
|
= note: move occurs because `v2` has type `std::vec::Vec<i32>`, which does not implement the `Copy` trait

What's happening is that we passed 'v1' and 'v2' to add_first_element() and hence passed their ownership to add_first_element() as well, so we can't use them afterwards. If Vec were a Copy type (like all primitive types), we wouldn't get this error, because Rust would copy the values for add_first_element() and pass those copies to it, as the sketch below illustrates.
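
As a quick illustration (a hypothetical example, not from the original talk): i32 implements Copy, so the arguments are copied rather than moved and remain usable in the caller.

fn add(a: i32, b: i32) -> i32 {
    return a + b;
}

fn main() {
    let a = 1;
    let b = 2;

    let answer = add(a, b);

    // `a` and `b` are still usable here: i32 implements `Copy`,
    // so add() received copies, not the originals.
    println!("{} + {} = {}", a, b, answer);
}

Vec<i32>, however, does not implement Copy. In this particular case the solution is easy: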

Borrowing


fn add_first_element(v1: &Vec<i32>, v2: &Vec<i32>) -> i32 {
    return v1[0] + v2[0];
}

fn main() {
    let v1 = vec![1, 2, 3];
    let v2 = vec![1, 2, 3];

    let answer = add_first_element(&v1, &v2);

    // We can use `v1` and `v2` here!
    println!("{} + {} = {}", v1[0], v2[0], answer);
}

This one compiles and runs as expected. What we did was to convert the arguments into reference types. References are Rust's way of borrowing ownership: while add_first_element() is running, it has borrowed 'v1' and 'v2', and the borrow ends when it returns, leaving ownership with main(). Hence this code works.
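
Borrowing also works for mutation: a &mut reference lets the callee modify the value while the caller keeps ownership. A small sketch (not part of the original post):

fn append_one(v: &mut Vec<i32>) {
    v.push(1);
}

fn main() {
    let mut v = vec![1, 2, 3];

    // `v` is mutably borrowed only for the duration of this call.
    append_one(&mut v);

    // We still own `v` here.
    println!("{:?}", v);
}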

While borrowing is very nice and very helpful, in the end it's temporary. The following code won't build:

struct Heli {
    reg: String
}

impl Heli {
    fn new(reg: String) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = "G-HONI".to_string();
    let heli = Heli::new(reg);

    println!("Registration {}", reg);
    heli.hover();
}

rustc says:

error[E0382]: use of moved value: `reg`
--> sample3.rs:20:33
|
18 | let heli = Heli::new(reg);
| --- value moved here
19 |
20 | println!("Registration {}", reg);
| ^^^ value used here after move
|
= note: move occurs because `reg` has type `std::string::String`, which does not implement the `Copy` trait

If String had the Copy trait implemented for it, this code would have compiled. But if efficiency is a concern at all for you (it is for Rust), you wouldn't want most values to be copied around all the time. We can't use a reference here, as Heli::new() above needs to keep the passed 'reg'. Also note that the issue here is not that 'reg' was passed to Heli::new() and used afterwards by Heli::hover(), but the fact that we tried to use 'reg' after we had given its ownership to the Heli instance through Heli::new().

I realize that the above code doesn't make use of borrowing, but if we were to make use of it, we'd have to declare lifetimes for the 'reg' field, and the code still wouldn't work because we want to keep 'reg' in our Heli struct. There is a better solution here:

Rc


use std::rc::Rc;

struct Heli {
    reg: Rc<String>
}

impl Heli {
    fn new(reg: Rc<String>) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = Rc::new("G-HONI".to_string());
    let heli = Heli::new(reg.clone());

    println!("Registration {}", reg);
    heli.hover();
}

This code builds and runs successfully. Rc stands for "Reference Counted", so putting data into this generic container adds reference counting to the data in question. Note that while you have to explicitly call Rc's clone() method to increment its refcount, you don't need to do anything to decrease it: each time an Rc reference goes out of scope, the count is decremented automatically, and when it reaches 0, the Rc container and its contained data are freed.
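
A small sketch (not from the original post) that makes the counting visible through Rc::strong_count():

use std::rc::Rc;

fn main() {
    let reg = Rc::new("G-HONI".to_string());
    assert_eq!(Rc::strong_count(&reg), 1);

    {
        // clone() only increments the refcount; the String itself is not copied.
        let reg2 = reg.clone();
        assert_eq!(Rc::strong_count(&reg), 2);
        println!("second reference: {}", reg2);
    } // `reg2` goes out of scope here, decrementing the count again.

    assert_eq!(Rc::strong_count(&reg), 1);
}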

Cool, Rc is super easy to use so we can just use it in all situations where we need shared ownership? Not quite! You can't use Rc to share data between threads. So this code won't compile:

use std::rc::Rc;
use std::thread;

struct Heli {
    reg: Rc<String>
}

impl Heli {
    fn new(reg: Rc<String>) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = Rc::new("G-HONI".to_string());
    let heli = Heli::new(reg.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });
    println!("Registration {}", reg);

    t.join().unwrap();
}

It results in:

error[E0277]: the trait bound `std::rc::Rc<std::string::String>: std::marker::Send` is not satisfied in `[closure@sample5.rs:22:27: 24:6 heli:Heli]`
--> sample5.rs:22:13
|
22 | let t = thread::spawn(move || {
| ^^^^^^^^^^^^^ within `[closure@sample5.rs:22:27: 24:6 heli:Heli]`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<std::string::String>`
|
= note: `std::rc::Rc<std::string::String>` cannot be sent between threads safely
= note: required because it appears within the type `Heli`
= note: required because it appears within the type `[closure@sample5.rs:22:27: 24:6 heli:Heli]`
= note: required by `std::thread::spawn`

The issue here is that to be able to share data between more than one thread, the data must be of a type that implements the Send trait. However, not only would implementing Send for all types be a very impractical solution, there are also performance penalties associated with implementing Send (which is why Rc doesn't implement it).

Introducing Arc


Arc stands for "Atomically Reference Counted" and it's the thread-safe sibling of Rc.

use std::sync::Arc;
use std::thread;

struct Heli {
    reg: Arc<String>
}

impl Heli {
    fn new(reg: Arc<String>) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = Arc::new("G-HONI".to_string());
    let heli = Heli::new(reg.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });
    println!("Registration {}", reg);

    t.join().unwrap();
}

This one works, and the only difference is that we used Arc instead of Rc. Cool, so now we have a very efficient but thread-unsafe way to share data between different parts of the code, as well as a thread-safe mechanism. We're done then? Not quite! This code won't work:

use std::sync::Arc;
use std::thread;

struct Heli {
    reg: Arc<String>,
    status: Arc<String>
}

impl Heli {
    fn new(reg: Arc<String>, status: Arc<String>) -> Heli {
        Heli { reg: reg,
               status: status }
    }

    fn hover(&self) {
        self.status.clear();
        self.status.push_str("hovering");
        println!("{} is {}", self.reg, self.status);
    }
}

fn main() {
    let reg = Arc::new("G-HONI".to_string());
    let status = Arc::new("".to_string());
    let mut heli = Heli::new(reg.clone(), status.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });
    println!("main: {} is {}", reg, status);

    t.join().unwrap();
}

This gives us two errors:

error: cannot borrow immutable borrowed content as mutable
--> sample7.rs:16:9
|
16 | self.status.clear();
| ^^^^^^^^^^^ cannot borrow as mutable

error: cannot borrow immutable borrowed content as mutable
--> sample7.rs:17:9
|
17 | self.status.push_str("hovering");
| ^^^^^^^^^^^ cannot borrow as mutable

The issue is that Arc is unable to handle mutation of data from different threads and hence doesn't give you a mutable reference to the contained data.

Mutex


For sharing mutable data between threads, you need another type in combination with Arc: Mutex. Let's make the above code work:

use std::sync::Arc;
use std::sync::Mutex;
use std::thread;

struct Heli {
    reg: Arc<String>,
    status: Arc<Mutex<String>>
}

impl Heli {
    fn new(reg: Arc<String>, status: Arc<Mutex<String>>) -> Heli {
        Heli { reg: reg,
               status: status }
    }

    fn hover(&self) {
        let mut status = self.status.lock().unwrap();
        status.clear();
        status.push_str("hovering");
        println!("thread: {} is {}", self.reg, status.as_str());
    }
}

fn main() {
    let reg = Arc::new("G-HONI".to_string());
    let status = Arc::new(Mutex::new("".to_string()));
    let heli = Heli::new(reg.clone(), status.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });

    println!("main: {} is {}", reg, status.lock().unwrap().as_str());

    t.join().unwrap();
}

This code works. Notice how you don't have to explicitly unlock the mutex after using it. Rust is all about scopes: when the guard returned by lock() goes out of scope, the mutex is automatically unlocked.
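
If you need to release the lock before the end of the scope, you can also drop the guard explicitly. A minimal sketch (not from the original post):

use std::sync::Mutex;

fn main() {
    let status = Mutex::new("".to_string());

    let mut guard = status.lock().unwrap();
    guard.push_str("hovering");
    // Explicitly release the lock before doing unrelated work.
    drop(guard);

    // Locking again works because the guard was dropped above;
    // without the drop() this second lock() would deadlock or panic.
    println!("main: status is {}", status.lock().unwrap().as_str());
}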

Other container types


Mutexes are rather expensive, and sometimes you have shared data between threads but not all threads are mutating it (all the time); that's where RwLock becomes useful. I won't go into details here, but it's almost identical to Mutex, except that threads can take read-only locks. Since it's safe to share non-mutable state between threads, this is a lot more efficient than threads blocking each other every time they access the data.
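
A minimal sketch of RwLock usage (not from the original post): multiple readers can hold the lock simultaneously, while a writer gets exclusive access.

use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let status = Arc::new(RwLock::new("parked".to_string()));

    let status_clone = status.clone();
    let t = thread::spawn(move || {
        // Any number of threads may hold read locks at the same time.
        println!("thread: status is {}", status_clone.read().unwrap().as_str());
    });

    {
        // A write lock is exclusive, just like Mutex::lock().
        let mut s = status.write().unwrap();
        s.clear();
        s.push_str("hovering");
    }

    t.join().unwrap();
    println!("main: status is {}", status.read().unwrap().as_str());
}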

Another container type I didn't mention above is Box. The basic use of Box is that it's a very generic and simple way of allocating data on the heap. It's typically used to turn an unsized type into a sized type; the module documentation has a simple example of that, and there is a small sketch below as well.
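
For instance, a trait object on its own has no size known at compile time, but boxing it gives a sized, heap-allocated value that can be stored in a struct or returned from a function. A sketch assuming a hypothetical Fly trait (not from the original post):

trait Fly {
    fn fly(&self);
}

struct Heli;

impl Fly for Heli {
    fn fly(&self) {
        println!("rotors spinning");
    }
}

// `Fly` alone is unsized; Box<Fly> is sized and lives on the heap.
fn make_flyer() -> Box<Fly> {
    Box::new(Heli)
}

fn main() {
    let flyer = make_flyer();
    flyer.fly();
}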

What about lifetimes


One of my colleagues who had had some experience with Rust was surprised that I didn't cover lifetimes in my talk. Firstly, I think the topic deserves a separate talk of its own. Secondly, if you make clever use of the container types described above, most often you don't have to deal with lifetimes. Thirdly, lifetimes in Rust are something I still struggle with each time I have to deal with them, so I feel a bit unqualified to teach others how they work.

The end


I hope you find some of the information above useful. If you are looking for other resources on learning Rust, the Rust book is currently your best bet. I am still a newbie at Rust so if you see some mistakes in this post, please do let me know in the comments section.

Happy safe hacking!

May 08, 2017 07:04 AM

May 04, 2017

GStreamer: GStreamer 1.12.0 stable release

(GStreamer)

The GStreamer team is pleased to announce the first release in the stable 1.12 release series. The 1.12 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Full release notes can be found here.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

May 04, 2017 04:00 PM

April 27, 2017

GStreamer: GStreamer 1.12.0 release candidate 2 (1.11.91)

(GStreamer)

The GStreamer team is pleased to announce the second release candidate of the stable 1.12 release series. The 1.12 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Full release notes will be provided with the 1.12.0 release, highlighting all the new features, bugfixes, performance optimizations and other important changes. An initial, unfinished version of the release notes can be found here already.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

April 27, 2017 03:00 PM

April 26, 2017

Sebastian Dröge: RTP for broadcasting-over-IP use-cases in GStreamer: PTP, RFC7273 for Ravenna, AES67, SMPTE 2022 & SMPTE 2110

(Sebastian Dröge)

It’s that time of year again where the broadcast industry gathers at NAB show, which seems like a good time to me to revisit some frequently asked questions about GStreamer‘s support for various broadcasting-related standards.

Even more so as at this year’s NAB there seems to be a lot of hype about the new SMPTE 2110 standard, which defines how to transport and synchronize live media streams over IP networks, and which fortunately (unlike many other attempts) is based on previously existing open standards like RTP.

While SMPTE 2110 is the new kid on the block here, there are various other standards based on similar technologies. There’s for example AES67 by the Audio Engineering Society for audio-only, Ravenna which is very similar, the slightly older SMPTE 2022 and VSF TR3/4 by the Video Services Forum.

Other standards, like MXF for storage of media (which has been supported by GStreamer for years), are also important in the broadcasting world, but let’s ignore these other use-cases here for now and focus on streaming live media.

Media Transport

All of these standards depend on RTP in one way or another, use PTP or similar services for synchronization and are either fully (as in the case of AES67/Ravenna) supported by GStreamer already or at least to a big part, with the missing parts being just some extensions to existing code.

There’s not really much to say here about the actual media transport as GStreamer has had solid support for RTP for a very long time and has a very flexible and feature-rich RTP stack that includes support for many optional extensions to RTP and is successfully used for broadcasting scenarios, real-time communication (e.g. WebRTC and SIP) and live-video streaming as required by security cameras for example.

Over the last months and years, many new features have been added to GStreamer’s RTP stack for various use-cases and the code was further optimized, and thanks to all that the amount of work needed for new standards based on RTP, like the aforementioned ones, is rather limited. For AES67, for example, no additional work was needed to support it.

The biggest open issue for the broadcasting-related standards currently is the need for further optimizations for high-resolution, high-framerate streaming of video. In these cases we currently run into performance problems due to the high number of packets per second, and some serious optimizations would be needed. However, there are already various ideas for how to improve this situation that are just waiting to be implemented.

Synchronization

I already wrote about PTP in GStreamer previously; it can be used for synchronizing media, and that support is merged and has been included since the 1.8 release. In addition to that, NTP is also supported since 1.8.

In theory other clocks could also be used in some scenarios, like clocks based on a GPS receiver, but that’s less common and not yet supported by GStreamer. Nonetheless all the infrastructure for supporting arbitrary clocks exists, so adding these when needed is not going to be much work.

Clock Signalling

One major new feature that was added in the last year, for the 1.10 release of GStreamer, was support for RFC7273. While support for PTP in theory allows you to synchronize media properly if both sender and receiver are using the same clock, what was missing before is a way to signal what this specific clock exactly is and what offsets have to be applied. This is where RFC7273 becomes useful, and why it is used as part of many of the standards mentioned before. It defines a common interface for specifying this information in the SDP, which commonly is used to describe how to set up an RTP session.

The support for that feature was merged for the 1.10 release and is readily available.

Help needed? Found bugs or missing features?

While the basics are all implemented in GStreamer, there are still various missing features for optional extensions of the aforementioned standards or even, in some cases, required parts of a standard. In addition to that, some optimizations may still be required, depending on your use-case.

If you run into any problems with the existing code, or need further features for the various standards implemented, just drop me a mail.

GStreamer is already used in the broadcasting world in various areas, but let’s together make sure that GStreamer can easily be used as a batteries-included solution for broadcasting use-cases too.

by slomo at April 26, 2017 04:01 PM

April 25, 2017

Christian Schaller: Red Hat job opening for Linux Graphics stack developer

(Christian Schaller)

So we have a new job available for someone interested in joining our team and working on improving the Linux graphics stack. The focus of this job will be on GPU-compute-related work, but you should also expect to spend time on improving the graphics driver stack in general. We are looking for someone at the Principal Engineer level, but even if you don’t feel you are quite at that level yet, I recommend you apply: to be fair, people with the kind of experience we are looking for are few and far between, so in the end there is a chance we will hire two more junior developers instead if we have candidates with the right profile.

We are quite flexible on working location for this job, so for the right candidate working remotely is definitely a possibility. And of course if you are interested in joining us at one of our offices that is an option too; for instance, we have existing team members working out of our Boston (USA), Brno (Czech Republic), Brisbane (Australia) and Munich (Germany) offices.

GPU Compute is rapidly growing in importance and use so this is your chance to be in the middle of it and work for what I personally think is one of the best companies in the world to work for.

So be sure to submit an application through the Red Hat hiring portal.

by uraeus at April 25, 2017 05:53 PM

April 10, 2017

Zeeshan Ali: GNOME ❤ Rust Hackfest in Mexico

(Zeeshan Ali)
While I'm known as a Vala fanboy in GNOME, I've tried to stress time and again that I see Vala as more a practical solution than an ideal one. "Safe programming" has always been something that intrigued me, having dealt with numerous crashes and other hard-to-debug runtime issues in the past. So when I first heard of Rust some years back, it got me super excited, but it was not exactly stable and there was no integration with GNOME libraries or D-Bus, so it was not at all a viable option for developing desktop code. Lately (in the past 2 years) things have changed significantly. Not only do we have Rust 1.0, but we also have crates that provide integration with GNOME libraries and D-Bus. On top of that, some of us took steps to start converting some C code into Rust, and many of us started seriously talking with Rust hackers to make Rust a first-class programming language for GNOME.

To make things really go forward, we decided to arrange a hackfest, which took place last week at the Red Hat offices in Mexico City. The event was a big success in my opinion. The actual work done and started during the hackfest aside, it brought two communities much closer together, and we learnt quite a lot from each other in a very short amount of time. The main topics at the hackfest were:
  • GObject-introspection consumption by Rust.
  • GObject creation from Rust.
  • Better out of the box Rust support in GNOME Builder
  • GMainLoop and Tokio integration
  • D-Bus bindings
While most folks were focused on the first three, and I did participate in discussions on all these topics (except for Builder, of which I don't know anything), I spent most of my time looking into the last one. D-Bus is widely used in the automotive industry these days, and since I serve that industry it made sense, aside from my personal interest in D-Bus. We established (some of it before the hackfest) that to make Rust attractive to C and Vala developers, we need to provide:
  1. Syntactic sugar for making D-Bus coding simple

    Very similar to what Vala offers. Antoni already had a project, dbus-macros, that targets this goal through the use of Rust's (powerful) macro system. So I spent a lot of time fixing and improving the dbus-macros crate. Having Antoni and other Rust experts in the same room greatly helped me get around some very hard-to-decipher compiler issues. I found out (the hard way) that while rustc is very good at spotting errors, it fails miserably at giving you the proper context when it comes to macros. I complained enough about this to the Mozilla folks that I'm sure they'll be looking into fixing that experience at some point in the near future. :)

    We also contacted the author of the dbus crate, David Henningsson, over e-mail about a few D-Bus-related subjects (more below), including this one. (I was surprised to find out that he also lives in Sweden.) His opinion was that we should probably be using procedural macros for this. I agree with him, except that procedural macros are not yet in stable Rust. So for now, I decided to continue with the current approach of the project.

    During the hackfest, I became the maintainer of the dbus-macros crate since the first thing I did was to reduce the very small amount of code by 70%. Next, I created a backlog for myself and worked my way through it one issue at a time. I'm going to continue with that.

  2. Asynchronous D-Bus methods

    While the ability to make D-Bus method calls asynchronously from clients is very important (you don't want to block the UI for your IPC), it would also be very nice for services to be able to handle method calls asynchronously. Brian Anderson from Mozilla was working on this during the hackfest. His approach was to hack the dbus crate to add an async API through the use of the tokio crate. I spent most of the second day of the hackfest sitting next to Brian for some peer-programming. The author of tokio, Alex Crichton, sitting next to us, helped us a lot in understanding the tokio API. In the end, Brian submitted a working proof of concept for client-side async calls, which will hopefully provide a very good basis for David's actual implementation.

  3. Code generation from D-Bus introspection XML

    With both GLib and Qt having provided utilities to generate code for handling D-Bus for a decade now, most projects doing D-Bus make use of this. My intention was to look into this during the hackfest, but just before it, I found out that David had not only already started this work in the dbus crate, but also that his approach is exactly what I'd have gone for. So while I decided not to work on this, I did have lengthy (electronic) conversations with David about how to consolidate code generation with dbus-macros.

    Ideally, the API of the generated code should be very similar to the one you'd manually create using dbus-macros, to make it easy for developers to switch from one approach to the other. But since David and I didn't agree with the current dbus-macros approach, I kind of gave up on this goal, at least for now. Once procedural macros stabilize, there is a good chance we will change dbus-macros (though it'll be a completely new version or maybe even a different crate) to make use of them, and we can revisit consolidating code generation and dbus-macros.
A few weeks prior to the event, I decided to create a new project, gps-share. The aim is to provide the ability to share your (standalone) GPS device from your laptop/desktop with other devices on the network, and at the same time add standalone GPS device support to Geoclue (without any new feature code in Geoclue). I decided to write it in Rust for a few reasons: one of them being my desire to learn enough about the language before the event (I hadn't written any serious/complicated code in Rust before), and another was to have an actual test case for my D-Bus adventures (it's supposed to talk to Avahi on D-Bus). I'm glad that I did, since I encountered a few issues with dbus-macros when using them in gps-share, and the awesome Mozilla folks were able to help me figure them out very quickly. Otherwise it would have taken me a very long time to figure out the issues.




On the last day of the hackfest, after a delicious lunch, we decided to go for a long stroll around Mexico City and hang out in the park, where we had more interesting conversations, about life, the universe and everything (including Rust and GNOME).

After the hackfest, I stayed around for 3 more days. On Saturday, I mostly hung out with Federico, Christian, Antoni and Joaquín. We walked around the city center and watched Federico and Joaquín being interviewed by the Rancho Electronico folks. I was really excited to see that they use GNOME for their desktop and GStreamer for streaming. The guy handling the streaming was very glad to meet someone with GStreamer experience.

On Sunday, I rented a car and went for a hike at Tepoztlán with Felipe. Driving in Mexico wasn't easy, so having a Mexican with me helped a lot.


And on Monday, we drove to the Sun pyramid.


I would like to thank both the GNOME Foundation and my employer, Pelagicore, for sponsoring my participation in this event.


April 10, 2017 07:11 PM

GStreamer: GStreamer 1.12.0 release candidate 1 (1.11.90, binaries)

(GStreamer)

Pre-built binary images of the 1.12.0 release candidate 1 of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

April 10, 2017 02:00 PM

April 07, 2017

GStreamer: GStreamer 1.12.0 release candidate 1 (1.11.90)

(GStreamer)

The GStreamer team is pleased to announce the first release candidate of the stable 1.12 release series. The 1.12 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Full release notes will be provided with the 1.12.0 release, highlighting all the new features, bugfixes, performance optimizations and other important changes.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

April 07, 2017 02:00 PM

April 06, 2017

Christian Schaller: You get what you pay for so start paying for your media

(Christian Schaller)

Warning! This is not a directly Linux/Tech related blog post (it is not the only thing I care about in this world :).

One thing that has been on my mind for a while is the state of journalism. The quality of journalism seems to have been declining over the last decade, and I think it is clear that the new internet-driven expectation that news content is free for the consumer is a big part of the explanation. We all know that newspapers and TV news teams have seen their staff cut as advertising revenue has not been strong enough to keep staffing up. And in my opinion advertising is in itself a horrible way to finance something like news and information, as it drives a lot of unwanted behaviour, both in terms of avoiding critical journalism that might drive away advertisers and in terms of an intensive drive for ‘clicks’ per news article that often comes at the expense of accuracy and makes news very scandal-driven.

So I have come to believe over the last few years that if we want to see quality journalism and a healthy democracy, we need to move away from free news content, accept that we get what we pay for, and start paying for our news again. This is true for mainstream media, but also for topical media like tech media.

So as a start I began paying for some of my most-used news sites last year. I am now a paying subscriber to the Economist, which is one of the best sources of quality news in my opinion, and on the tech side I am a paying subscriber to Phoronix (I am also a paying subscriber to LWN through work). Anyway, I feel strongly enough about this to write this blog and hope that other people reading it agree with my thinking here and start paying for the content they enjoy, be that through subscriptions or Patreons or similar. And maybe we can be part of a process to change the expectation and understanding of the value of well-funded independent media. Let's help make news something that is made to inform us as readers and not something that is made to help someone sell something to us.

by uraeus at April 06, 2017 05:30 PM

Christian Schaller: Welcoming Ubuntu to GNOME and Wayland

(Christian Schaller)

So as most of you probably know, Mark Shuttleworth just announced that Ubuntu will be switching to GNOME 3 and Wayland again. So, on behalf of the Red Hat Desktop and Fedora teams, I would like to welcome them and say that we look forward to continuing to work with great Canonical and Ubuntu people like Allison Lortie and Robert Ancell on projects of shared interest around GNOME, Wayland and hopefully Flatpak.

It is worth mentioning that even as we have been competing with Unity and Ubuntu, we have also been collaborating with them, most recently by working with them to integrate the features they wanted from GNOME Software, like the user reviews; but of course, now that we share a bigger set of technologies, collaboration will be even easier.

I am personally happy to see this convergence of efforts happening, because I have for a long time felt that the general level of investment in the Linux desktop has not been great enough to justify the plethora of Linux desktops out there. Now that we have reached a position where Canonical, Endless, Red Hat and Suse again share one desktop technology stack, with consulting companies like Centricular, CodeThink, Collabora and Igalia helping push parts of the stack forward, we are at least all pulling in the same direction.
This change should also make life easier for ISVs, who now have a clearer target if they want to integrate their UI with the Linux desktop, as ‘the Linux desktop’ becomes a more meaningful term with this change.

And to state the obvious, we will continue our effort around Fedora Workstation to continue to lead on innovation and engineering and setting the direction for where Desktop Linux goes in the future.

by uraeus at April 06, 2017 03:16 PM

April 03, 2017

Christian Schaller: Hacker News feedback on what they want from their Desktop – We got it

(Christian Schaller)

So there is a thread on Hacker News based on a question from a Canonical employee asking for feedback on what people want from the next version of Ubuntu. I always try to read such threads even when they are not about Fedora or Red Hat. In fact I often read such articles and threads about non-Linux systems too, to help understand what people are looking for and thus enable us to prioritize what we do with Fedora Workstation even better.
Fedora Workstation

Over the last few years I do feel we managed to nail down what the major pain points are and cross them out one by one, or get people assigned to work on them. So a lot of the items people asked for in that thread we already have in Fedora Workstation, or already have in our roadmap. So I thought it would be nice to write them up and maybe encourage people to take a look at Fedora Workstation if you haven’t done so already. The list below is my attempt to go through the long thread and pick up important and recurring topics; I hope I got most of them, but if I missed something feel free to add a comment and I will try to answer.

1. Handling of DPI scaling and HiDPI
This has been something we have been working on for quite a while. I think we were the first distribution to implement general HiDPI, and we put a lot of engineering time into updating Wayland and GNOME to make it happen. That said, things are not perfect yet, but we are working on resolving the remaining issues. Jonas Ådahl and Rui Matos are currently trying to resolve the two main issues we still see. The first item is non-integer UI scaling. Currently we only offer integer scaling, meaning that we only offer 2x scaling. This is too much, however, so we are working on a solution to offer fractional scaling, like 1.5 for instance. We are certain to have that ready for Fedora Workstation 27, but there is a small hope we can finalize it already for Fedora Workstation 26. The other item is dealing with applications relying on XWayland, because they do not support DPI scaling across two or more monitors, unlike native Wayland applications. We are dealing with that in two ways, one being working with upstreams to get their applications Wayland-native, like the work we have been doing with LibreOffice and Firefox. We are also trying to come up with a scaling solution for applications using XWayland, but we haven’t been able to come up with a solution there yet.

2. Multitouch gestures like 3-finger swipe to change workspace.
This is another item we have put significant effort into. Over the last few years we made sure that we went from almost no touch support in the desktop to now supporting touch throughout the stack. The one big remaining item that was holding things like proper gestures back is that until kernel 4.12 is out, the Synaptics touchpads are using PS/2. This causes only 2 touches to be reported, which is not great when you want 3-finger gestures. With the new kernel, we will be using a different bus for those Synaptics touchpads, and we will have proper 5-finger support. Benjamin Tissoires on our team has spent 4 years getting this code upstream, but it’s finally here. We plan on backporting this code to Fedora Workstation 26. Of course application developers will need to make use of the infrastructure in their applications for this feature to be fully realized everywhere.
Multitouch

3. Battery life
This is something we realize is a major issue and it has been on our agenda for a long time. It is a really hard issue to resolve because it is tied into a lot of things outside of our control, like the hardware used and in some cases third-party drivers. That said, as many of you might know, we recently set up a Laptop team here inside the bigger Red Hat desktop team, and battery life is one of their top priorities. Christian Kellner is our point man on battery life and he has taken over the GNOME Battery Bench tool that was originally created by Owen Taylor when we started looking at battery life. He is currently working on improving GNOME Battery Bench and talking to hardware vendors to figure out what we can do. We are also actively speaking with NVidia to ensure that we can provide good battery life for hybrid graphics users when the binary NVidia driver is installed. We hope to agree with them on interfaces that should allow us to provide top-notch battery life for such systems, but we are beholden to changes in the binary drivers to make that happen, so it is also an example of the limits of what we can do on our own here.
GNOME Battery bench

4. UEFI issues
There were people on the Hacker News thread talking about issues with UEFI. Once again this is an area where we have a dedicated engineer assigned to UEFI, making sure it works great. In fact Peter Jones, who is our UEFI point man, is on the UEFI standards committee, doing ongoing work to ensure the standard is open-source friendly and well supported by Linux. It is also worth mentioning that we created the Linux Vendor Firmware Service to make updating UEFI firmware an easy process. So if you see firmware updates offered in GNOME Software for your laptop or other devices, that is because of the work we put into this service. We expect to have most of the major vendors signed up by the end of the year, so if your system is currently not supported that is hopefully a temporary thing. So this is both something that works well under Fedora and RHEL due to having someone dedicated to the effort, and it is another example of us doing the heavy lifting to make things actually happen.
UEFI firmware updates

5. We got Wayland
A lot of people in the thread asked about Wayland support and well, we got Wayland! And this is another area where we dedicated serious engineering resources to it and are continuing to do so. For instance in addition to the above mentioned multi-DPI system work we are working on items such as HDR (High dynamic range) and next generation hybrid graphics support in Wayland. We are also working with NVidia to ensure their binary driver works well with Wayland.
Wayland Graphics

6. Something like Redshift
Some time ago we picked up on the growing popularity of tools such as Redshift and f.lux and this was another often repeated request in that Hacker News thread. Well we once again invested our resources into this and thus in the newly released GNOME 3.24 there is built in support for this feature, called Night Light. We drove this feature work and it will of course be available alongside GNOME 3.24 in Fedora Workstation 26.
GNOME Night Light

7. Improved GPU driver update
This is another item we are spending significant time and resources on. We have a team dedicated to working on the Linux graphics stack, which includes people like the graphics subsystem kernel maintainer and RADV creator Dave Airlie, Nouveau maintainer Ben Skeggs, core X, Mesa and Wayland dev Adam Jackson, Freedreno creator and maintainer Rob Clark, and more. This team is pushing the Linux graphics stack forward alongside their colleagues at Intel, AMD and NVidia. One thing we have recently been working on, for instance, is dealing with the NVidia binary driver, which has been a pain for a long time due to the file-level conflict with Mesa. We didn’t want to do a workaround or hack, so what we did was work with NVidia on their glvnd proposal to make that a reality. This included supporting glvnd in Mesa in addition to the NVidia driver, but also working with the OS-level tools to ensure fallbacks and autodetection worked fine. We already have the basics of glvnd support in Fedora and are polishing it up. Hans de Goede, who took over that work from Adam Jackson, has recently been working with the fine folks at rpmfusion and negativo17 to make sure we have some good packages available taking full advantage of his work, thus enabling easy install and upgrade of these drivers. We are also planning to start offering a COPR with the latest and greatest Mesa drivers going forward, to ensure you can always have the latest drivers available if you want to test and try them out.
NVidia driver install

8. Improved printer support
Even in this digital world printing is still important, and thus we have people dedicated to this task too. Marek Kasik is working on ensuring we keep CUPS working well, and Felipe Borges recently wrote a blog entry talking about the redesigned printer control panel. So this is another area we are spending serious resources on and continuously trying to improve.
New Printer panel

9. Improve Bluetooth
Bastien Nocera on our team is probably the person who has done the most to make sure desktop Bluetooth works at all. We decided to boost that effort by having Christian Kellner work on this too, so he has been working on patches for various Bluetooth-related issues, with Bastien providing guidance and code review. We are also working on coming up with some kind of Bluetooth testing harness to allow us to catch regressions more easily and verify support on new hardware. Christian’s current focus is improving the handling of Bluetooth audio.

Summary
If you are contemplating giving Fedora a try, I think the items above illustrate one thing very strongly: how many of these issues we are the primary force behind. So by using Fedora you are not only getting access to these improvements first, with some assurance that the integration work has been done right, but you are also supporting the effort of moving these technologies forward, and putting yourself in a position to more directly interact with the engineers working on these and a long slew of other important technologies in the desktop and beyond. And our efforts are not just limited to writing code, like for example our current effort to clear the legal hurdles blocking Linux systems from supporting various media codecs. So if you haven’t already, I strongly suggest you go to the Get Fedora website and grab our convenient Fedora installer or an ISO image. And as I said initially, if you have other pressing items I didn’t cover here, feel free to post a comment and I will be happy to try to answer any questions I get.

by uraeus at April 03, 2017 06:34 PM

March 30, 2017

Sebastian Dröge: Writing GStreamer Elements in Rust (Part 4): Logging, COWs and Plugins

(Sebastian Dröge)

This is part 4, the older parts can be found here: part 1, part 2 and part 3

It’s been quite a while since the last update, so I thought I should write about the biggest changes since then, even if they’re mostly refactoring. They nonetheless show how Rust is a good match for writing GStreamer plugins.

Apart from actual code changes, also the code was relicensed from the LGPL-2 to a dual MIT-X11/Apache2 license to make everybody’s life a bit easier with regard to static linking and building new GStreamer plugins on top of this.

I’ll also speak about all this and more at RustFest.EU 2017 in Kiev on the 30th of April, together with Luis.

The next steps after all this will be to finally make the FLV demuxer feature-complete, for which all the base-work is already done now.

Logging

One thing that was missing so far and made debugging problems always a bit annoying was the missing integration with the GStreamer logging infrastructure. Adding println!() everywhere just to remove them again later gets boring after a while.

The GStreamer logging infrastructure is based, like many other solutions, on categories in which you log your messages and levels that describe the importance of the message (error, warning, info, …). Logging can be disabled at compile time, up to a specific level, and can also be enabled/disabled at runtime for each category to a specific level, and performance impact for disabled logging should be close to zero. This now has to be mapped somehow to Rust.

During last year’s “24 days of Rust” in December, slog was introduced (see this also for some overview of how slog is used). And it seems like the perfect match here, due to the ability to implement new “output backends” (called a Drain in slog) and its very low performance impact. So how logging works now is that you create a Drain per GStreamer debug category (which will then create the category if needed), and all logging to that Drain goes directly to GStreamer:

// The None parameter is a GStreamer Element, which allows the logging system to
// print the element name and other things on the GStreamer side
// The 0 is for defining a color for the logging in that category
let logger = Logger::root(GstDebugDrain::new(None,
                                             "mycategory",
                                             0,
                                             "Some description"),
                          None);
debug!(logger, "Some output with a number {}", 1);

With lazy_static we can then make sure that the Drain is only created once and can be used from multiple places.
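
A sketch of how that could look (assuming the GstDebugDrain type and the Logger::root call from the snippet above; the CAT static and something_noisy() are just illustrative names, and the slog and lazy_static macros are assumed to be imported):

#[macro_use]
extern crate lazy_static;

lazy_static! {
    // Created lazily on first use, then shared by every call site in the plugin.
    static ref CAT: Logger = Logger::root(GstDebugDrain::new(None,
                                                              "mycategory",
                                                              0,
                                                              "Some description"),
                                          None);
}

fn something_noisy() {
    // Used exactly like the local logger above.
    debug!(CAT, "Some output with a number {}", 1);
}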

All the implementation for the Drain can be found here, and it’s all rather straightforward plumbing. The interesting part here however is that slog makes sure that the message string and all its formatting arguments (the integer in the above example) are passed down to the Drain without doing any formatting. As such we can skip the whole formatting step if the category is not enabled or its level is too low, which gives us almost zero-cost logging for the cases when it is disabled. And of course slog also allows disabling logging up to a specific level at compile time via cargo’s features feature, making it really zero-cost if disabled at compile time.

Safe & simple Copy-On-Write

In GStreamer, buffers and similar objects are inheriting from a base class called GstMiniObject. This base class provides infrastructure for reference counting, copying (cloning) of the objects and a dynamic (at runtime, not to be confused with Rust’s COW type) Copy-On-Write mechanism (writable access requires a reference count of 1, or a copy has to be made). This is very similar to Rust’s Arc, which for a contained type that implements Clone provides the make_mut() and get_mut() functions that work the same way.

Now we unfortunately can’t use Arc directly here for wrapping the GStreamer types, as the reference counting is already done inside GStreamer, and adding a second layer of reference counting on top is not going to make things work better. So there’s now a GstRc, which provides more or less the same API as Arc and wraps structs that implement the GstMiniObject trait. The latter provides GstRc with functions for getting the raw pointer, swapping the raw pointer and creating new instances from a raw pointer. The actual structs for buffers and other types don’t do any reference counting or other instance handling, and only have unsafe constructors. The general idea here is that they will never exist outside a GstRc, which can then provide you with (mutable or not) references to them.

With all this we now have a way to let Rust do the reference counting for us and enforce the writability rules of the GStreamer API automatically without leaving any chance of doing things wrong. Compared to C where you have to do the reference counting yourself and could accidentally try to modify a non-writable (reference count > 1) object (which would give an assertion), this is a big improvement.

And as a bonus this is all completely without overhead: all that is passed around in the Rust code is (once compiled) the raw C pointer of the objects, and the function calls map directly to the C functions too. Let’s take an example:

// This gives a GstRc<Buffer>
let mut buffer = Buffer::new_from_vec(vec![1, 2, 3, 4]).unwrap();

{ // A new block to keep the &mut Buffer scope (and mut borrow) small
  // This would fail (return None) if the buffer was not writable
  let buffer_ref = buffer.get_mut().unwrap();
  buffer_ref.set_pts(Some(1));
}

// After this the reference count will be 2
let mut buffer_copy = buffer.clone();

{
  // buffer.get_mut() would return None, the below creates a copy
  // of the buffer instead, which makes it writable again
  let buffer_copy_ref = buffer.make_mut().unwrap();
  buffer_copy_ref.set_pts(Some(2));
}

// Access to Buffer functions that only require a &Buffer can
// be done directly thanks to the Deref trait
assert_ne!(buffer.get_pts(), buffer_copy.get_pts());

After reading this code you might ask why DerefMut is not implemented in addition, which would then do make_mut() internally if needed and would allow getting around the extra method call. The reason for this is that make_mut() might do an (expensive!) copy, and as such DerefMut could do a copy implicitly without the code having any explicit indication that a copy might happen here. I would be worried that it could cause non-obvious performance problems.

The last change I’m going to write about today is that the repository was completely re-organized. There is now a base crate and separate plugin crates (e.g. gst-plugin-file). The former is a normal library crate and contains some C code and all the glue between GStreamer and Rust, the latter don’t contain a single line of C code (and no unsafe code either at this point) and compile to a standalone GStreamer plugin.

The only tricky bit here was generating the plugin entry point from pure Rust code. GStreamer requires a plugin to export a symbol with a specific name, which provides access to a description struct. As the struct also contains strings, and generating const static strings with ‘\0’ terminator is not too easy, this is still a bit ugly currently. With the upcoming changes in GStreamer 1.14 this will become better, as we can then just export a function that can dynamically allocate the strings and return the struct from there.

All the boilerplate for creating the plugin entry point is hidden by the plugin_define!() macro, which can then be used as follows (and you’ll understand what I mean with ugly ‘\0’ terminated strings then):

plugin_define!(b"rsfile\0",
               b"Rust File Plugin\0",
               plugin_init,
               b"1.0\0",
               b"MIT/X11\0",
               b"rsfile\0",
               b"rsfile\0",
               b"https://github.com/sdroege/rsplugin\0",
               b"2016-12-08\0");

As a side-note, handling multiple crates next to each other is very convenient with the workspace feature of cargo and the "build --all", "doc --all" and "test --all" commands since 1.16.

by slomo at March 30, 2017 12:34 PM

March 22, 2017

Christian Schaller: Another media codec on the way!

(Christian Schaller)

One of the things we are working hard on currently is ensuring you have the codecs you need available in Fedora Workstation. Our main avenue for doing this is looking at the various codecs out there and trying to determine if the intellectual property situation allows us to start shipping all or parts of the technologies involved. This was how we were able to start shipping mp3 playback support in Fedora Workstation 25. Of course in cases where this is obviously not possible, we have things like the agreement with our friends at Cisco allowing us to offer H264 support using their licensed codec, which is how OpenH264 started being available in Fedora Workstation 24.

As you might imagine clearing a codec for shipping is a slow and labour intensive process with lawyers and engineers spending a lot of time reviewing stuff to figure out what can be shipped when and how. I am hoping to have more announcements like this coming out during the course of the year.

So I am very happy to announce today that we are now working on packaging the codec known as AC3 (also known as A52) for Fedora Workstation 26. The name AC3 might not be very well known to you, but AC3 is part of a set of technologies developed by Dolby and marketed as Dolby Surround. This means that if you have video files with surround sound audio, it is most likely something we can play back with an AC3 decoder. AC3/A52 is also used for surround sound TV broadcasts in the US, and it is the audio format used by some Sony and Panasonic video cameras.

We will be offering AC3 playback in Fedora Workstation 26, and we are looking into options for offering an encoder. To be clear, there is nothing stopping us from offering an encoder apart from finding an implementation that can be packaged and shipped with Fedora with a reasonable amount of effort. The most well-known open source implementation we know about is the one found in ffmpeg/libav, but extracting a single codec to ship from ffmpeg or libav is a lot of work and not something we currently have the resources to do. We found another implementation called aften, but that seems to have been unmaintained for years; still, we will look at it to see if it could be used.
But if you are interested in AC3 encoding support, we would love it if someone started working on a standalone AC3 encoder we could ship, be that by picking up maintainership of Aften, splitting out AC3 encoding from libav or ffmpeg, or writing something new.

If you want to learn more about AC3 the best place to look is probably the Wikipedia page for Dolby Digital or the a52 ATSC audio standard document for more of a technical deep dive.

by uraeus at March 22, 2017 05:02 PM

March 18, 2017

Jean-François Fortin TamDefence against the Dark Arts involves controlling your hardware

In light of the Vault 7 documents leak (and the rise to power of Lord Voldemort this year), it might make sense to rethink just how paranoid we need to be.  Jarrod Carmichael puts it quite vividly:

I find the general surprise… surprising. After all, this is in line with what Snowden told us years ago, which was already in line with what many computer geeks thought deep down inside for years prior. In the good words of monsieur Crête circa 2013, the CIA (and to an extent the NSA, FBI, etc.) is a spy agency. They are spies. Spying is what they’re supposed to do! 😁

Well, if these agencies are really on to you, you’re already in quite a bit of trouble to begin with. Good luck escaping them, other than living in an embassy or airport for the next decade or so. But that doesn’t mean the repercussions of their technological recklessness—effectively poisoning the whole world’s security well—are not something you should ward against.

It’s not enough to just run FLOSS apps. When you don’t control the underlying OS and hardware, you are inherently compromised. It’s like driving over a minefield with a consumer-grade Hummer while dodging rockets (at least use a hovercraft or something!) and thinking “Well, I’m not driving a Ford Pinto!” (but see this post where Todd Weaver explains the implications much more eloquently—and seriously—than I do).

Considering the political context we now find ourselves in, pushing for privacy and software freedom has never been more relevant, as Karen Sandler pointed out at the end of the year. This is why I’m excited that Purism’s work on coreboot is coming to fruition and that it will be neutralizing the Intel Management Engine on its laptops, because this is finally providing an option for security-concerned people other than running exotic or technologically obsolete hardware.

by nekohayo at March 18, 2017 02:00 PM

March 17, 2017

Víctor JáquezGStreamer VAAPI 1.11.x (development branch)

Greetings GstFolks!

Last month the unstable release 1.11.2 of GStreamer hit the streets, and I would like to share with you all a quick heads-up of what we are working on in gstreamer-vaapi, since there is a lot of new stuff:

  1. GstVaapiDisplay inherits from GstObject

    GstVaapiDisplay is a wrapper for VADisplay. Before, it was a custom C structure shared across the pipeline through the GstContext mechanism. Now it is a GObject-based object, which can be queried, introspected and, perhaps later on, exposed in a separate library.

  2. Direct rendering and upload

    Direct rendering and upload are mechanisms based on using vaDeriveImage to upload a raw image into a VASurface, or to download a VASurface into a raw image, which is faster than exporting the VASurface to a VAImage.

    Nonetheless we have found some issues with direct rendering on new Intel hardware (Skylake and above), and we are still assessing whether to keep it as the default.

  3. Improve the GstValidate pass rate

    GstValidate provides a battery of tests for the whole of GStreamer; sadly, using gstreamer-vaapi, the battery didn’t report the same pass rate as without it. Though we still have some issues with vaapisink that might need to be tackled in VA API, the pass rate has increased a lot.

  4. Refactor the GstVaapiVideoMemory

    We have completely refactored the internals of the VAAPI video memory (related to the work done for direct download and upload). We have also added locks when mapping and unmapping, to avoid race conditions.

  5. Support dmabuf sharing with downstream

    gstreamer-vaapi already had support for sharing dmabuf-based buffers with upstream (e.g. cameras), but now it is also capable of sharing dmabuf-based buffers with downstream sinks that can import them (e.g. glimagesink under supported EGL).

  6. Support compilation with meson

    Meson is a new build system supported in GStreamer, alongside autotools, and now it is also supported by gstreamer-vaapi.

  7. Headless rendering improvements

    There have been a couple of improvements in the DRM backend for vaapisink, for headless environments.

  8. Wayland backend improvements

    There have also been improvements in the Wayland backend for vaapisink and GstVaapiDisplay.

  9. Dynamically reports the supported raw caps

    Now the elements query the VA backend at run-time to know which color formats it supports, so the source or sink caps are negotiated correctly, avoiding possible error conditions (like negotiating an unsupported color space). This has been done for encoders, decoders and the post-processor.

  10. Encoders enhancements

    We have improved the encoders a lot, adding new features such as constant bit-rate support for VP8, the handling of stream metadata through tags, etc.

And many, many more changes, improvements and fixes. But there is still a long road to the stable release (1.12) with many pending tasks and bugs to tackle.

Thanks a bunch to Hyunjun Ko, Julien Isorce, Scott D Phillips, Stirling Westrup, etc. for all their work.

Also, Intel Media and Audio For Linux was accepted in the Google Summer Of Code this year! If you are willing to face this challenge, you can browse the list of ideas to work on, not only in gstreamer-vaapi, but in the driver, or other projects surrounding VAAPI.

Finally, do not forget these dates: 20th and 21st of May @ A Coruña (Spain), where the GStreamer Spring Hackfest is going to take place. Sign up!

by vjaquez at March 17, 2017 04:53 PM

March 15, 2017

Andy Wingoguile 2.2 omg!!!

(Andy Wingo)

Oh, good evening my hackfriends! I am just chuffed to share a thing with yall: tomorrow we release Guile 2.2.0. Yaaaay!

I know in these days of version number inflation that this seems like a very incremental, point-release kind of a thing, but it's a big deal to me. This is a project I have been working on since soon after the release of Guile 2.0 some 6 years ago. It wasn't always clear that this project would work, but now it's here, going into production.

In that time I have worked on JavaScriptCore and V8 and SpiderMonkey and so I got a feel for what a state-of-the-art programming language implementation looks like. Also in that time I ate and breathed optimizing compilers, and really hit the wall until finally paging in what Fluet and Weeks were saying so many years ago about continuation-passing style and scope, and eventually came through with a solution that was still CPS: CPS soup. At this point Guile's "middle-end" is, I think, totally respectable. The backend targets a quite good virtual machine.

The virtual machine is still a bytecode interpreter for now; native code is a next step. Oddly my journey here has been precisely opposite, in a way, to An incremental approach to compiler construction; incremental, yes, but starting from the other end. But I am very happy with where things are. Guile remains very portable, bootstrappable from C, and the compiler is in a good shape to take us the rest of the way to register allocation and native code generation, and performance is pretty ok, even better than some natively-compiled Schemes.

For a "scripting" language (what does that mean?), I also think that Guile is breaking nice ground by using ELF as its object file format. Very cute. As this seems to be a "Andy mentions things he's proud of" segment, I was also pleased with how we were able to completely remove the stack size restriction.

high fives all around

As is often the case with these things, I got the idea for removing the stack limit after talking with Sam Tobin-Hochstadt from Racket and the PLT group. I admire Racket and its makers very much and look forward to stealing from working with them in the future.

Of course the ideas for the contification and closure optimization passes are in debt to Matthew Fluet and Stephen Weeks for the former, and Andy Keep and Kent Dybvig for the latter. The intmap/intset representation of CPS soup itself is highly indebted to the late Phil Bagwell, to Rich Hickey, and to Clojure folk; persistent data structures were an amazing revelation to me.

Guile's virtual machine itself was initially heavily inspired by JavaScriptCore's VM. Thanks to WebKit folks for writing so much about the early days of Squirrelfish! As far as the actual optimizations in the compiler itself, I was inspired a lot by V8's Crankshaft in a weird way -- it was my first touch with fixed-point flow analysis. As most of yall know, I didn't study CS, for better and for worse; for worse, because I didn't know a lot of this stuff, and for better, as I had the joy of learning it as I needed it. Since starting with flow analysis, Carl Offner's Notes on graph algorithms used in optimizing compilers was invaluable. I still open it up from time to time.

While I'm high-fiving, large ups to two amazing support teams: firstly to my colleagues at Igalia for supporting me on this. Almost the whole time I've been at Igalia, I've been working on this, for about a day or two a week. Sometimes at work we get to take advantage of a Guile thing, but Igalia's Guile investment mainly pays out in the sense of keeping me happy, keeping me up to date with language implementation techniques, and attracting talent. At work we have a lot of language implementation people, in JS engines obviously but also in other niches like the networking group, and it helps to be able to transfer hackers from Scheme to these domains.

I put in my own time too, of course; but my time isn't really my own either. My wife Kate has been really supportive and understanding of my not-infrequent impulses to just nerd out and hack a thing. She probably won't read this (though maybe?), but it's important to acknowledge that many of us hackers are only able to do our work because of the support that we get from our families.

a digression on the nature of seeking and knowledge

I am jealous of my colleagues in academia sometimes; of course it must be this way, that we are jealous of each other. Greener grass and all that. But when you go through a doctoral program, you know that you push the boundaries of human knowledge. You know because you are acutely aware of the state of recorded knowledge in your field, and you know that your work expands that record. If you stay in academia, you use your honed skills to continue chipping away at the unknown. The papers that this process reifies have a huge impact on the flow of knowledge in the world. As just one example, I've read all of Dybvig's papers, with delight and pleasure and avarice and jealousy, and learned loads from them. (Incidentally, I am given to understand that all of these are proper academic reactions :)

But in my work on Guile I don't actually know that I've expanded knowledge in any way. I don't actually know that anything I did is new and suspect that nothing is. Maybe CPS soup? There have been some similar publications in the last couple years but you never know. Maybe some of the multicore Concurrent ML stuff I haven't written about yet. Really not sure. I am starting to see papers these days that are similar to what I do and I have the feeling that they have a bit more impact than my work because of their medium, and I wonder if I could be putting my work in a more useful form, or orienting it in a more newness-oriented way.

I also don't know how important new knowledge is. Simply being able to practice language implementation at a state-of-the-art level is a valuable skill in itself, and releasing a quality, stable free-software language implementation is valuable to the world. So it's not like I'm negative on where I'm at, but I do feel wonderful talking with folks at academic conferences and wonder how to pull some more of that into my life.

In the meantime, I feel like (my part of) Guile 2.2 is my master work in a way -- a savepoint in my hack career. It's fine work; see A Virtual Machine for Guile and Continuation-Passing Style for some high level documentation, or many of these bloggies for the nitties and the gritties. OKitties!

getting the goods

It's been a joy over the last two or three years to see the growth of Guix, a packaging system written in Guile and inspired by GNU stow and Nix. The laptop I'm writing this on runs GuixSD, and Guix is up to some 5000 packages at this point.

I've always wondered what the right solution for packaging Guile and Guile modules was. At one point I thought that we would have a Guile-specific packaging system, but one with stow-like characteristics. We had problems with C extensions though: how do you build one? Where do you get the compilers? Where do you get the libraries?

Guix solves this in a comprehensive way. From the four or five bootstrap binaries, Guix can download and build the world from source, for any of its supported architectures. The result is a farm of weirdly-named files in /gnu/store, but the transitive closure of a store item works on any distribution of that architecture.

This state of affairs was clear from the Guix binary installation instructions that just have you extract a tarball over your current distro, regardless of what's there. The process of building this weird tarball was always a bit ad-hoc though, geared to Guix's installation needs.

It turns out that we can use the same strategy to distribute reproducible binaries for any package that Guix includes. So if you download this tarball, and extract it as root in /, then it will extract some paths in /gnu/store and also add a /opt/guile-2.2.0. Run Guile as /opt/guile-2.2.0/bin/guile and you have Guile 2.2, before any of your friends! That pack was made using guix pack -C lzip -S /opt/guile-2.2.0=/ guile-next glibc-utf8-locales, at Guix git revision 80a725726d3b3a62c69c9f80d35a898dcea8ad90.

(If you run that Guile, it will complain about not being able to install the locale. Guix, like Scheme, is generally a statically scoped system; but locales are dynamically scoped. That is to say, you have to set GUIX_LOCPATH=/opt/guile-2.2.0/lib/locale in the environment, for locales to work. See the GUIX_LOCPATH docs for the gnarlies.)

Alternately of course you can install Guix and just guix package -i guile-next. Guix itself will migrate to 2.2 over the next week or so.

Welp, that's all for this evening. I'll be relieved to push the release tag and announcements tomorrow. In the meantime, happy hacking, and yes: this blog is served by Guile 2.2! :)

by Andy Wingo at March 15, 2017 10:56 PM

March 07, 2017

Zeeshan AliGDP meets GSoC

(Zeeshan Ali)
Are you a student? Passionate about Open Source? Want your code to run on the next generation of automobiles? You're in luck! Genivi Development Platform will be participating in Google Summer of Code this summer and you are welcome to participate. We have collected a bunch of ideas for what would be a good 3-month project for a student, but you're more than welcome to suggest your own project. The ideas page also has instructions on how to get started with GDP.

We look forward to your participation!

March 07, 2017 07:08 PM

March 06, 2017

Andy Wingoit's probably spam

(Andy Wingo)

Greetings, peoples. As you probably know, these words are served to you by Tekuti, a blog engine written in Scheme that uses Git as its database.

Part of the reason I wrote this blog software was that from the time when I was using Wordpress, I actually appreciated the comments that I would get. Sometimes nice folks visit this blog and comment with information that I find really interesting, and I thought it would be a shame if I had to disable those entirely.

But allowing users to add things to your site is tricky. There are all kinds of potential security vulnerabilities. I thought about the ones that were important to me, back in 2008 when I wrote Tekuti, and I thought I did a pretty OK job on preventing XSS and designing-out code execution possibilities. When it came to bogus comments though, things worked well enough for the time. Tekuti uses Git as a log-structured database, and so to delete a comment, you just revert the change that added the comment. I added a little security question ("what's your favorite number?"; any number worked) to prevent wordpress spammers from hitting me, and I was good to go.

Sadly, what was good enough in 2008 isn't good enough in 2017. In 2017 alone, some 2000 bogus comments made it through. So I took comments offline and painstakingly went through and separated the wheat from the chaff while pondering what to do next.

an aside

I really wondered why spammers bothered though. I mean, I added the rel="external nofollow" attribute on links, which should prevent search engines from granting relevancy to the spammer's links, so what gives? Could be that all the advice from the mid-2000s regarding nofollow is bogus. But it was definitely the case that while I was adding the attribute to commenter's home page links, I wasn't adding it to links in the comment. Doh! With this fixed, perhaps I will just have to deal with the spammers I have and not even more spammers in the future.

i digress

I started by simply changing my security question to require a number in a certain range. No dice; bogus comments still got through. I changed the range; could it be the numbers they were using were already in range? Again the bogosity continued undaunted.

So I decided to break down and write a bogus comment filter. Luckily, Git gives me a handy corpus of legit and bogus comments: all the comments that remain live are legit, and all that were ever added but are no longer live are bogus. I wrote a simple tokenizer across the comments, extracted feature counts, and fed that into a naive Bayesian classifier. I finally turned it on this morning; fingers crossed!

My trials at home show that if you train the classifier on half the data set (around 5300 bogus comments and 1900 legit comments) and then run it against the other half, I get about 6% false negatives and 1% false positives. The feature extractor interns sequences of 1, 2, and 3 tokens, and doesn't have a lower limit for number of features extracted -- a feature seen only once in bogus comments and never in legit comments is a fairly strong bogosity signal; as you have to make up the denominator in that case, I set it to indicate that such a feature is 99.9% bogus. A corresponding single feature in the legit set without appearance in the bogus set is 99% legit.

Of course with this strong of a bias towards precise features of the training set, if you run the classifier against its own training set, it produces no false positives and only 0.3% false negatives, some of which were simply reverted duplicate comments.

It wasn't straightforward to get these results out of a Bayesian classifier. The "smoothing" factor that you add to both numerator and denominator was tricky, as I mentioned above. Getting a useful tokenization was tricky. And the final trick was even trickier: limiting the significant-feature count when determining bogosity. I hate to cite Paul Graham but I have to do so here -- choosing the N most significant features in the document made the classification much less sensitive to the varying lengths of legit and bogus comments, and less sensitive to inclusions of verbatim texts from other comments.
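
For illustration, here is a minimal Python sketch of this kind of classifier. It is not tekuti's actual Scheme code: the 0.999 and 0.01 caps correspond to the 99.9% / 99% figures above, while the tokenizer, the value of N and the decision threshold are illustrative stand-ins for the tuning described.

import math
import re
from collections import Counter

def features(text):
    """Extract 1-, 2- and 3-token sequences as features."""
    tokens = re.findall(r"\w+|\S", text.lower())
    feats = []
    for n in (1, 2, 3):
        feats.extend(" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return feats

class BogusFilter:
    def __init__(self, bogus_texts, legit_texts):
        self.bogus = Counter(f for t in bogus_texts for f in features(t))
        self.legit = Counter(f for t in legit_texts for f in features(t))

    def p_bogus(self, feat):
        """Per-feature bogosity, with asymmetric caps for features seen on only one side."""
        b, l = self.bogus[feat], self.legit[feat]
        if b and not l:
            return 0.999
        if l and not b:
            return 0.01
        if not b and not l:
            return 0.5
        return b / (b + l)

    def classify(self, text, n_significant=15, threshold=0.9):
        # Keep only the N features furthest from 0.5, which makes the score
        # less sensitive to the length of the comment.
        probs = sorted((self.p_bogus(f) for f in set(features(text))),
                       key=lambda p: abs(p - 0.5), reverse=True)[:n_significant]
        # Combine the per-feature probabilities in log space.
        log_b = sum(math.log(p) for p in probs)
        log_l = sum(math.log(1.0 - p) for p in probs)
        return 1.0 / (1.0 + math.exp(log_l - log_b)) > threshold

# Train on a labelled corpus and test a new comment.
filt = BogusFilter(["buy cheap pills http://spam.example"],
                   ["thanks, nice post, this really helped me understand"])
print(filt.classify("cheap pills here http://spam.example"))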

We'll see I guess. If your comment gets caught by my filters, let me know -- over email or Twitter I guess, since you might not be able to comment! I hope to be able to keep comments open; I've learned a lot from yall over the years.

by Andy Wingo at March 06, 2017 02:16 PM

March 01, 2017

Christian Schaller2016 in review

(Christian Schaller)

I started writing this blog entry at the end of January, but kept delaying publishing it due to waiting for some cool updates we are working on. But I decided today that instead of keep pushing the 2016 review part back I should just do this as two separate blog entries. So here is my Fedora Workstation 2016 Summary :)

We did two major releases of Fedora Workstation, namely 24 and 25, each taking us steps closer to realising our vision for the future of the Linux Desktop. I am really happy that we finally managed to default to Wayland in Fedora Workstation 25. As Jonathan Corbet of LWN so well put it: “That said, it’s worth pointing out that the move to Wayland is a huge transition; we are moving away from a display manager that has been in place since before Linus Torvalds got his first computer”.
Successfully replacing the X11 system that has been used since 1987 is no small feat, and we have to remember that many tried over the years and failed. So a big Thank You to Kristian Høgsberg for his incredible work getting Wayland off the ground and building consensus around it in the community. I am always full of admiration for those who manage to create these kinds of efforts up from their first line of code to a place where a vibrant and dynamic community can form around them.

And while we for sure have some issues left to resolve I think the launch of Wayland in Fedora Workstation 25 was so strong that we managed to keep and accelerate the momentum needed to leave the orbit of X11 and have it truly take on a life of its own.
Because we have succeeded not just in forming a community around Wayland, but in getting the existing Linux graphics driver community to partake in the move, we managed to get the major desktop toolkits to partake in the move, and I believe we have managed to get the community at large to partake in the move. And we needed all of those three to join us for this transition to have a chance to succeed with it. If this had only been about us at Red Hat and in the Fedora community who cared and contributed, it would have gone nowhere, but this was truly one of those efforts that pulled together almost everyone in the wider Linux community, and showcased what is possible when such a wide coalition of people get together. So while for instance we don’t ship an Enlightenment spin of Fedora (interested parties would be encouraged to do so though), we did value and appreciate the work they were doing around Wayland, simply because the bigger the community, the more development and bug fixing you will see on the shared infrastructure.

A part of the Wayland effort was the new input library Peter Hutterer put out called libinput. That library allowed us to clean up our input stack and also share the input code between X and Wayland. A lot of credit to Peter and Benjamin Tissoires for their work here, as the transition, like the later Wayland transition, succeeded without causing a huge amount of pain for our users.

And this is also our approach for Flatpak, which for us forms a crucial tandem with Wayland for the future of the Linux desktop. To ensure the project is managed in a way that is open and transparent to all and allows for different groups to adapt it to their specific use cases. And so far it is looking good, with early adoption and trials from the IVI community, traditional Linux distributions, device makers like Endless and platforms such as Steam. Each of these using the technologies or looking to use them in slightly different ways, but still all collaborating on pushing the shared technologies forward.

We managed to make some good steps forward in our effort to drain the swamp of Desktop Linux land (the only unfortunate thing here is a certain Trump deciding to cybersquat on the ‘drain the swamp’ mantra) by adding H264 and mp3 support to Fedora Workstation. And while the H264 support still needs some work to support more profiles (which we unfortunately did not get to in 2016), we have other codec related work underway which I think will help move the needle on this even further. The work needed on OpenH264 is not forgotten, but Wim Taymans ended up doing a lot more multimedia plumbing work for our container based future than originally planned. I am looking forward to sharing more details of where his work is going these days though, as it could bring another group of makers into the world of mainstream desktop Linux when it’s ready.

Another piece of swamp draining that happened was around the Linux Firmware Service, which went from strength to strength in 2016. We had new vendors sign up throughout the year and while not all of those efforts are public I do expect that by the end of 2017 we will have most major hardware vendors offering firmware through the service. And not only system firmware updates but things like Logitech mice and keyboards will also be available.

Of course the firmware update service also has a client part, and GNOME Software truly became a powerhouse for driving change during 2016, being the catalyst not only for the firmware update service, but also for Linux applications providing good metadata in a standardized manner. The Appstream format required by GNOME Software has become the de-facto standard. And speaking of GNOME Software, the distribution upgrade functionality we added in Fedora 24 and improved in Fedora 25 has become pretty flawless. Always possible to improve of course, but the biggest problem I heard of was a versioning issue caused by us pushing the mp3 decoding support into Fedora at the very last minute and thus not giving 3rd party repositories a reasonable chance to update their packaging to account for it. Lesson learnt for going forward :)

These are of course just a small subset of the things we accomplished in 2016, but I was really happy to see the great reception we had to Fedora 25 last year, with a lot of major new sites giving it stellar reviews and also making it their distribution of the year. The growth curves in terms of adoption we are seeing for Fedora Workstation are a great encouragement for the team and help us validate that we are on the right track with setting our development priorities. My hope for 2017 is that even more of you will decide to join our effort and switch to Fedora and 2017 will be the year of Fedora Workstation! On that note, the very positive reception to the Fedora Media Writer that we introduced as the default download for Fedora Workstation 25 was great to see. Being able to have one simple tool to use regardless of which operating system you come to us from simplifies so much in terms of both communication on our end and lowering the threshold of adoption on the end user side.

by uraeus at March 01, 2017 05:47 PM

February 27, 2017

GStreamerGStreamer 1.11.2 unstable release (binaries)

(GStreamer)

Pre-built binary images of the 1.11.2 unstable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

February 27, 2017 08:00 AM

February 24, 2017

Andy Wingoencyclopedia snabb and the case of the foreign drivers

(Andy Wingo)

Peoples of the blogosphere, welcome back to the solipsism! Happy 2017 and all that. Today's missive is about Snabb (formerly Snabb Switch), a high-speed networking project we've been working on at work for some years now.

What's Snabb all about you say? Good question and I have a nice answer for you in video and third-party textual form! This year I managed to make it to linux.conf.au in lovely Tasmania. Tasmania is amazing, with wild wombats and pademelons and devils and wallabies and all kinds of things, and they let me talk about Snabb.

Click to download video

You can check that video on the youtube if the link above doesn't work; slides here.

Jonathan Corbet from LWN wrote up the talk in an article here, which besides being flattering is a real windfall as I don't have to write it up myself :)

In that talk I mentioned that Snabb uses its own drivers. We were recently approached by a customer with a simple and honest question: does this really make sense? Is it really a win? Why wouldn't we just use the work that the NIC vendors have already put into their drivers for the Data Plane Development Kit (DPDK)? After all, part of the attraction of a switch to open source is that you will be able to take advantage of the work that others have produced.

Our answer is that while it is indeed possible to use drivers from DPDK, there are costs and benefits on both sides and we think that when we weigh it all up, it makes both technical and economic sense for Snabb to have its own driver implementations. It might sound counterintuitive on the face of things, so I wrote this long article to discuss some perhaps under-appreciated points about the tradeoff.

Technically speaking there are generally two ways you can imagine incorporating DPDK drivers into Snabb:

  1. Bundle a snapshot of the DPDK into Snabb itself.

  2. Somehow make it so that Snabb could (perhaps optionally) compile against a built DPDK SDK.

As part of a software-producing organization that ships solutions based on Snabb, I need to be able to ship a "known thing" to customers. When we ship the lwAFTR, we ship it in source and in binary form. For both of those deliverables, we need to know exactly what code we are shipping. We achieve that by having a minimal set of dependencies in Snabb -- only LuaJIT and three Lua libraries (DynASM, ljsyscall, and pflua) -- and we include those dependencies directly in the source tree. This requirement of ours rules out (2), so the option under consideration is only (1): importing the DPDK (or some part of it) directly into Snabb.

So let's start by looking at Snabb and the DPDK from the top down, comparing some metrics, seeing how we could make this combination.

                                        Snabb     DPDK
Code lines                              61K       583K
Contributors (all-time)                 60        370
Contributors (since Jan 2016)           32        240
Non-merge commits (since Jan 2016)      1.4K      3.2K

These numbers aren't directly comparable, of course; in Snabb our unit of code change is the merge rather than the commit, and in Snabb we include a number of production-ready applications like the lwAFTR and the NFV, but they are fine enough numbers to start with. What seems clear is that the DPDK project is significantly larger than Snabb, so adding it to Snabb would fundamentally change the nature of the Snabb project.

So depending on the DPDK makes it so that suddenly Snabb jumps from being a project that compiles in a minute to being a much more heavy-weight thing. That could be OK if the benefits were high enough and if there weren't other costs, but there are indeed other costs to including the DPDK:

  • Data-plane control. Right now when I ship a product, I can be responsible for the whole data plane: everything that happens on the CPU when packets are being processed. This includes the driver, naturally; it's part of Snabb and if I need to change it or if I need to understand it in some deep way, I can do that. But if I switch to third-party drivers, this is now out of my domain; there's a wall between me and something that is running on my CPU. And if there is a performance problem, I now have someone to blame that's not myself! From the customer perspective this is terrible, as you want the responsibility for software to rest in one entity.

  • Impedance-matching development costs. Snabb is written in Lua; the DPDK is written in C. I will have to build a bridge, and keep it up to date as both Snabb and the DPDK evolve. This impedance-matching layer is also another source of bugs; either we make a local impedance matcher in C or we bind everything using LuaJIT's FFI. In the former case, it's a lot of duplicate code, and in the latter we lose compile-time type checking, which is a no-go given that the DPDK can and does change API and ABI.

  • Communication costs. The DPDK development list had 3K messages in January. Keeping up with DPDK development would become necessary, as the DPDK is now in your dataplane, but it costs significant amounts of time.

  • Costs relating to mismatched goals. Snabb tries to win development and run-time speed by searching for simple solutions. The DPDK tries to be a showcase for NIC features from vendors, placing less of a priority on simplicity. This is a very real cost in the form of the way network packets are represented in the DPDK, with support for such features as scatter/gather and indirect buffers. In Snabb we were able to do away with this complexity by having simple linear buffers, and our speed did not suffer; adding the DPDK again would either force us to marshal and unmarshal these buffers into and out of the DPDK's format, or otherwise to reintroduce this particular complexity into Snabb.

  • Abstraction costs. A network function written against the DPDK typically uses at least three abstraction layers: the "EAL" environment abstraction layer, the "PMD" poll-mode driver layer, and often an internal hardware abstraction layer from the network card vendor. (And some of those abstraction layers are actually external dependencies of the DPDK, as with Mellanox's ConnectX-4 drivers!) Any discrepancy between the goals and/or implementation of these layers and the goals of a Snabb network function is a cost in developer time and in run-time. Note that those low-level HAL facilities aren't considered acceptable in upstream Linux kernels, for all of these reasons!

  • Stay-on-the-train costs. The DPDK is big and sometimes its abstractions change. As a minor player just riding the DPDK train, we would have to invest a continuous amount of effort into just staying aboard.

  • Fork costs. The Snabb project has a number of contributors but is really run by Luke Gorrie. Because Snabb is so small and understandable, if Luke decided to stop working on Snabb or take it in a radically different direction, I would feel comfortable continuing to maintain (a fork of) Snabb for as long as is necessary. If the DPDK changed goals for whatever reason, I don't think I would want to continue to maintain a stale fork.

  • Overkill costs. Drivers written against the DPDK have many considerations that simply aren't relevant in a Snabb world: kernel drivers (KNI), special NIC features that we don't use in Snabb (RDMA, offload), non-x86 architectures with different barrier semantics, threads, complicated buffer layouts (chained and indirect), interaction with specific kernel modules (uio-pci-generic / igb-uio / ...), and so on. We don't need all of that, but we would have to bring it along for the ride, and any changes we might want to make would have to take these use cases into account so that other users won't get mad.

So there are lots of costs if we were to try to hop on the DPDK train. But what about the benefits? The goal of relying on the DPDK would be that we "automatically" get drivers, and ultimately that a network function would be driver-agnostic. But this is not necessarily the case. Each driver has its own set of quirks and tuning parameters; in order for a software development team to be able to support a new platform, the team would need to validate the platform, discover the right tuning parameters, and modify the software to configure the platform for good performance. Sadly this is not a trivial amount of work.

Furthermore, using a different vendor's driver isn't always easy. Consider Mellanox's DPDK ConnectX-4 / ConnectX-5 support: the "Quick Start" guide has you first install MLNX_OFED in order to build the DPDK drivers. What is this thing exactly? You go to download the tarball and it's 55 megabytes. What's in it? 30 other tarballs! If you build it somehow from source instead of using the vendor binaries, then what do you get? All that code, running as root, with kernel modules, and implementing systemd/sysvinit services!!! And this is just step one!!!! Worse yet, this enormous amount of code powering a DPDK driver is mostly driver-specific; what we hear from colleagues whose organizations decided to bet on the DPDK is that you don't get to amortize much knowledge or validation when you switch between an Intel and a Mellanox card.

In the end when we ship a solution, it's going to be tested against a specific NIC or set of NICs. Each NIC will add to the validation effort. So if we were to rely on the DPDK's drivers, we would have paid all the costs but we wouldn't save very much in the end.

There is another way. Instead of relying on so much third-party code that it is impossible for any one person to grasp the entirety of a network function, much less be responsible for it, we can build systems small enough to understand. In Snabb we just read the data sheet and write a driver. (Of course we also benefit by looking at DPDK and other open source drivers as well to see how they structure things.) By only including what is needed, Snabb drivers are typically only a thousand or two thousand lines of Lua. With a driver of that size, it's possible for even a small ISV or in-house developer to "own" the entire data plane of whatever network function you need.

Of course Snabb drivers have costs too. What are they? Are customers going to be stuck forever paying for drivers for every new card that comes out? It's a very good question and one that I know is in the minds of many.

Obviously I don't have the whole answer, as my role in this market is a software developer, not an end user. But having talked with other people in the Snabb community, I see it like this: Snabb is still in relatively early days. What we need are about three good drivers. One of them should be for a standard workhorse commodity 10Gbps NIC, which we have in the Intel 82599 driver. That chipset has been out for a while so we probably need to update it to the current commodities being sold. Additionally we need a couple cards that are going to compete in the 100Gbps space. We have the Mellanox ConnectX-4 and presumably ConnectX-5 drivers on the way, but there's room for another one. We've found that it's hard to actually get good performance out of 100Gbps cards, so this is a space in which NIC vendors can differentiate their offerings.

We budget somewhere between 3 and 9 months of developer time to create a completely new Snabb driver. Of course it usually takes less time to develop Snabb support for a NIC that is only incrementally different from others in the same family that already have drivers.

We see this driver development work to be similar to the work needed to validate a new NIC for a network function, with the additional advantage that it gives us up-front knowledge instead of the best-effort testing later in the game that we would get with the DPDK. When you add all the additional costs of riding the DPDK train, we expect that the cost of Snabb-native drivers competes favorably against the cost of relying on third-party DPDK drivers.

In the beginning it's natural that early adopters of Snabb make investments in this base set of Snabb network drivers, as they would to validate a network function on a new platform. However over time as Snabb applications start to be deployed over more ports in the field, network vendors will also see that it's in their interests to have solid Snabb drivers, just as they now see with the Linux kernel and with the DPDK, and given that the investment is relatively low compared to their already existing efforts in Linux and the DPDK, it is quite feasible that we will see the NIC vendors of the world start to value Snabb for the performance that it can squeeze out of their cards.

So in summary, in Snabb we are convinced that writing minimal drivers that are adapted to our needs is an overall win compared to relying on third-party code. It lets us ship solutions that we can feel responsible for: both for their operational characteristics as well as their maintainability over time. Still, we are happy to learn and share with our colleagues all across the open source high-performance networking space, from the DPDK to VPP and beyond.

by Andy Wingo at February 24, 2017 05:37 PM

GStreamerGStreamer 1.11.2 unstable release

(GStreamer)

The GStreamer team is pleased to announce the second release of the unstable 1.11 release series. The 1.11 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework. The unstable 1.11 release series will lead to the stable 1.12 release series in the next weeks. Any newly added API can still change until that point.

Full release notes will be provided at some point during the 1.11 release cycle, highlighting all the new features, bugfixes, performance optimizations and other important changes.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

February 24, 2017 02:00 PM

GStreamerGStreamer 1.10.4 stable release (binaries)

(GStreamer)

Pre-built binary images of the 1.10.4 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

See /releases/1.10/ for the full list of changes.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

February 24, 2017 10:00 AM

February 23, 2017

GStreamerGStreamer 1.10.4 stable release

(GStreamer)

The GStreamer team is pleased to announce the third bugfix release in the stable 1.10 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.10.0. For a full list of bugfixes see Bugzilla.

See /releases/1.10/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

February 23, 2017 03:00 PM

February 07, 2017

Arun RaghavanStricter JSON parsing with Haskell and Aeson

I’ve been having fun recently, writing a RESTful service using Haskell and Servant. I did run into a problem that I couldn’t easily find a solution to on the magical bounty of knowledge that is the Internet, so I thought I’d share my findings and solution.

While writing this service (and practically any Haskell code), step 1 is of course defining our core types. Our REST endpoint is basically a CRUD app which exchanges these with the outside world as JSON objects. Doing this is delightfully simple:

{-# LANGUAGE DeriveGeneric #-}

import Data.Aeson
import GHC.Generics

data Job = Job { jobInputUrl :: String
               , jobPriority :: Int
               , ...
               } deriving (Eq, Generic, Show)

instance ToJSON Job where
  toJSON = genericToJSON defaultOptions

instance FromJSON Job where
  parseJSON = genericParseJSON defaultOptions

That’s all it takes to get the basic type up with free serialization using Aeson and Haskell Generics. This is followed by a few more lines to hook up GET and POST handlers, we instantiate the server using warp, and we’re good to go. All standard stuff, right out of the Servant tutorial.

The POST request accepts a new object in the form of a JSON object, which is then used to create the corresponding object on the server. Standard operating procedure again, as far as RESTful APIs go.

The nice part about doing it like this is that the input is automatically validated based on types. So input like:

{
  "jobInputUrl": 123, // should have been a string
  "jobPriority": 123
}

will result in:

Error in $: expected String, encountered Number

However, as this nice tour of how Aeson works demonstrates, if the input has keys that we don’t recognise, no error will be raised:

{
  "jobInputUrl": "http://arunraghavan.net",
  "jobPriority": 100,
  "junkField": "junkValue"
}

This behaviour is not desirable in use-cases such as mine: if the client is sending fields we don’t understand, I’d like the server to signal an error so the underlying problem can be caught early.

As it turns out, making the JSON parsing stricter so that it catches extraneous fields is just a little more involved. I didn’t find how this could be done in a single place on the Internet, so here’s the best I could do:

{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE DeriveGeneric      #-}

import Data.Aeson
import Data.Aeson.Types    (Options (..))
import Data.Data
import Data.HashMap.Strict (keys)    -- aeson 1.x: Object is a HashMap Text Value
import Data.List           (sort)
import Data.Text           (unpack)
import GHC.Generics

data Job = Job { jobInputUrl :: String
               , jobPriority :: Int
               , ...
               } deriving (Data, Eq, Generic, Show)

instance ToJSON Job where
  toJSON = genericToJSON defaultOptions

instance FromJSON Job where
  parseJSON json = do
    job <- genericParseJSON defaultOptions json
    if keysMatchRecords json job
    then
      return job
    else
      fail "extraneous keys in input"
    where
      -- Make sure the set of JSON object keys is exactly the same as the fields in our object
      keysMatchRecords (Object o) d =
        let
          objKeys   = sort . fmap unpack . keys
          recFields = sort . fmap (fieldLabelModifier defaultOptions) . constrFields . toConstr
        in
          objKeys o == recFields d
      keysMatchRecords _ _          = False

The idea is quite straightforward, and likely very easy to make generic. The Data.Data module lets us extract the constructor for the Job type, and the list of fields in that constructor. We just make sure that’s an exact match for the list of keys in the JSON object we parsed, and that’s it.

Of course, I’m quite new to the Haskell world so it’s likely there are better ways to do this. Feel free to drop a comment with suggestions! In the mean time, maybe this will be useful to others facing a similar problem.

Update: I’ve fixed parseJSON to properly use fieldLabelModifier from the default options, so that comparison actually works when you’re not using Aeson‘s default options. Thanks to /u/tathougies for catching that.

I’m also hoping to rewrite this in generic form using Generics, so watch this space for more updates.

by Arun at February 07, 2017 05:10 AM

February 01, 2017

GStreamerGStreamer 1.10.3 stable release (binaries)

(GStreamer)

Pre-built binary images of the 1.10.3 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

See /releases/1.10/ for the full list of changes.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

February 01, 2017 08:40 AM

January 30, 2017

GStreamerGStreamer 1.10.3 stable release

(GStreamer)

The GStreamer team is pleased to announce the third bugfix release in the stable 1.10 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.10.0. For a full list of bugfixes see Bugzilla.

See /releases/1.10/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

January 30, 2017 02:25 PM

January 29, 2017

Thomas Vander SticheleA morning in San Francisco

(Thomas Vander Stichele)

This morning in San Francisco, I check out from the hotel and walk to Bodega, a place I discovered last time I was here. I walk past a Chinese man swinging his arms slowly and deliberately, celebrating a secret of health us Westerners will never know. It is Chinese New Year, and I pass bigger groups celebrating and children singing. My phone takes a picture of a forest of phones taking pictures.

I get to the corner hoping the place is still in business. The sign outside asks “Can it all be so simple?” The place is open, so at least for today, the answer is yes. I take a seat at the bar, and I’m flattered when the owner recognizes me even if it’s only my second time here. I ask her if her sister made it to New York to study – but no, she is trekking around Colombia after helping out at the bodega every weekend for the past few months. I get a coffee and a hibiscus mimosa as I ponder the menu.

The man next to me turns out to be her cousin, Amir. He took a plane to San Francisco from Iran yesterday after hearing that an executive order might get signed banning people with visas from seven countries from entering the US. The order was signed two hours before his plane landed. He made it through immigration. The fact sheet arrived on the immigration officers’ desks right after he passed through, and the next man in his queue, coming from Turkey, did not make it through. Needles and eyes.

Now he is planning to get a job, and get a lawyer to find a way to bring over his wife and 4-year-old child who are now officially banned from following him for 120 days or more. In Iran he does business strategy and teaches at University. It hits home really hard that we are not that different, he and I, and how undeservedly lucky I am that I won’t ever be faced with such a horrible choice to make. Paria, the owner, chimes in, saying that she’s an Iranian Muslim who came to the US 15 years ago with her family, and they all can’t believe what’s happening right now.

The church bell chimes a song over Washington Square Park and breaks the spell, telling me it’s eleven o’clock and time to get going to the airport.

flattr this!

by Thomas at January 29, 2017 04:57 AM

January 19, 2017

Arun RaghavanQuantifying Synchronisation: Oscilloscope Edition

I’ve written a bit in my last two blog posts about the work I’ve been doing in inter-device synchronised playback using GStreamer. I introduced the library and then demonstrated its use in building video walls.

The important thing in synchronisation, of course, is how much in-sync are the streams? The video in my previous post gave a glimpse into that, and in this post I’ll expand on that with a more rigorous, quantifiable approach.

Before I start, a quick note: I am currently providing freelance consulting around GStreamer, PulseAudio and open source multimedia in general. If you’re looking for help with any of these, do get in touch.

The sync measurement setup

Quantifying what?

What is it that we are trying to measure? Let’s look at this in terms of the outcome — I have two computers, on a network. Using the gst-sync-server library, I play a stream on both of them. The ideal outcome is that the same video frame is displayed at exactly the same time, and the audio sample being played out of the respective speakers is also identical at any given instant.

As we saw previously, the video output is not a good way to measure what we want. This is because video displays are updated in sync with the display clock, over which consumer hardware generally does not have control. Besides, our eyes are not that sensitive to minor differences in timing unless images are side-by-side. After all, we’re fooling them with static pictures that change every 16.67ms or so.

Using audio, though, we should be able to do better. Digital audio streams for music/videos typically consist of 44100 or 48000 samples a second, so we have a much finer granularity than video provides us. The human ear is also fairly sensitive to timings with regards to sound. If it hears the same sound at an interval larger than 10 ms, you will hear two distinct sounds and the echo will annoy you to no end.
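
To put rough numbers on that granularity (taking a 48 kHz stream, and assuming the 16.67 ms mentioned above corresponds to a 60 Hz display):

\[ \frac{1}{48000\ \mathrm{Hz}} \approx 20.8\ \mu\mathrm{s}\ \text{per audio sample} \qquad \text{vs.} \qquad \frac{1}{60\ \mathrm{Hz}} \approx 16.7\ \mathrm{ms}\ \text{per video frame} \]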

Measuring audio is also good enough because once you’ve got audio in sync, GStreamer will take care of A/V sync itself.

Setup

Okay, so now that we know what we want to measure, but how do we measure it? The setup is illustrated below:

Sync measurement setup illustrated

As before, I’ve set up my desktop PC and laptop to play the same stream in sync. The stream being played is a local audio file — I’m keeping the setup simple by not adding network streaming to the equation.

The audio itself is just a tick sound every second. The tick is a simple 440 Hz sine wave (A₄ for the musically inclined) that runs for 1600 samples. It sounds something like this:

https://arunraghavan.net/wp-content/uploads/test-audio.ogg
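
For the curious, a tick like this is straightforward to generate. Here is a rough Python sketch; the 44100 Hz sample rate, the 10 second duration and the 16-bit WAV output are assumptions of mine, not necessarily how the file above was produced.

# Generate a 440 Hz sine tick, 1600 samples long, once per second.
import math
import struct
import wave

RATE = 44100          # samples per second (assumed)
TICK_SAMPLES = 1600   # length of each tick
SECONDS = 10          # total length of the file (assumed)

samples = []
for n in range(RATE * SECONDS):
    in_tick = (n % RATE) < TICK_SAMPLES
    value = math.sin(2 * math.pi * 440 * n / RATE) if in_tick else 0.0
    samples.append(int(value * 32767))

with wave.open("tick.wav", "wb") as f:
    f.setnchannels(1)        # mono
    f.setsampwidth(2)        # 16-bit signed samples
    f.setframerate(RATE)
    f.writeframes(struct.pack("<%dh" % len(samples), *samples))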

I’ve connected the 3.5mm audio output of both the computers to my faithful digital oscilloscope (a Tektronix TBS 1072B if you wanted to know). So now measuring synchronisation is really a question of seeing how far apart the leading edge of the sine wave on the tick is.

Of course, this assumes we’re not more than 1s out of sync (that’s the periodicity of the tick itself), and I’ve verified that by playing non-periodic sounds (any song or video) and making sure they’re in sync as well. You can trust me on this, or better yet, get the code and try it yourself! :)

The last piece to worry about — the network. How well we can sync the two streams depends on how well we can synchronise the clocks of the pipeline we’re running on each of the two devices. I’ll talk about how this works in a subsequent post, but my measurements are done on both a wired and wireless network.

Measurements

Before we get into it, we should keep in mind that due to how we synchronise streams — using a network clock — how in-sync our streams are will vary over time depending on the quality of the network connection.

If this variation is small enough, it won’t be noticeable. If it is large (10s of milliseconds), then we may start to notice it as echo, or as glitches when the pipeline tries to correct for the lack of sync.

In the first setup, my laptop and desktop are connected to each other directly via a LAN cable. The result looks something like this:

The first two images show the best case — we need to zoom in real close to see how out of sync the audio is, and it’s roughly 50µs.

The next two images show the “worst case”. This time, the zoomed out (5ms) version shows some out-of-sync-ness, and on zooming in, we see that it’s in the order of 500µs.

So even our bad case is actually quite good — sound travels at about 340 m/s, so 500µs is the equivalent of two speakers about 17cm apart.
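
For reference, the arithmetic behind that figure:

\[ 340\ \mathrm{m/s} \times 500\ \mu\mathrm{s} = 0.17\ \mathrm{m} = 17\ \mathrm{cm} \]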

Now let’s make things a little more interesting. With both my laptop and desktop connected to a wifi network:

On average, the sync can be quite okay. The first pair of images show sync to be within about 300µs.

However, the wifi on my desktop is flaky, so you can see it go off up to 2.5ms in the next pair. In my setup, it even goes off up to 10-20ms, before returning to the average case. The next two images show it go back and forth.

Why does this happen? Well, let’s take a quick look at what ping statistics from my desktop to my laptop look like:

Ping from desktop to laptop on wifi

That’s not good — you can see that the minimum, average and maximum RTT are very different. Our network clock logic probably needs some tuning to deal with this much jitter.

Conclusion

These measurements show that we can get some (in my opinion) pretty good synchronisation between devices using GStreamer. I wrote the gst-sync-server library to make it easy to build applications on top of this feature.

The obvious area to improve is how we cope with jittery networks. We’ve added some infrastructure to capture and replay clock synchronisation messages offline. What remains is to build a large enough body of good and bad cases, and then tune the sync algorithm to work as well as possible with all of these.

Also, Florent over at Ubicast pointed out a nice tool they’ve written to measure A/V sync on the same device. It would be interesting to modify this to allow for automated measurement of inter-device sync.

In a future post, I’ll write more about how we actually achieve synchronisation between devices, and how we can go about improving it.

by Arun at January 19, 2017 12:09 PM