September 21, 2016

Andy Wingo: is go an acceptable cml?

(Andy Wingo)

Yesterday I tried to summarize the things I know about Concurrent ML, and I came to the tentative conclusion that Go (and any Go-like system) was an acceptable CML. Turns out I was both wrong and right.

you were wrong when you said everything's gonna be all right

I was wrong, in the sense that programming against the CML abstractions lets you do more things than programming against channels-and-goroutines. Thanks to Sam Tobin-Hochstadt for pointing this out. As an example, consider a little process that tries to receive a message off a channel, and times out otherwise:

func withTimeout(ch chan int, timeout time.Duration) int {
  // Assumes "time" is imported.
  timeoutChannel := make(chan int)
  go func() {
    time.Sleep(timeout)
    timeoutChannel <- 0
  }()
  select {
  case msg := <-ch:
    return msg
  case <-timeoutChannel:
    return 0
  }
}

I think that's the first Go I've ever written. Anyway, I think we can see how it should work: we return the message from the channel, unless the timeout fires first.

But, what if the message is itself a composite message somehow? For example, say we have a transformer that reads a value from a channel and adds 1 to it:

func onePlus(in chan int) chan int {
  out := make(chan int)
  go func() { out <- 1 + <-in }()
  return out
}

What if we do a withTimeout(onePlus(numbers), 0)? Assume the timeout fires first and that's the result that select chooses. There's still that onePlus goroutine out there trying to read from in, and at some point it will probably succeed, but nobody will ever read its value. At that point the number just vanishes into the ether. Maybe that's OK in certain domains, but certainly not in general!

What CML gives you is the ability to express an event (which is something like the possibility of sending or receiving a message on a channel) in such a way that we don't run into this situation. Specifically, with the wrap combinator we would make an event such that receiving on numbers would run a function on the received message and yield that as the event's value -- which is of course the same as what we have, except that in CML the select wouldn't actually read the message off the channel unless it chose that channel for input.

Of course in Go you could just rewrite your program, so that the select statement looks like this:

select {
case msg := <-ch:
  return msg + 1
case <-timeoutChannel:
  return 0
}

But here we're operating at a lower level of abstraction; we were forced to intertwingle our concerns of adding 1 and our concerns of timeout. CML is more expressive than Go.

you were right when you said we're all just bricks in the wall

However! I was right in the sense that you can build a CML system on top of Go-like systems (though possibly not Go in particular). Thanks to Vesa Karvonen for this comment and the link to their proof-of-concept CML implementation on top of Clojure's core.async. I understand Vesa has an implementation in F# as well.

Folks should read Vesa's code, after reading the Reppy papers of course; it's delightfully short and expressive. The basic idea is that event composition operators like choose and wrap build up data structures instead of doing things. The sync operation then grovels through those data structures to collect a list of channels to pass on to core.async's equivalent of select. When select returns, sync determines which event the chosen channel and message correspond to, and proceeds to "activate" the event (and, as a side effect, possibly issue NACK messages to other channels).
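
To make this concrete, here is a minimal sketch of the same idea in Go, staying with the Go framing of these posts. All the names (Event, Recv, Wrap, Sync) are hypothetical, and a real implementation like Vesa's also handles sends, nested choices and NACKs; this only shows the essential trick, namely that the wrapped function runs only after select has committed to a channel, so no message is ever consumed from an event that wasn't chosen:

// Assumes "reflect" is imported.

// An Event is plain data describing a possible receive; nothing
// happens until Sync.
type Event struct {
  ch   chan int      // channel this event would receive from
  wrap func(int) int // transformation applied only if this event is chosen
}

// Recv makes a base event: receive from ch, return the message as-is.
func Recv(ch chan int) Event {
  return Event{ch: ch, wrap: func(x int) int { return x }}
}

// Wrap layers a function on top of an existing event.
func Wrap(e Event, f func(int) int) Event {
  inner := e.wrap
  return Event{ch: e.ch, wrap: func(x int) int { return f(inner(x)) }}
}

// Sync selects over all the events' channels, and only then runs the
// chosen event's wrap function.
func Sync(events ...Event) int {
  cases := make([]reflect.SelectCase, len(events))
  for i, e := range events {
    cases[i] = reflect.SelectCase{
      Dir:  reflect.SelectRecv,
      Chan: reflect.ValueOf(e.ch),
    }
  }
  chosen, v, _ := reflect.Select(cases)
  return events[chosen].wrap(int(v.Int()))
}

With something like this, the leaky withTimeout(onePlus(numbers), 0) from above becomes Sync(Wrap(Recv(numbers), func(x int) int { return x + 1 }), Recv(timeoutChannel)): the increment is attached to the event instead of running in its own goroutine, so if the timeout wins, no number is ever consumed from numbers.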

Provided you can map from the chosen select channel/message back to the event (something that core.async can mostly do, with a caveat; see the code), you can build CML on top of channels and goroutines.

o/~ yeah you were wrong o/~

On the other hand! One advantage of CML is that its events are not limited to channel sends and receives. I understand that timeouts, thread joins, and maybe some other event types are first-class event kinds in many CML systems. Michael Sperber, current Scheme48 maintainer and functional programmer, tells me that simply wrapping events in channels+goroutines works but can incur a big performance overhead relative to supporting those event types natively, due to the cost of creating the new goroutine and channel and the associated scheduling. He quotes 10X as the overhead!

So although CML and Go appear to be inter-expressible, maybe a proper solution will base the simple channel send/receive interface on CML rather than the other way around.

Also, since these events are now second-class, it must be OK to lose them, for the same reason that the naïve withTimeout could lose a message from numbers. That is usually fine for timeouts, but for other event types you may have to think about it more, and possibly provide an infinite stream of the message. (Of course the wrapper goroutine would be collected if the channel becomes unreachable.)

you were right when you said this is the end

I've long wondered how contemporary musicians deal with the enormous, crushing weight of recorded music. I don't really pick any more but hoo am I feeling this now. I think for Guile, I will continue hacking on fibers in a separate library, and I think that things will remain that way for the next couple years and possibly more. We need more experience and more mistakes before blessing and supporting any particular formulation of highly concurrent programming. I will say though that I am delighted that we are able to actually do this experimentation on a library level and I look forward to seeing what works out :)

Thanks again to Vesa, Michael, and Sam for sharing their time and knowledge; all errors are of course mine. Happy hacking!

by Andy Wingo at September 21, 2016 09:29 PM

September 20, 2016

Andy Wingo: concurrent ml versus go

(Andy Wingo)

Peoples! Lately I've been navigating the guile-ship through waters unknown. This post is something of an echolocation to figure out where the hell this ship is and where it should go.

Concretely, I have been working on getting a nice lightweight concurrency system rolling for Guile. I'll write more about that later, but you can think of it as being modelled on Go, though built as a library. (I had previously described it as "Erlang-like", but that's just not accurate.)

Earlier this year at Curry On this topic was burning in my mind and of course when I saw the language-hacker fam there I had to bend their ears. My targets: Matthew Flatt, the amazing boundary-crossing engineer, hacker, teacher, researcher, and implementor of Racket, and Matthias Felleisen, the godfather of the PLT research family. I saw them sitting together and I thought, you know what, what can they have to say to each other? These people have been talking together for 30 years right? Surely they are actually waiting for some ignorant dude to saunter up to the PL genius bar, right?

So saunter I do, saying, "if someone says to you that they want to build a server that will handle 100K or so simultaneous connections on Racket, what abstraction do you tell them to use? Racket threads?" Apparently: yes. A definitive yes, in the case of Matthias, with a pointer to Robby Findler's paper on kill-safe abstractions; and still a yes from Matthew, with the caveat that for the concrete level of concurrency I described, you'd have to run tests. More fundamentally, I was advised to look at Concurrent ML (on which Racket's concurrency facilities were based), as CML was much better put together than many modern variants like Go.

This was very interesting and new to me. As y'all probably know, I don't have a formal background in programming languages, and although I've read a lot of literature, reading things only makes you aware of the growing dimension of the not-yet-read. Concurrent ML was even beyond my not-yet-read horizon.

So I went back and read a bunch of papers. Turns out Concurrent ML is like Lisp in that it has a tribe and a tightly-clutched history and a diaspora that reimplements it in whatever language they happen to be working in at the moment. Kinda cool, and, um... a bit hard to appreciate in the current-day context when the only good references are papers from 10 or 20 years ago.

However, after reading a bunch of John Reppy papers, here is my understanding of what Concurrent ML is. I welcome corrections; surely I am getting this wrong.

1. CML is like Go, composed of channels and goroutines. (Forgive the modern referent; I assume most folks know Go at this point.)

2. Unlike Go, in CML a channel is never buffered. To make a buffered channel in CML, you spawn a thread that manages a buffer between two channels. (A sketch of this pattern in Go follows this list.)

3. Message send and receive operations in CML are built on a lower-level primitive called "events". (send ch x) is instead equivalent to (sync (send-event ch x)). It's like an event is the derivative of a message send with respect to time, or something.

4. Events can be combined and transformed using the choose and wrap combinators.

5. Doing a sync on an event created by choose allows a user to build select in "user-space", as a library. Cool stuff. So this is what events are for.

6. There are separate event type implementations for timeouts, channel send/recv blocking operations, file descriptor blocking operations, syscalls, thread joins, and the like. These are supported by the CML implementation.

7. The early implementations of Concurrent ML were concurrent but not parallel; they did not run multiple "goroutines" on separate CPU cores at the same time. It was only in like 2009 that people started to do CML in parallel. I do not know if this late parallelism has a practical impact on the viability of CML.
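
Here is the buffered channel from point 2, sketched in Go for concreteness. The function name is mine and the sketch is simplified (int-valued messages, no way to shut the goroutine down); it shows the shape of the pattern: a thread that can always receive, and that offers the oldest queued message whenever it holds one:

func buffer(in chan int) chan int {
  out := make(chan int)
  go func() {
    var queue []int
    for {
      if len(queue) == 0 {
        // Nothing buffered yet: we can only receive.
        queue = append(queue, <-in)
      }
      select {
      case v := <-in:
        queue = append(queue, v)
      case out <- queue[0]:
        queue = queue[1:]
      }
    }
  }()
  return out
}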

ok go

What is the relationship of CML to Go? Specifically, is CML more expressive than Go? (I assume the reverse is not the case, but that would also be an interesting result!)

There are a few languages that only allow you to select over message receives (not sends), but Go's select doesn't have this limitation, so that's not a differentiator.

Some people say that it's nice to have events as the common denominator, but I don't get this argument. If the only event under consideration is message send or receive over a channel, events + choose + sync is the same in expressive power as a built-in select, as far as I can see. If there are other events, then your runtime already has to support them either way, and something like (let ((ch (make-channel))) (spawn-fiber (lambda () (put-message ch exp))) (get-message ch)) should be sufficient for any runtime-supported event in exp, like sleeps or timeouts or thread joins or whatever.
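
In Go terms, the pattern in that snippet is roughly the following hypothetical helper, which turns any runtime-supported blocking operation into a channel that select can wait on:

// asChannel evaluates op in a fresh goroutine and exposes its result
// as a receive on a channel.
func asChannel(op func() int) chan int {
  ch := make(chan int)
  go func() { ch <- op() }()
  return ch
}

A timeout event, for example, is just this helper applied to a function that sleeps and then returns.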

To me it seems like Go has made the right choices here. I do not see the difference, and that's why I wrote all this: to be shown the error of my ways. Choosing channels, send, receive, and select as the primitives seems to have the same power as SML events.

Let this post be a pentagram on the floor, then, to summon the CML cognoscenti. Well-actuallies are very welcome; hit me up in the comments!

[edit: Sam Tobin-Hochstadt tells me I got it wrong and I believe him :) In the meantime while I work out how I was wrong, examples are welcome!]

by Andy Wingo at September 20, 2016 09:33 PM

September 15, 2016

GStreamer: GStreamer Conference 2016: Last chance for early-bird discount on tickets

(GStreamer)

This is a quick reminder that registration for the GStreamer conference 2016 is open, and if you register today you can still benefit from the discounted early-bird registration fee, which is only available until Thursday 15 September 2016 (inclusive). After that the registration fee for professional tickets will rise to 340 EUR.

Register now for the GStreamer Conference!

About the GStreamer Conference

The GStreamer Conference 2016 will take place on 10-11 October 2016 in Berlin (Germany), and will take place in the same week as the Embedded Linux Conference Europe. More information and details how to register can be found on the conference website.

September 15, 2016 10:00 AM

September 14, 2016

GStreamer: GStreamer Conference 2016: Collabora Platinum Sponsor

(GStreamer)

The GStreamer project is pleased to welcome back Collabora as Platinum level sponsor at this year's GStreamer Conference in Berlin.

Collabora (https://www.collabora.com) is a consultancy with more than 10 years of experience in open source technologies. As well as employing several core contributors, they have sponsored the GStreamer Conference every year since the very first one.

Thanks Collabora!

About the GStreamer Conference

The GStreamer Conference 2016 will take place on 10-11 October 2016 in Berlin (Germany), and will take place in the same week as the Embedded Linux Conference Europe. More information and details how to register can be found on the conference website.

September 14, 2016 01:00 PM

September 11, 2016

Jean-François Fortin Tam: Taking down my online portfolio

Cleaning up my apartment today, hoping to get rid of a pile of draft papers that has been cluttering my space for six months, I’m taking the opportunity to write about what this particular pile of paper means (yes, my blogging backlog goes that far—I am draining the swamp one post at a time!)

Pictured: my floor, littered with intermediate drafts of my new portfolio, along with a few printer calibration photos (upper-left) and a copy of the new Annual Report sitting around

For the longest of times, as a designer flying under the radar (my management and marketing work is more “visible”, ironically), I did not have a proper portfolio. In fact, I had made a portfolio only once in my life, assembled in a few days in 2011. While my work as idéemarque in the past few years has led me to do a fair amount of design work, I was so busy with business work in general that I never took the time to do the massive amount of research and planning required to make a new one.

In early 2016, I finally sat down and made a new portfolio. Those who have worked with me know I’m a perfectionist, and since I had finally set my mind on doing this, I would be doing it right. It indeed took me 2-3 months of research, writing, designing, revising, redesigning, print testing, revising again and fixing issues, until I ran out of issues to fix.

From the start, I decided that it would be a top-quality print-only edition. I used beautiful, legible and readable fonts to make reading on paper a delightful experience. I invested in premium quality paper (including some semi-transparent sheets) and used a high-end printer, but not a commercial printing press as that would have been ridiculously expensive and inflexible for printing only a handful of custom-made copies. A print shop couldn’t have provided all the fancy features I required anyway (fancy binding, specialty paper, embossing, etc.). Therefore each of my portfolios is a hand-made craft—even if it might not seem that way as I made it full bleed (edge to edge). This whole project made me learn way too much about troubleshooting printer color problems, printing on unconventional paper types, faking bleeds, and the traditional bookbinding techniques of my ancestors.

Pictured: some really thick and heavy paper used for the portfolio

In 2016, a printed portfolio might seem outdated compared to an online portfolio. Yet, mine is designed for print, precisely because the web and cheap emails have become the norm in this era of constant noise, and because I prefer to control everything in the reader’s experience—including the delivery, the typography, the texture, size, depth, resolution, and the physical (and mental) weight of the document. And not having to “/%$?%*)&!?” around with CSS and “modern” web development is a huge plus.

CSS is awesome mug

Pictured: not my mug

Some said, “How can this be such an involved process, taking you months?” to which I replied that, among other things, the document was roughly thirty pages. This got me some expressions of bewilderment, until I explained, “It’s not a graphic design portfolio, it’s a human-computer interaction design research portfolio.”

While it took a long time, the project was worth it, both as a personal challenge and from the reactions I got afterwards.

I have now decided to remove any traces of my portfolio from my website (except the photo and illustrations gallery). A website is not only a pain in the ass to maintain regularly, it’s typically ignored or skimmed anyway (basically, Schrödinger’s Visitor). For those who requested an electronic version of my portfolio over the past few months, I have insisted on shipping them the print version instead—tailored for them, of course.

by nekohayo at September 11, 2016 06:38 PM

September 07, 2016

Sebastian Dröge: Writing GStreamer Elements in Rust (Part 2): Don't panic, we have better assertions now – and other updates

(Sebastian Dröge)

It’s a while since last article about writing GStreamer plugins in Rust, so here is a short (or not so short?) update of the changes since back then. You might also want to attend my talk at the GStreamer Conference on 10-11 October 2016 in Berlin about this topic.

At this point it’s still only with the same elements as before (HTTP source, file sink and source), but the next step is going to be something more useful (an element processing actual data, parser or demuxer is not decided yet) now that I’m happy with the general infrastructure. You can still find the code in the same place as before on GitHub here, and that’s where also all updates are going to be.

The main sections here will be Error Handling, Threading and Asynchronous IO.

Error Handling & Panics

First of all let’s get started with a rather big change that shows some benefits of Rust over C. There are two types of errors we care about here: expected errors and unexpected errors.

Expected Errors

In GLib-based libraries we usually report errors with some kind of boolean return value plus an optional GError that allows propagating further information about what exactly went wrong to the caller, and also to the user. Bindings sometimes convert these directly into exceptions of the target language or whatever construct exists there.

Unfortunately, in GStreamer we don't use GErrors very often. Consider for example GstBaseSrc (in pseudo-C++/Java/… for simplicity):

class BaseSrc {
    ...
    virtual gboolean start();
    virtual gboolean stop();
    virtual GstFlowReturn create(GstBuffer ** buffer);
    ...
}

For start()/stop() there is just a boolean; for create() there is at least an enum with a few variants. This is far from ideal, so what is additionally required of implementors of those virtual methods is that they post error messages with further details if something goes wrong. Those are propagated out of the normal control flow via the GstBus to the surrounding bins and in the end the application. It would be much nicer if instead we had GErrors there and made it mandatory for implementors to return one if something goes wrong. These could still be converted to error messages, but at a central place then. Something to think about for the next major version of GStreamer.

This is of course only for expected errors, that is, for things where we know that something can go wrong and want to report that.

Rust

In Rust this problem is solved in a similar way, see the huge chapter about error handling in the documentation. You basically return either the successful result, or something very similar to a GError:

trait Src {
    ...
    fn start(&mut self) -> Result<(), ErrorMessage>;
    fn stop(&mut self) -> Result<(), ErrorMessage>;
    fn create(&mut self, buffer: &mut [u8]) -> Result<usize, FlowError>;
    ...
}

Result<T, E> is the type behind that, and it comes with convenient macros for propagating errors upwards (try!()), with methods for chaining multiple failing calls and/or converting errors (map(), and_then(), map_err(), or_else(), etc.), and with libraries that make it easier to define errors, including all the glue code required for combining different error types from different parts of the code.

Similar to Result, there is also Option<T>, which can be Some(x) or None, to signal the possible absence of a value. It works similarly and has a similar API, but is generally not meant for error handling. It's now used instead of GST_CLOCK_TIME_NONE (aka u64::MAX) to signal the absence of e.g. a stop position of a seek, or the absence of a known size of the stream. It's more explicit than giving one specific integer value a completely different meaning.

How is this different?

The most important difference from my point of view here is that you must handle errors in one way or another. Otherwise the compiler won't accept your code. If something can fail, you must explicitly handle this and can't just silently ignore the possibility of failure, while in C people tend to just ignore error return values and assume that things went fine.

What’s ErrorMessage and FlowError, what else?

As you probably expect, ErrorMessage maps to the GStreamer concept of error messages and contains exactly the same kind of information. In Rust this is implemented slightly differently, but in the end it results in the same thing. The main difference here is that whenever e.g. start fails, you must provide an error message and can't just fail silently. That error message can then be used by the caller, and e.g. be posted on the bus (and that's exactly what happens).

FlowError is basically the negative part (the errors or otherwise non-successful results) of GstFlowReturn:

pub enum FlowError {
    NotLinked,
    Flushing,
    Eos,
    NotNegotiated(ErrorMessage),
    Error(ErrorMessage),
}

Similarly, for the actual errors (NotNegotiated and Error), an actual error message must be provided and that then gets used by the caller (and is posted on the bus).

And in the same way, if setting an URI fails we now return a Result<(), UriError>, which then reports the error properly to GStreamer.

In summary, if something goes wrong, we know about that, have to handle/report that and have an error message to post on the bus.

Macros are awesome

As a side note, creating error messages for GStreamer is not too convenient, as they want information like the current source file, line number, function, etc. Like in C, I've created a macro to make such an error message. Unlike in C, macros in Rust are actually awesome and not just arbitrary text substitution. Instead they work via pattern matching and allow you to distinguish all kinds of different cases, can be recursive, and are somewhat typed (expression vs. statement vs. block of code vs. type name, …).

Unexpected Errors

So far this was about expected errors, which have to be handled explicitly in Rust but not in C, and for which we have some kind of data structure to pass around. What about the other cases, the errors that should never happen (but usually do, sooner or later), where your program would just be completely broken and all your assumptions wrong, and you wouldn't know what to do anyway?

In C with GLib we usually have 3 ways of handling these: 1) not at all (and crashing, doing something wrong, deadlocking, deleting all your files, …), 2) explicitly asserting what the assumptions in the code are and crashing cleanly otherwise (SIGABRT), or 3) printing a warning and immediately returning some default value from the function instead of going on.

None of these 3 cases can be handled by the application, which seems fair because they should never happen, and if they do we wouldn't know what to do anyway. 1) is obviously the least desirable but the most common, 3) is only slightly better (you get a warning, but usually sooner or later something will crash anyway because you're in an inconsistent state), and 2) is the cleanest. However 2) is not what you really want either: your application should somehow be able to return to a clean state if it can (by e.g. storing the current user data, stopping everything and loading up a new UI with the stored user data and some dialog).

Rust

Of course no Rust code should ever run into case 1) above and randomly crash, cause memory corruption or similar. But this can still happen due to bugs in Rust itself, the use of unsafe code, or code wrapping e.g. a C library. There's not really anything that can be done about this.

For the other two cases there is something that can be done, however: catching panics. Whenever something goes wrong in unexpected ways, the corresponding Rust code can call the panic!() macro in one way or another. Like via assertions, or when "asserting" that a Result is never the error case by calling unwrap() on it (you don't have to handle errors, but you have to explicitly opt in to ignoring them by calling unwrap()).

What happens from there on is similar to exception handling in other languages (unless you compiled your code so that panics automatically kill the application). The stack gets unwound, everything gets cleaned up on the way, and at some point either everything stops or someone catches that. The boundary for the unwinding is either your main() in Rust, or if the code is called from C, then at that exact point (i.e. for the GStreamer plugins at the point where functions are called from GStreamer).

So what?

At the point where GStreamer calls into the Rust code, we now catch all unwinds that might happen and remember that one happened. This is then converted into a GStreamer error message (so that the application can handle it in a meaningful way), and by remembering it we prevent any further calls into the Rust code, immediately turning them into error messages too.

This allows keeping the inconsistent state inside the element and lets the application e.g. remove the element and replace it with something else, restart the pipeline, or do whatever else it wants to do. Assertions are always local to the element and are not going to take down the whole application!

Threading

The other major change that happened is that Sink and Source are now single-threaded. There is no reason why the code would have to worry about threading as everything happens in exactly one thread (the streaming thread), except for the setting/getting of the URI (and possibly other “one-time” settings in the future).

To solve that, at the translation layer between C and Rust there is now a (Rust!) wrapper object that handles all the threading (in Rust with Mutexes, which work like the ones in C++, or atomic booleans/integers), stores the URI separately from the Source/Sink and just passes the URI to the start() function. This made the code much cleaner and made it even simpler to write new sources or sinks. No more multi-threading headaches.

I think that we should in general move to such a simpler model in GStreamer and not require a full-fledged, multi-threaded GstElement subclass to be implemented, but instead something more use-case oriented (Source, sink, encoder, decoder, …) that has a single threaded API and hides all the gory details of GstElement. You don’t have to know these in most cases, so you shouldn’t have to know them as is required right now.

Simpler Source/Sink Traits

Overall the two traits look like this now, and that’s all you have to implement for a new source or sink:

pub type UriValidator = Fn(&Url) -> Result<(), UriError>;

pub trait Source {
    fn uri_validator(&self) -> Box<UriValidator>;

    fn is_seekable(&self) -> bool;
    fn get_size(&self) -> Option<u64>;

    fn start(&mut self, uri: Url) -> Result<(), ErrorMessage>;
    fn stop(&mut self) -> Result<(), ErrorMessage>;
    fn fill(&mut self, offset: u64, data: &mut [u8]) -> Result<usize, FlowError>;
    fn seek(&mut self, start: u64, stop: Option<u64>) -> Result<(), ErrorMessage>;
}

pub trait Sink {
    fn uri_validator(&self) -> Box<UriValidator>;

    fn start(&mut self, uri: Url) -> Result<(), ErrorMessage>;
    fn stop(&mut self) -> Result<(), ErrorMessage>;

    fn render(&mut self, data: &[u8]) -> Result<(), FlowError>;
}

Asynchronous IO

Last time I mentioned that a huge missing feature was asynchronous IO, in a composable way. There is some news here now: there's an abstract implementation of futures, and a set of higher-level APIs around mio for doing actual IO, called tokio. Independent of that there's also futures-cpupool, which allows running arbitrary computations as futures on the threads of a thread pool.

Recently the HTTP library Hyper, as used by the HTTP source (and Servo), also got a branch that moves it to tokio to allow asynchronous IO. Once that lands, it can relatively easily be used inside the HTTP source to allow interrupting HTTP requests at any time.

It seems like this area is moving in a very promising direction now, solving my biggest technical concern in a very pleasant way.

by slomo at September 07, 2016 12:37 PM

September 06, 2016

Arun Raghavan: GStreamer on Android and universal builds

This is a quick PSA for those of you using the GStreamer binary builds for Android.

With the Android NDK r12, the default behaviour while building native code changed from building for armeabi to building for all ABIs. So if your app doesn’t specify APP_ABI in its Application.mk, you will now get an error about unsupported architectures. This was tracked as bug 770631.

The idea behind this change is that your Android app should ship versions of your native code for all supported architectures as a “universal” build, so it is accessible to as many devices as possible.

To deal with this, we now provide a universal tarball which contains binaries for all the architectures that we support. This is currently ARM, ARMv7-A, ARMv8-A (64-bit), x86, and x86-64. That leaves MIPS and MIPS64, which are not currently supported.

If you’ve been using the GStreamer Android binaries before GStreamer 1.9.2, then you should start using the universal tarball rather than the architecture-specific tarball. You will need minor updates to your native build, like we made to the player example. You probably want to put the gstAndroidRoot variable in ~/.gradle/gradle.properties instead, though.

As Sebastian announced, assuming all goes well with the universal tarballs, we will stop shipping the per-arch tarballs — they are redundant, and just take up CI and disk resources.

There are some things that I’d like for us to be able to do better. The first is that Android Studio doesn’t pick up native code with our current build approach. This is a limitation of the Android Gradle NDK plugin, which doesn’t support a custom build. This should change with Android Studio 2.2.

I would also like to integrate better with Android Studio — either be able to specify the GStreamer Android binary path in the UI (like you do for the SDK/NDK), or better yet, have it be possible to specify the dependency in Gradle, and have it be automatically pulled from the Internet. If any of you are familiar with how we can do this, please shout out!

by Arun at September 06, 2016 03:34 AM

September 05, 2016

GStreamer: GStreamer Conference 2016: Registration now open

(GStreamer)

About the GStreamer Conference

The GStreamer Conference 2016 will take place on 10-11 October 2016 in Berlin (Germany), and will take place in the same week as the Embedded Linux Conference Europe.

It is a conference for developers, decision-makers, and anyone else interested in the GStreamer multimedia framework and open source multimedia.

Registration now open

You can now register for the GStreamer Conference 2016 via the conference website.

September 05, 2016 09:00 AM

September 04, 2016

Jean-François Fortin Tam: 2016 GNOME Summit @ Montréal

Version française ci-bas.

Hi everyone, we're planning to host the GNOME Summit in Montréal this year, on October 8-9-10 (US Columbus Day week-end, Canadian Thanksgiving). It is an unconference-style event aimed at those who want to get involved at the deeply technical level of GNOME, but everyone is welcome and we're hoping to have a newcomers-oriented session as well as the "deep end of the pool". Please pre-register here, indicate any topics of interest you would like to propose for collective tackling during the summit, and indicate your travel and accommodation needs. I will try to secure the venue and figure out all the details surrounding the event soon. Oh, and if you're in any position to ask one of the GNOME-friendly companies for sponsorship, please do so and drop me an email at nekohayo at gmail. Thanks!

Bonjour tout le monde! Nous organisons le GNOME Summit à Montréal cette année, le 8, 9 et 10 octobre (fin de semaine de l’Action de Grâce au Canada). Maintenant à sa seizième édition consécutive, il s’agit d’un événement très technique et en profondeur, de style « unconference ». Il y a toutefois des contributeurs dans la communauté GNOME qui s’intéressent à y faire des ateliers pour mentorer des nouveaux contributeurs intéressés à se joindre au projet. De par sa nature internationale, l’événement se déroule principalement en anglais. Tous et toutes sont les bienvenu(e)s. Veuillez vous pré-enregistrer ici (ou m’envoyer un courriel à nekohayo à gmail) dès que possible. Si votre entreprise serait intéressée à soutenir l’événement, n’hésitez pas à m’en faire part également!

by nekohayo at September 04, 2016 03:30 PM

GStreamer: Orc 0.4.26 bug-fix release

(GStreamer)

The GStreamer team announces another maintenance bug-fix release of liborc, the Optimized Inner Loop Runtime Compiler. Main changes since the previous release:

  • Use 64 bit arithmetic to increment the stride if needed (fixing crashes in certain libgstvideo functions on OS X)
  • Fix generation of ModR/M / SIB bytes for the EBP, R12, R13 registers on X86/X86-64 (fixing crashes in compositor on Windows)
  • Fix test_parse unit test if no executable backend is available
  • Add orc-test path to the -uninstalled .pc file
  • Fix compiler warnings in the tests on OS X

Direct tarball download: orc-0.4.26.

September 04, 2016 10:00 AM

September 01, 2016

GStreamer: GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI, OMX 1.9.2 unstable release

(GStreamer)

The GStreamer team is pleased to announce the second release of the unstable 1.9 release series, which marks the feature freeze for 1.10. The 1.9 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6 and 1.8 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework. The unstable 1.9 release series will lead to the stable 1.10 release series in the next weeks. Any newly added API can still change until that point.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

September 01, 2016 09:00 AM

August 27, 2016

Sebastian Dröge: Getting RSS feeds for news websites that don't provide them

(Sebastian Dröge)

This is about a fun little project I already wrote a few months ago, completely unrelated to other things I’m usually writing about, but I thought it might be useful for others too.

When I moved to Greece last year, I had the problem that not many news websites provided local news in English and actually had an RSS feed. And having local news, next to global news about what happens all over the world, seems like a good way to know what is happening close around you.
The only useful website I found was Ekathimerini. There were two others that seemed useful to have, The Press Project (a crowd-funded project) and To Vima, neither of which has an RSS feed (or at least not for their English versions). Of course the real solution to this problem is to learn Greek, which I'm doing now, but it's going to take a while until I'm able to understand news articles without too much effort.

So what did I do? I wrote a small web service that parses the HTML of those websites and returns an RSS feed based on that, together with having it update regularly in the background and keeping some history of items. You can find it here: html-rss-proxy. The resulting RSS feeds seem to work very well in Liferea and Newsblur at least.

Since then it has also been extended with another news website, and generally it's rather simple to add new ones to the code. You just need to figure out how to extract the relevant information from the HTML of some website, convert that into code like here, and wrap it up in the general interface that the other parts of the code expect, like here.

If you add some website yourself, feel free to send me a pull request and I’ll merge it!

Technical bits

On the technical side, this seems to be one of the most stable pieces of software I've ever written. It has never crashed or otherwise failed since I started running it, and fortunately I also haven't had to update the HTML parsing code because of website changes yet.

It’s written in Haskell, using the Scotty web framework, Cereal serialization library for storing the history of the past articles, http-conduit for fetching the websites, and html-conduit for parsing the HTML. Overall a very pleasant experience, thanks to the language being very convenient to write and preventing most silly mistakes at compile-time, and the high quality of the libraries.

The only part I’m not yet too happy about is the actual HTML parsing, it seems to verbose and redudant. I might port it to another library at a later time, maybe xml-html-conduit-lens.

Update

After saying that I don’t like the HTML parsing, I actually reimplemented it around xml-html-conduit-lens now. The result is much shorter code and it resembles the structure of the HTML, as you can see here for example.

Considering that people always say that lens is so complicated, and this is more than simple getters, I have to say it went rather painlessly. Only the compiler errors when the types don't match are a bit tricky to understand at first.

by slomo at August 27, 2016 03:02 PM

August 21, 2016

Jean-François Fortin Tam: GUADEC 2016, laptops and tablets made to run GNOME, surprise Pitivi meeting

I went to Karlsruhe for the 2016 edition of GUADEC:

I arrived a couple of days early to attend my last GNOME Foundation board meeting, in one of the KIT's libraries. The building's uncanny brutalist architecture only added to the nostalgia of a two-year adventure coming to an end:

Then…

And so I made a new talk proposal at the last minute, which was upvoted fairly quickly by attendees:

The conference organizers counter-trolled me by inscribing it exactly like this onto the giant public schedule in the venue’s lobby:

The result was this talk: Laptops & Tablets Manufactured to Run a Pure GNOME. Go watch it now if you missed it. Note: during the talk’s Q&A session, I mistakenly thought that Purism‘s tablets were using an ARM architecture; they’re actually planned to be Intel-based. And to make things clear, for laptop keyboard layouts, Purism is currently offering US/UK, which are different physical layouts (different cutting etc.).

Also relevant to your interests if you’re into that whole privacy thing:

In my luggage, I carried ~20 kilograms of the Foundation’s annual reports. Some folks were skeptical and thought I should “only bring a few of them, people aren’t usually interested in the annual report.” Well, they were dead wrong: within one afternoon, the annual reports “sold out” like hotcakes. See also my tandem lightning talk (at the 29mins mark) to get a glimpse of how much work we put into designing the new annual report this year. Also, if you took one of them at the conference, remember: they’re precious little works of art and a powerful tool to convince people to become contributors or sponsors to support GNOME, so make sure to use them towards that goal!

I was very happy to see Mathieu and Alexandru from the Pitivi team deciding to attend at the last minute, even if it was just for one day. We spent time with Jakub Steiner discussing his workflow and wishlist for a “perfect” video editor:

Pitivi makes people smile!
Left to right: Alexandru Băluț, Jakub “Skywalker” Steiner, Mathieu Duponchelle.

I took a handful of photos of the BoFs and uploaded them to my gallery for GUADEC 2016. Licensed under the Creative Commons “by attribution” 4.0 as usual, so that the GNOME Engagement team can use them for GNOME promotional materials if needed. Make sure you do the same and list yours in here.

The Flatpak BoF session

Overall, this GUADEC was one of the most well-organized ones I’ve seen in years. I was floored by the amount of efforts and planning the local team put into this. They really deserve some kudos! Everything ran smoothly on the surface (I know that in such events there will always be odd situations happening here and there, but they dealt with them so efficiently that they were invisible). The team had a professional-grade two-way radio system to coordinate, a car and trailer to carry stuff around every day, made and reused food (pro-grade cafeteria counter metal containers = genius), a lifetime supply of IKEA mugs that got washed and reused frequently, tons of snacks, managed to pull in great sponsors even at the last minute, put signage in various parts of the city to guide people to the venue, had huge quantities of tasty dead animals (and plants) to eat at a very successful barbecue event, got an icecream vendor to come to the venue, and even filled up a pool and beanbags, for pete’s sake!

Thanks to the Chaos Computer Club for providing a flawless live video streaming, recording and publishing service. Very rarely did we have GUADEC videos published in a timely fashion in the past, let alone streamed live and with proper laptop video output capture as well as proper sound mixing. This is fantastic.

I should mention the closing night's event in the biergarten, with drinks and food sponsored by our friendly GStreamer experts Centricular, a very nice gesture that was well appreciated by everyone.

Survivors at rest in the snacks bunker on the last day of the BoFs

Thanks to everybody involved to make this event a success, and thanks to the GNOME Foundation for making it possible for me to attend.

by nekohayo at August 21, 2016 11:45 AM

August 19, 2016

GStreamer: GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI 1.8.3 stable release

(GStreamer)

The GStreamer team is pleased to announce the third bugfix release in the stable 1.8 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.8.x. For a full list of bugfixes see Bugzilla.

See /releases/1.8/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi.

August 19, 2016 10:00 AM

August 12, 2016

Christian Schaller: Want to make Linux run better on laptops?

(Christian Schaller)

So we have two job openings in the Red Hat desktop team. What we are looking for is people to help us ensure that Fedora and RHEL run great on various desktop hardware, with a focus on laptops. Since these jobs require continuous access to a lot of new and different hardware, we can not accept applications from remotees this time, but require you to work out of our office in Munich, Germany. We are looking for people who are not afraid to jump into a lot of different code and who like tinkering with new hardware. The hardware enablement here might include some kernel-level work, but will more likely involve improving higher-level stacks. So for example, if we have a new laptop where Bluetooth doesn't work, you would need to investigate and figure out if the problem is in the kernel, in the BlueZ stack or in our Bluetooth desktop parts.

This will be quite varied work, and we expect you to be part of a team which will be looking at anything from driver bugs, battery life issues, implementing new stacks, and biometric login, to enabling existing kernel or low-level library features in the user interface.

You can read more about the jobs at jobs.redhat.com. That link lists a Senior Engineer position, but we also have a Principal Engineer position open with id 53653; that one is not on the website as I post this, but should hopefully be there very soon.

Also, if you happen to be in the Karlsruhe area or at GUADEC this year: I will be here until Sunday, so you could come over for a chat. Feel free to email me at christian.schaller@gmail.com if you are interested in meeting up.

by uraeus at August 12, 2016 09:07 AM

August 11, 2016

Bastien Nocera: Flatpak cross-compilation support

(Bastien Nocera)

A couple of weeks ago, I hinted at a presentation that I wanted to do during this year's GUADEC, as a Lightning talk.

Unfortunately, I didn't get a chance to finish the work that I set out to do, encountering a couple of bugs that set me back. Hopefully this will get resolved post-GUADEC, so you can expect some announcements later on in the year.

At least one of the tasks I set to do worked out, and was promptly obsoleted by a nicer solution. Let's dive in.

How to compile for a different architecture

There are four possible solutions to compile programs for a different architecture:

  • Native compilation: get a machine of that architecture, install your development packages, and compile. This is nice when you have fast machines with plenty of RAM to compile on, usually developer boards, not so good when you target low-power devices.
  • Cross-compilation: install a version of GCC and friends that runs on your machine's architecture, but produces binaries for your target one. This is usually fast, but you won't be able to run the binaries created, so might end up with some data created from a different set of options, and won't be able to run the generated test suite.
  • Virtual Machine: you'd run a virtual machine for the target architecture, install an OS, and build everything. This is slower than cross-compilation, but avoids the problems you'd see in cross-compilation.
  • QEMU user-space emulation: the final option, one that's used more and more, mixing the last two solutions.

Using the QEMU user-space emulator

If you want to run just the one command, you'd do something like:

qemu-static-arm myarmbinary

Easy enough, but hardly something you want to try when compiling a whole application with library dependencies. This is where binfmt support in Linux comes into play. Register the ELF format for your target with that user-space emulator, and you can run myarmbinary without any commands before it.

One thing to note, though, is that this won't work as easily if the QEmu user-space emulator and the target executable are built as dynamic executables: QEmu will need to find the libraries for your architecture, usually x86-64, to launch itself, and the emulated binary will also need to find its libraries.

To solve that first problem, there are QEmu static binaries available in a number of distributions (Fedora support is coming). For the second one, the easiest would be if we didn't have to mix native and target libraries on the filesystem, in a chroot, or container for example. Hmm, container you say.

Running QEmu user-space emulator in a container

We have our statically compiled QEmu and a filesystem with our target binaries, and we've switched the root filesystem. Well, you try to run anything, and you get a bunch of errors. The problem is that there is a single binfmt configuration for the kernel, whether you're in the normal OS, or inside a container or chroot.

The Flatpak hack

This commit for Flatpak works around the problem. The binary for the emulator needs to have the right path so it can be found within the chroot'ed environment, and it will need to be copied there so it is accessible too, which is what this patch will do for you.

Follow the instructions in the commit, and test it out with this Flatpak script for GNU Hello.

$ TARGET=arm ./build.sh
[...]
$ ls org.gnu.hello.arm.xdgapp
918k org.gnu.hello.arm.xdgapp

Ready to install on your device!

The proper way

The above solution was built before it looked like the "proper way" was going to find its way in the upstream kernel. This should hopefully land in the upcoming 4.8 kernel.

Instead of launching a separate binary for each non-native invocation, this patchset allows the kernel to keep the emulator binary open, so it doesn't need to be copied into the container.

In short

With the work being done on Fedora's static QEmu user-space emulators, and the kernel feature that will land, we should be able to have a nice tickbox in Builder to build for any of the targets supported by QEmu.

Get cross-compiling!

by Bastien Nocera (noreply@blogger.com) at August 11, 2016 03:00 PM

August 10, 2016

Zeeshan Ali: Life is change

(Zeeshan Ali)

Quite a few major life events happened or are happening this summer, so I thought I'd blog about them and some of the experiences I had.

New job & new city/country

Yes, I found it hard to believe too that I'd ever be leaving Red Hat and the best manager I ever had (no offence to others, but competing with Matthias is just impossible), but I'll be moving to Gothenburg to join the Pelagicore folks as a Software Architect in just 2 weeks. I have always found Swedish to be a very cute language, so I'm looking forward to my attempt at learning it. If only I had learnt Swedish rather than Finnish when I was in Finland.

BTW, I'm selling all my furniture so if you're in London and need some furniture, get in touch!

Fresh helicopter pilot

So after two years of hard work and sinking myself into bank loans, I finally did it! Last week I passed the skills test for the Private Pilot License (Helicopters), and I'm currently waiting anxiously for my license to come through (it usually takes at least two weeks). Once I have it, I can rent helicopters and take passengers with me. I'll be able to share the costs with passengers, but I'm not allowed to make money out of it. The test was very tough, and I came very close to failing at one particular point. The good news is that despite me being very tense, and conditions being very windy on test day, the biggest negative point from my examiner was that I was being over-cautious and hence very slow. So I think it wasn't so bad.



There are a few differences from a driving test. A minor one is that in a driving test you are not expected to explain your steps but simply to execute them, whereas in the skills test for flying you're expected to think everything out loud. But the most major difference is that in a driving test you are not expected to drive on your own until you pass the test, whereas in a flying test you are required to have flown solo for at least 10 hours, which needs to include a solo cross-country flight of at least 100 nautical miles (185 km) involving 3 major aerodromes. Mine involved Elstree, Cranfield and Duxford. I've been GPS-logging while flying, so I can show you the log of my qualifying solo cross-country flight (click here to see details and notes):



I still have a long way to go towards a Commercial License, but at least now I can share the price with friends, so building hours towards the commercial license won't be so expensive (I hope). I've found a nice company in Gothenburg that trains in and rents helicopters, so I'm very much looking forward to flying over the coasts there. Wanna join? Let me know. :)

August 10, 2016 02:10 PM

August 09, 2016

Bastien Nocera: Blog backlog, Post 4, Headset fixes for Dell machines

(Bastien Nocera)

At the bottom of the release notes for GNOME 3.20, you might have seen the line:
If you plug in an audio device (such as a headset, headphones or microphone) and it cannot be identified, you will now be asked what kind of device it is. This addresses an issue that prevented headsets and microphones being used on many Dell computers.
Before I start explaining what this does, as a picture is worth a thousand words:


This selection dialogue is one you will get on some laptops and desktop machines when the hardware is not able to detect whether the plugged in device is headphones, a microphone, or a combination of both, probably because it doesn't have an impedance detection circuit to figure that out.

This functionality was integrated into Unity's gnome-settings-daemon version a couple of years ago, written by David Henningsson.

The code that existed for this functionality was completely independent, not using any of the facilities available in the media-keys plugin for volume keys, and it could probably have been split out as an external binary with very little effort.

After a bit of to and fro, most of the sound backend functionality was merged into libgnome-volume-control, leaving just 2 entry points, one to signal that something was plugged into the jack, and another to select which type of device was plugged in, in response to the user selection. This means that the functionality should be easily implementable in other desktop environments that use libgnome-volume-control to interact with PulseAudio.

Many thanks to David Henningsson for the original code, and his help integrating the functionality into GNOME, Bednet for providing hardware to test and maintain this functionality, and Allan, Florian and Rui for working on the UI notification part of the functionality, and wiring it all up after I abandoned them to go on holidays ;)

by Bastien Nocera (noreply@blogger.com) at August 09, 2016 10:49 AM

August 06, 2016

Jean-François Fortin Tam: Vice-President's Report — The State of the GNOME Foundation

Hi! Long time no see. My blog has been pretty quiet in recent months, in big part due to my extended commitment on the GNOME Foundation's Board of Directors (for a second year without an executive director present to take some of the load) and the various business engagements I've had.

Generally speaking, this year was a bit less intense than the one before it (we didn't have to worry about a legal battle with a giant corporation this time around!), although we did end up touching a fair amount of legal matters, such as trademark agreements. One big item we got cleared was the Ubuntu GNOME trademark agreement. We also welcomed businesses that wanted to sell GNOME-related merchandise; you can find them listed here. Supporting them by purchasing GNOME-related items also supports the Foundation, with a small percentage shared as royalties.

In the summer of 2015, I thought I'd take a break from my presidency of the year before, so I was pretty happy to have a new president and vice-president starting at GUADEC 2015, with me just being a regular board member. Some months later, Christian Hergert had to step down from his role as vice-president because he joined Red Hat, and the GNOME Foundation has a rule where the board of directors cannot have more than two members (out of seven) from the same company/employer. I took over his role as vice-president then.

And so it went:

Pictured: me, helping Shaun tackle the big things. Sonic boom!

The board has done a lot of work in recent months. In addition to the legal agreements mentioned above, since my last report we’ve held 50 meetings (double the amount from last year; it seems bi-weekly meetings were not enough to cover all that we had to discuss!), over 2400 emails were exchanged on the board mailing list, and we wrote over 24,800 lines of discussion on our IRC channel.

When the 2016 elections came up, I thought it was time to let new blood come in and participate. I needed to move on and focus on growing my own business, which I had been neglecting for two years, anyway. As the new board came in, I have been gradually winding down (my role after the election and until GUADEC is mostly advisory, as I do not hold voting powers).

I am excited about the team that composes the new Board of Directors and I trust that they will do a great job. The GNOME Foundation always needs a team of experienced, positive and energetic people to come together to think, discuss and make decisions regarding the various challenges it faces. As I wrote during last year's election period:

For this to work, we need people that are what I call “powerhouses”, because the GNOME Foundation Board is an “active” board. This means great thinkers and proactive doers ready to deal with anything while being very capable in the board room.

The best metaphor I have for a healthy GNOME board is taken from role-playing games: a well-coordinated “level 45-70” party that will not be afraid to crawl dungeons together for a year. You need polyvalent classes just like you need specialists (analytic mages, “massive damage” knights, resourceful healers, quick & agile rangers, etc.).

So if this makes sense to anyone, I’m a hybrid mage-knight with a ton of HP/MP potions and phoenix feathers ;)

Party line-up, by Laikkuseia

During the winter and spring of 2016, I was also heavily involved in the engagement team's effort to redesign the Annual Report and get it ready in time for GUADEC. I'll bring copies along in my luggage. Do you want one? Let me know by tomorrow (Sunday).

(note that I can also bring copies of last year’s report if you’d like to have them for your vintage collection)

I am looking forward to seeing all of you at GUADEC next week; don’t forget to register! And look forward to some very interesting news coming up from my side in the near future.

by nekohayo at August 06, 2016 01:06 PM

July 31, 2016

Nirbheek Chauhan: GStreamer and Meson: A New Hope

Anyone who has written a non-trivial project using Autotools has realized that (and wondered why) it requires you to be aware of 5 different languages. Once you spend enough time with the innards of the system, you begin to realize that it is nothing short of an astonishing feat of engineering. Engineering that belongs in a museum. Not as part of critical infrastructure.

Autotools was created in the 1980s and caters to the needs of an entirely different world of software from what we have at present. Worse yet, it carries over accumulated cruft from the past 40 years — ostensibly for better “cross-platform support”, but that “support” is mostly for extinct platforms that five people in the whole world remember.

We've learned how to make it work for most cases that concern FOSS developers on Linux, and it can be made to limp along on other platforms that the majority of people use, but it does not inspire confidence or really anything except frustration. People will not like your project or contribute to it if the build system takes 10x longer to compile on their platform of choice, does not integrate with the preferred IDE, and requires knowledge arcane enough to be indistinguishable from cargo-cult programming.

As a result there have been several (terrible) efforts at replacing it and each has been either incomplete, short-sighted, slow, or just plain ugly. During my time as a Gentoo developer in another life, I came in close contact with and developed a keen hatred for each of these alternative build systems. And so I mutely went back to Autotools and learned that I hated it the least of them all.

Sometime last year, Tim heard about this new build system called ‘Meson’ whose author had created an experimental port of GStreamer that built it in record time.

Intrigued, he tried it out and found that it finished suspiciously quickly. His first instinct was that it was broken and hadn’t actually built everything! Turns out this build system written in Python 3 with Ninja as the backend actually was that fast. About 2.5x faster on Linux and 10x faster on Windows for building the core GStreamer repository.

Upon further investigation, Tim and I found that Meson also has really clean generic cross-compilation support (including iOS and Android), runs natively (and just as quickly) on OS X and Windows, supports GNU, Clang, and MSVC toolchains, and can even (configure and) generate Xcode and Visual Studio project files!

But the critical thing that convinced me was that the creator Jussi Pakkanen was genuinely interested in the use-cases of widely-used software such as Qt, GNOME, and GStreamer and had already added support for several tools and idioms that we use — pkg-config, gtk-doc, gobject-introspection, gdbus-codegen, and so on. The project places strong emphasis on both speed and ease of use and is quite friendly to contributions.

Over the past few months, Tim and I at Centricular have been working on creating Meson ports for most of the GStreamer repositories and the fundamental dependencies (libffi, glib, orc) and improving the MSVC toolchain support in Meson.

We are proud to report that you can now build GStreamer on Linux using the GNU toolchain and on Windows with either MinGW or MSVC 2015 using Meson build files that ship with the source (building upon Jussi's initial ports).

Other toolchain/platform combinations haven't been tested yet, but they should work in theory (minus bugs!), and we intend to test and bugfix all the configurations supported by GStreamer (Linux, OS X, Windows, iOS, Android) before proposing it for inclusion as an alternative build system for the GStreamer project.

You can either grab the source yourself and build everything, or use our (with luck, temporary) fork of GStreamer's cross-platform build aggregator Cerbero.

Update: I wrote a new post with detailed steps on how to build using Cerbero and generate Visual Studio project files.

Personally, I really hope that Meson gains widespread adoption. Calling Autotools the Xorg of build systems is flattery. It really is just a terrible system. We really need to invest in something that works for us rather than against us.

PS: If you just want a quick look at what the build system syntax looks like, take a look at this or the basic tutorial.
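
For the impatient, here's roughly what a trivial Meson build definition looks like; a made-up hello-world sketch of mine, not taken from the GStreamer ports:

  project('hello', 'c')
  glib_dep = dependency('glib-2.0')
  executable('hello', 'hello.c', dependencies : glib_dep)

That is the entire build definition; the Autotools equivalent needs both a configure.ac and a Makefile.am.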

by Nirbheek (noreply@blogger.com) at July 31, 2016 02:22 AM

July 27, 2016

Nirbheek Chauhan: Building and Developing GStreamer using Visual Studio

Two months ago, I talked about how we at Centricular have been working on a Meson port of GStreamer and its basic dependencies (glib, libffi, and orc) for various reasons — faster builds, better cross-platform support (particularly Windows), better toolchain support, ease of use, and for a better build system future in general.

Meson also has built-in support for things like gtk-doc, gobject-introspection, translations, etc. It can even generate Visual Studio project files at build time so projects don't have to expend resources maintaining those separately.

Today I'm here to share instructions on how to use Cerbero (our “aggregating” build system) to build all of GStreamer on Windows using MSVC 2015 (wherever possible). Note that this means you won't see any Meson invocations at all because Cerbero does all that work for you.

Note that this is all still unofficial and has not been proposed for inclusion upstream. We still have a few issues that need to be ironed out before we can do that¹.

First, you need to set up the environment on Windows by installing a bunch of external tools: Python 2, Python 3, Git, etc. You can find the instructions for that here:

https://github.com/centricular/cerbero#windows

This is very similar to the old Cerbero instructions, but some new tools are needed. Once you've done everything there (Visual Studio especially takes a while to fetch and install itself), the next step is fetching Cerbero:

$ git clone https://github.com/centricular/cerbero.git

This will clone and check out the meson-1.8 branch, which will build GStreamer 1.8.x. Next, we bootstrap it:

https://github.com/centricular/cerbero#bootstrap

Now we're (finally) ready to build GStreamer. Just invoke the package command:

python2 cerbero-uninstalled -c config/win32-mixed-msvc.cbc package gstreamer-1.0

This will build all the `recipes` that constitute GStreamer, including the core libraries and all the plugins along with their external dependencies. This comes to about 76 recipes. Of all these recipes, only the following are ported to Meson and built with MSVC:

bzip2.recipe
orc.recipe
libffi.recipe (only 32-bit)
glib.recipe
gstreamer-1.0.recipe
gst-plugins-base-1.0.recipe
gst-plugins-good-1.0.recipe
gst-plugins-bad-1.0.recipe
gst-plugins-ugly-1.0.recipe

The rest still mostly use Autotools, plain GNU make, or CMake. Almost all of those are still built with MinGW. The only exception is libvpx, which uses its own custom make-based build system but is built with MSVC.

Eventually we want to build everything including all external dependencies with MSVC by porting everything to Meson, but as you can imagine it's not an easy task. :-)

However, even with just these recipes, there is a large improvement in how quickly you can build all of GStreamer inside Cerbero on Windows. For instance, the time required for building gstreamer-1.0.recipe, which builds gstreamer.git, went from 10 minutes to 45 seconds. It is now easier to do GStreamer development on Windows since rebuilding doesn't take an inordinate amount of time!

As a further improvement for doing GStreamer development on Windows, for all these recipes (except libffi, for complicated reasons) you can also generate Visual Studio 2015 project files and use them from within Visual Studio for editing, building, and so on.
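
For reference, when driving Meson directly rather than through Cerbero, the project files come from Meson's Visual Studio backend, selected at configure time. Something along these lines, assuming a Meson version that ships the vs2015 backend, run from a Visual Studio command prompt in the source directory:

  meson --backend=vs2015 builddir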

Go ahead, try it out and tell me if it works for you!

As an aside, I've also been working on some proper in-depth documentation of Cerbero that explains how the tool works, the recipe format, supported configurations, and so on. You can see the work-in-progress if you wish to.

1. Most importantly, the tests cannot be built yet because GStreamer bundles a very old version of libcheck. I'm currently working on fixing that.

by Nirbheek (noreply@blogger.com) at July 27, 2016 03:58 PM

July 19, 2016

Bastien Nocera: GUADEC Flatpak contest

(Bastien Nocera) I will be presenting a lightning talk during this year's GUADEC, and running a contest related to what I will be presenting.

Contest

To enter the contest, you will need to create a Flatpak for a piece of software that hasn't been flatpak'ed up to now (application, runtime or extension), hosted in a public repository.

You will have to send me an email about the location of that repository.

I will choose a winner from among the participants on the eve of the lightning talks, based on criteria including, but not limited to, the difficulty of packaging, the popularity of the software packaged, and its potential for redistribution.

You can find plenty of examples (and a list of already packaged applications and runtimes) on this Wiki page.
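
If you have never written one, a flatpak-builder manifest is just a small JSON file. Here is a rough sketch; the app id, module name and URL are made up for illustration, and key names may differ slightly between early flatpak-builder releases:

  {
    "app-id": "org.example.Hello",
    "runtime": "org.gnome.Platform",
    "runtime-version": "3.20",
    "sdk": "org.gnome.Sdk",
    "command": "hello",
    "modules": [
      {
        "name": "hello",
        "sources": [
          { "type": "git", "url": "https://example.com/hello.git" }
        ]
      }
    ]
  }

You would then point flatpak-builder at it, along the lines of "flatpak-builder build-dir org.example.Hello.json".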

Prize

A piece of hardware that you can use to replicate my presentation (or to replicate my attempts at a presentation, depending ;). You will need to be present during my presentation at GUADEC to claim your prize.

Good luck to one and all!

by Bastien Nocera (noreply@blogger.com) at July 19, 2016 03:39 PM

July 12, 2016

GStreamer: GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate 1.9.1 unstable release (binaries)

(GStreamer)

Pre-built binary images of the 1.9.1 unstable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

July 12, 2016 12:00 AM

July 06, 2016

GStreamer: GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI, OMX 1.9.1 unstable release

(GStreamer)

The GStreamer team is pleased to announce the first release of the unstable 1.9 release series. The 1.9 release series adds new features on top of the 1.0, 1.2, 1.4, 1.6 and 1.8 series and is part of the API- and ABI-stable 1.x release series of the GStreamer multimedia framework. The unstable 1.9 release series will lead to the stable 1.10 release series in the coming weeks. Any newly added API can still change until that point.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the coming days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

July 06, 2016 12:00 PM

July 01, 2016

Víctor Jáquez: VA-API and DRM/KMS in MinnowBoard

In 2012 I started to work on a video renderer for GStreamer which uses the DRM/KMS kernel subsystem directly to display images. I even blogged about it, but I never finished it.

Nonetheless, in December last year a customer asked us to finish the element, this time focusing on i.MX6 rather than OMAP4. Consequently, early this year, kmssink was merged upstream.

The video sink has some nice features, such as DMA-buf importing through the PRIME kernel interface. This feature makes a zero-copy path possible when the video decoder delivers dmabuf-backed frames. For this particular project, the kernel we used supports the CODA media subsystem, and through the GStreamer element v4l2videodec we could link pipelines that negotiate dmabuf sharing, and thus play videos very efficiently.
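
A zero-copy pipeline of that shape looks roughly like this; a sketch with a made-up file name, and the demuxer/parser elements depend on the container and codec:

  gst-launch-1.0 filesrc location=video.mp4 ! qtdemux ! h264parse ! v4l2videodec ! kmssink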

During the last GStreamer Hackfest, Nicolas Dufresne got the MFC media subsystem (for Exynos) working with v4l2videodec and kmssink, but I don't remember if he also got the dmabuf sharing working.

All in all, kmssink had only been tested on a couple of ARM devices, so I wondered, as a gstreamer-vaapi co-maintainer, whether it also works on x86 devices.

Some colleagues at Igalia use a MinnowBoard to test WebKit4Wayland, so I borrowed it, installed a minimal Fedora 23 on it, and, surprisingly, found out that it has nice VA-API support:

   $ vainfo
   error: can't connect to X server!
   libva info: VA-API version 0.38.1
   libva info: va_getDriverName() returns 0
   libva info: Trying to open /usr/lib64/dri/i965_drv_video.so
   libva info: Found init function __vaDriverInit_0_38
   libva info: va_openDriver() returns 0
   vainfo: VA-API version: 0.38 (libva 1.6.2)
   vainfo: Driver version: Intel i965 driver for Intel(R) Bay Trail - 1.6.2
   vainfo: Supported profile and entrypoints
         VAProfileMPEG2Simple            : VAEntrypointVLD
         VAProfileMPEG2Simple            : VAEntrypointEncSlice
         VAProfileMPEG2Main              : VAEntrypointVLD
         VAProfileMPEG2Main              : VAEntrypointEncSlice
         VAProfileH264ConstrainedBaseline: VAEntrypointVLD
         VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
         VAProfileH264Main               : VAEntrypointVLD
         VAProfileH264Main               : VAEntrypointEncSlice
         VAProfileH264High               : VAEntrypointVLD
         VAProfileH264High               : VAEntrypointEncSlice
         VAProfileH264StereoHigh         : VAEntrypointVLD
         VAProfileVC1Simple              : VAEntrypointVLD
         VAProfileVC1Main                : VAEntrypointVLD
         VAProfileVC1Advanced            : VAEntrypointVLD
         VAProfileNone                   : VAEntrypointVideoProc
         VAProfileJPEGBaseline           : VAEntrypointVLD


Afterwards, I compiled GStreamer using gst-uninstalled in order to have kmssink and the master branch of gstreamer-vaapi. And it worked! I had hardware-accelerated decoding with VA-API and video rendering through KMS/DRM. But, sadly, no zero-copy, since, in order to have dmabuf sharing, we first have to finish and merge bug 755072 in gstreamer-vaapi.
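
The quickest way to try that combination is to let playbin autoplug the VA-API decoder from the registry and force the KMS sink; a sketch, with a made-up path:

  gst-launch-1.0 playbin uri=file:///path/to/video.mp4 video-sink=kmssink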

Playing the typical Big Buck Bunny video (1080p, H.264), playback consumes less than 50% of the CPU, compared with 100% CPU usage (and a lot of dropped frames) with software decoding.

I recorded a small video of the experiment as proof.

Update: Javier Martinez told me on Twitter that MFC can do dma-buf export (since it uses vb2) and Exynos DRM can import, but he couldn't get zero-copy to work due to missing format support.

by vjaquez at July 01, 2016 05:00 PM

June 30, 2016

Nicolas Dufresne: GStreamer Echo Canceller

For a long time I believed that echo cancellers had no place inside GStreamer. The theory was that GStreamer was too high-level and would never be able to provide accurate enough delay information for any canceller to work. With a fairly simple test, I could quickly confirm that the reported latency is often off by a period (generally 10ms). This isn't strictly GStreamer's fault and is not in any way catastrophic for the general playback experience.

With the arrival of WebRTC in browsers, it most likely became apparent that, to be cross-platform, browsers needed their own canceller. That's exactly what happened in libWebRTC (formerly libjingle, used in both Firefox and Chrome to implement WebRTC). They implemented an echo canceller that accepts an approximate delay, and this changes everything for GStreamer.

At Collabora, I recently had the opportunity to implement an echo canceller based on WebRTC Audio Processing. The main motivation was that the canceller on the hardware DSP we had didn't work due to a hardware bug. A lot of those boards had been produced and no rework was possible. To save these boards, we decided to try a software echo canceller. Even though it uses a fair amount of CPU, the experiment was a success. I have since cleaned up the code, and the new elements are now available in GStreamer Plugins Bad.

How does it work?

Echo

The first step is to understand what the echo is. In a phone call on speakerphone, your microphone records both your voice and the far-end voices. The side effect is that you send the far-end listeners your voice along with a degraded copy of their own voices from a moment earlier (the echo). To avoid this echo, you need to monitor the far-end stream that you are playing back and “subtract” it from the recorded stream. In practice it's much more complex, since the signal is deteriorated by the speaker and the microphone. You also need to figure out the delays and hint the canceller, otherwise you may end up with a terrible startup time, or it may simply not work.

The implementation was greatly inspired by an experiment Olivier Crête did in 2008 using Speex DSP. I must admit, I never really understood his way of synchronizing the streams, and I pretty much ignored all the code that wasn't GStreamer-specific. The design works this way: you have a DSP element (webrtcdsp) that processes the recorded stream, and a probe element (webrtcechoprobe) that analyses the far-end stream before it is played back. Due to a WebRTC library limitation, those two elements transform the input buffers into chunks of 10ms. This is done with the help of GstAdapter. On the probe side, we push buffers into the adapter with timestamps transformed to running time. This time, plus the pipeline latency, gives us the moment in running time when the buffer should be heard by the microphone. We then synchronize the far-end data against the recorded data and let the WebRTC Audio Processing library do its magic. A simple way of testing the element is an echo loop:

  gst-launch-1.0 pulsesrc ! webrtcdsp ! webrtcechoprobe ! pulsesink

Without the canceller, this pipeline would create a lot of echo, and probably end in loud feedback if your microphone volume is high enough. With the canceller, you should instead hear only one echo. It behaves a bit like a sound monitor, but with high latency and the side effect of fading monotonic frequencies in and out. After all, this is not what the algorithm has been designed for. Try it in your real audio call application; that's where you will most likely get the best results.
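
In a real application the two elements sit on different paths: webrtcdsp on the capture side, before your encoder, and webrtcechoprobe on the playback side, just before the sink. As a rough sketch of that arrangement, with fakesink standing in for the sending side and a test tone standing in for the far end (by default webrtcdsp pairs with the first probe in the pipeline through its "probe" property):

  gst-launch-1.0 pulsesrc ! webrtcdsp ! fakesink \
      audiotestsrc ! audioconvert ! webrtcechoprobe ! pulsesink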

Before I conclude, there is a good reason why I called the element DSP rather than AEC. WebRTC Audio Processing is much more than just an echo canceller. In fact, it implements a wide variety of filters: noise suppression, voice activity detection, etc. Currently we enable only a subset of them, but I'm definitely looking forward to enabling more (if not all) of the features from this library. I also encourage contributions. This work was only possible because of the great effort Arun Raghavan has put into extracting the echo canceller from the WebRTC project and creating a standalone library usable by all. If you are interested in what cool features could be added in the future, have a look at Arun's blog about beamforming. And lastly, thanks to my colleagues who had to suffer through a few weeks of me speaking to my computer while listening to my own echo.

by Nicolas at June 30, 2016 10:14 PM

June 29, 2016

Arun Raghavan: Beamforming in PulseAudio

In case you missed it — we got PulseAudio 9.0 out the door, with the echo cancellation improvements that I wrote about. Now is probably a good time for me to make good on my promise to expand upon the subject of beamforming.

As with the last post, I’d like to shout out to the wonderful folks at Aldebaran Robotics who made this work possible!

Beamforming

Beamforming as a concept is used in various aspects of signal processing including radio waves, but I’m going to be talking about it only as applied to audio. The basic idea is that if you have a number of microphones (a mic array) in some known arrangement, it is possible to “point” or steer the array in a particular direction, so sounds coming from that direction are made louder, while sounds from other directions are rendered softer (attenuated).

Practically speaking, it should be easy to see the value of this on a laptop, for example, where you might want to focus a mic array to point in front of the laptop, where the user probably is, and suppress sounds that might be coming from other locations. You can see an example of this in the webcam below. Notice the grilles on either side of the camera — there is a microphone behind each of these.

Webcam with 2 mics

This raises the question of how this effect is achieved. The simplest approach is called “delay-sum beamforming”. The key idea in this approach is that if we steer an array of microphones at a particular angle, the sound we want to pick up will reach each microphone at a slightly different time. This is illustrated below. The image is taken from this great article describing the principles and math in a lot more detail.

Delay-sum beamforming

In this figure, you can see that the sound from the source we want to listen to reaches the top-most microphone slightly before the next one, which in turn captures the audio slightly before the bottom-most microphone. If we know the distance between the microphones and the angle to which we want to steer the array, we can calculate the additional distance the sound has to travel to each microphone.

The speed of sound in air is roughly 340 m/s, and thus we can also calculate how much of a delay occurs between the same sound reaching each microphone. The signal at the first two microphones is delayed using this information, so that we can line up the signal from all three. Then we take the sum of the signal from all three (actually the average, but that’s not too important).

The signal from the direction we’re pointing in is going to be strongly correlated, so it will turn out loud and clear. Signals from other directions will end up being attenuated because they will only occur in one of the mics at a given point in time when we’re summing the signals — look at the noise wavefront in the illustration above as an example.
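
To put the delay-sum idea in equation form (my notation, not the article's): with M microphones, x_m[n] the signal at microphone m, and tau_m the per-mic delay in samples computed from the geometry and steering angle, the steered output is simply

  y[n] = \frac{1}{M} \sum_{m=1}^{M} x_m[n - \tau_m]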

Implementation

(this section is a bit more technical than the rest of the article, feel free to skim through or skip ahead to the next section if it’s not your cup of tea!)

The devil is, of course, in the details. Given the microphone geometry and steering direction, calculating the expected delays is relatively easy. We capture audio at a fixed sample rate — let’s assume this is 32000 samples per second, or 32 kHz. That translates to one sample every 31.25 µs. So if we want to delay our signal by 125µs, we can just add a buffer of 4 samples (4 × 31.25 = 125). Sound travels about 4.25 cm in that time, so this is not an unrealistic example.

Now, instead, assume the signal needs to be delayed by 80 µs. This translates to 2.56 samples. We’re working in the digital domain — the mic has already converted the analog vibrations in the air into digital samples that have been provided to the CPU. This means that our buffer delay can either be 2 samples or 3, not 2.56. We need another way to add a fractional delay (else we’ll end up with errors in the sum).
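
To make the arithmetic concrete, here is a toy computation of the split between the whole-sample buffer delay and the fractional remainder, using the numbers from the text (a sketch in C, not PulseAudio code):

  /* Split a desired delay into a whole-sample buffer delay and a
   * fractional part that needs a fractional-delay filter. */
  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      const double sample_rate = 32000.0; /* 32 kHz capture */
      const double delay_us = 80.0;       /* desired delay in microseconds */

      double delay_samples = delay_us * 1e-6 * sample_rate; /* 2.56 */
      int whole = (int) floor(delay_samples);               /* 2, via a buffer */
      double frac = delay_samples - whole;                  /* 0.56, via a filter */

      printf("%.2f samples = %d whole + %.2f fractional\n",
             delay_samples, whole, frac);
      return 0;
  }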

There is a fair amount of academic work describing methods to perform filtering on a sample to provide a fractional delay. One common way is to apply an FIR filter. However, to keep things simple, the method I chose was the Thiran approximation — the literature suggests that it performs the task reasonably well, and has the advantage of not having to spend a whole lot of CPU cycles first transforming to the frequency domain (which an FIR filter requires) (edit: converting to the frequency domain isn't necessary, thanks to the folks who pointed this out).

I’ve implemented all of this as a separate module in PulseAudio as a beamformer filter module.

Now it's time for a confession. I'm a plumber, not a DSP ninja. My delay-sum beamformer doesn't do a very good job. I suspect it's partly a limitation of the delay-sum approach, partly the use of an IIR filter (which the Thiran approximation is), and it's also entirely possible there is a bug in my fractional delay implementation. Reviews and suggestions are welcome!

A Better Implementation

The astute reader has, by now, realised that we are already doing a bunch of processing on incoming audio during voice calls — I've written in the previous article about how the webrtc-audio-processing engine provides echo cancellation, automatic gain control, voice activity detection, and a bunch of other features.

Another feature that the library provides is — you guessed it — beamforming. The engineers at Google (who clearly are DSP ninjas) have a pretty good beamformer implementation, and this is also available via module-echo-cancel. You do need to configure the microphone geometry yourself (which means you have to manually load the module at the moment). Details are on our wiki (thanks to Tanu for that!).
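
For reference, loading it by hand looks something like the following; a sketch only, where the geometry is a made-up two-mic array 4 cm apart (x,y,z coordinates in metres per mic), and the authoritative argument list is on the wiki page mentioned above:

  load-module module-echo-cancel aec_method=webrtc aec_args="beamforming=1 mic_geometry=-0.02,0,0,0.02,0,0"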

How well does this work? Let me show you. The image below is me talking to my laptop, which has two microphones about 4cm apart, on either side of the webcam, above the screen. First I move to the right of the laptop (about 60°, assuming straight ahead is 0°). Then I move to the left by about the same amount (the second speech spike). And finally I speak from the center (a couple of times, since I get distracted by my phone).

The upper section represents the microphone input — you’ll see two channels, one corresponding to each mic. The bottom part is the processed version, with echo cancellation, gain control, noise suppression, etc. and beamforming.

WebRTC beamforming

You can also listen to the actual recordings …

https://arunraghavan.net/wp-content/uploads/aec_rec.mp3

… and the processed output.

https://arunraghavan.net/wp-content/uploads/aec_bf.mp3

Feels like black magic, doesn’t it?

Finishing thoughts

The webrtc-audio-processing-based beamforming is already available for you to use. The downside is that you need to load the module manually, rather than have this automatically plugged in when needed (because we don’t have a way to store and retrieve the mic geometry). At some point, I would really like to implement a configuration framework within PulseAudio to allow users to set configuration from some external UI and have that be picked up as needed.

Nicolas Dufresne has done some work to wrap the webrtc-audio-processing library functionality in a GStreamer element (and this is in master now). Adding support for beamforming to the element would also be good to have.

The module-beamformer bits should be a good starting point for folks who want to wrap their own beamforming library and have it used in PulseAudio. Feel free to get in touch with me if you need help with that.

by Arun at June 29, 2016 05:22 AM

June 21, 2016

Bastien Nocera: AAA game, indie game, card-board-box

(Bastien Nocera)
Early bird gets eaten by the Nyarlathotep
 
The more adventurous of you can use those (designed to be embeddable) Lua scripts to transform your DRM-free GOG.com downloads into Flatpaks.

The long-term goal would obviously be for this not to be needed, and for online game stores to ship ".flatpak" files with metadata, so that we know what they are in GNOME Software, the right voice/subtitle language gets picked up automatically, and their extra music and documents are presented in the respective GNOME applications.
 
But in the meanwhile, and for the sake of the games already out there, there's flatpak-games. Note that lua-archive is still fiddly.
 
Support for a few more Humble Bundle formats (some are already supported), for grab-all RPMs and Debs, and for those old Loki games is also planned.
 
It's late here, I'll be off to do some testing I think :)

PS: My personal collection already contains enough programs that fail to create bundles that I don't need "game donations", but I'm still looking for original copies of Loki games. Drop me a message if you can spare one!

by Bastien Nocera (noreply@blogger.com) at June 21, 2016 08:57 PM

Christian Schaller: Fedora Workstation 24 is out and Flatpak is now officially launched!

(Christian Schaller)

This is a very exciting day for me, as two major projects I am deeply involved with are having a major launch. First of all, Fedora Workstation 24 is out, which crosses a few critical milestones for us. Maybe most visible is that this is the first time you can use the new graphical update mechanism in GNOME Software to take you from Fedora Workstation 23 to Fedora Workstation 24. This means that when you open GNOME Software it will show you an option to do a system upgrade to Fedora Workstation 24. We've been testing and doing a lot of QA work around this feature, so my expectation is that it will provide a smooth upgrade experience for you.
Fedora System Upgrade

The second major milestone is that we feel Wayland is now in a state where the vast majority of users should be able to use it on a day-to-day basis. We've been working through the kinks and resolving many corner cases during the previous six months, with a lot of effort put into making sure that the interaction between applications running natively on Wayland and those running using XWayland is smooth. For instance, one item we crossed off the list early in this development cycle was adding middle-mouse-button cut and paste, as we know that is a crucial feature for many long-time Linux users looking to make the switch. So once you have updated, I ask all of you to try switching to the Wayland session by clicking on the little cogwheel in the login screen, so that we get as much testing as possible of Wayland during the Fedora Workstation 24 lifespan. Feedback provided by our users during the Fedora Workstation 24 lifecycle will be crucial information for making the final decision about Wayland as the default for Fedora Workstation 25. Of course the team will be working ardently during Fedora Workstation 24 to make sure we find and address any niggling issues left.

In addition to that there is also of course a long list of usability improvements, new features and bugfixes across the desktop, both coming in from our desktop team at Red Hat and from the GNOME community in general.

There was also the formal announcement of Flatpak today (be sure to read that press release), which is the great new technology for shipping desktop applications. Those of you who have read my previous blog entries have probably seen me talking about this technology under its old name, xdg-app. Flatpak is an incredible piece of engineering, designed by Alexander Larsson, that we developed alongside a lot of other components.
As Matthew Garrett pointed out not long ago, unless we move away from X11 we cannot really produce a secure desktop container technology, which is why we kept such a high focus on pushing Wayland forward for the last year. It is also why we invested so much time into Pinos, which is, as I mentioned in my original announcement of the project, our video equivalent of PulseAudio (and yes, a proper Pinos website is getting close :). Wim Taymans, who created Pinos, has also been working on patches to PulseAudio to make it more suitable for use with sandboxed applications, and those patches have recently been taken over by community member Ahmed S. Darwish, who is trying to get them ready for merging into the main codebase.

We are feeling very confident about Flatpak, as it has a lot of critical features designed in from the start. First of all, it was built to be a cross-distribution solution from day one, meaning that making Flatpak run on any major Linux distribution out there should be simple. We already have Simon McVittie working on Debian support, we have Arch support, and there is also an Ubuntu PPA that the team put together that allows you to run Flatpaks fully featured on Ubuntu. And Endless Mobile has chosen Flatpak as their application delivery format going forward for their operating system.

We use the same base technologies as Docker, such as namespaces, bind mounts and cgroups, for Flatpak, which means that any system out there wanting to support Docker images would also have the necessary components to support Flatpaks. It also means that we will be able to take advantage of the investment and development happening around server-side containers.

Flatpak also makes heavy use of another exciting technology, OSTree, which was originally developed by Colin Walters for GNOME. This technology is seeing a lot of investment and development these days, as it became the foundation for Project Atomic, which is Red Hat's effort to create an enterprise-ready platform for running server-side containers. OSTree provides us with a lot of important features, like efficient storage of application images and a very efficient transport mechanism. For example, one core feature OSTree brings us is de-duplication of files, which means you don't need to keep multiple copies of the same file on your disk: if ten Flatpak images share the same file, you only keep one copy of it on your local disk.

Another critical feature of Flatpak is its runtime separation, which basically means that you can have different runtimes for different families of use cases. For instance, you can have a GNOME runtime that allows all your GNOME applications to share a lot of libraries, yet gives you a single point for security updates to those libraries. So while we don't want a forest of runtimes, it does allow us to create a few important ones to cover certain families of applications and thus reduce disk usage further and improve system security.
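
From the command line, the runtime/application split looks roughly like this; a sketch where the remote name, URL and versions follow the GNOME instructions of the time, and the exact remote-add syntax varied across early releases:

  flatpak remote-add --from gnome https://sdk.gnome.org/gnome.flatpakrepo
  flatpak install gnome org.gnome.Platform 3.20
  flatpak install gnome org.gnome.gedit stable
  flatpak run org.gnome.gedit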

Going forward we are looking at a lot of exciting features for Flatpak. The most important of these is the thing I mentioned earlier, Portals.
In the current release of Flatpak you can choose between two options: either your application is completely sandboxed, or it is not sandboxed at all. Portals are basically the way you can sandbox your application yet still allow it to interact with your general desktop and storage. For instance, Pinos' and PulseAudio's role for containers is to provide such portals for handling audio and video. Of course more portals are needed, and during the GTK+ hackfest in Toronto last week a lot of time was spent on mapping out the roadmap for Portals. Expect more news about Portals as they are developed.

I want to mention that we of course realize that a new technology like Flatpak should come with a high-quality developer story, which is why Christian Hergert has been spending time working on support for Flatpak in the Builder IDE. There is some support in already, but expect to see us flesh this out significantly over the next months. We are also working on adding more documentation to the Flatpak website, to cover how to integrate more build systems and the like with Flatpak.

And last, but not least, Richard Hughes has been making sure we have great Flatpak support in GNOME Software in Fedora Workstation 24, ensuring that as an end user you shouldn't have to care whether your application is a Flatpak or an RPM.

by uraeus at June 21, 2016 06:38 PM

June 20, 2016

Guillaume Desmottes: GStreamer leaks tracer

Here at Collabora we are pretty interested in improving QA tools in GStreamer. Thibault, for example, is doing a great job on gst-validate, ensuring that a lot of code paths are regularly tested using real-life scenarios. Last year I added Valgrind support to gst-validate, allowing us to automatically detect memory leaks in test scenarios. My goal was to integrate this into GStreamer's automatic QA to prevent memory leak regressions. While this can sometimes be a good approach to tracking leaks, it has a few downsides:

  • Valgrind can be very CPU- and/or memory-consuming, which can be a problem with longer scenarios or on limited hardware such as embedded devices.
  • As a result, running the full test suite with Valgrind can take ages.
  • Valgrind checks for any potential memory leak, which can lead to a lot of false positives, or to leaks in low-level system libraries over which we have little control. We usually work around this problem using suppression files, but they are generally very fragile and depend a lot on the system/distribution used for testing.

I tried to solve these issues with a new approach based on GstTracer. Tracers are a new mechanism, introduced in GStreamer 1.8, allowing tools to hook into GStreamer internals and collect data. So I started by adding tracer hooks for when GstObject and GstMiniObject instances are created and destroyed. Then I implemented a new tracer tracking the lifetime of (mini)objects and listing those which are still alive when the application exits. This worked pretty well, but I needed a way to discard objects which are intentionally leaked (false positives). To do so I introduced a new (mini)object flag allowing us to mark such objects.
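
As an illustration, marking an object you leak on purpose looks roughly like this; a sketch using the GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED flag that landed together with the tracer (there is a GstObject counterpart as well):

  #include <gst/gst.h>

  int main (int argc, char **argv)
  {
    gst_init (&argc, &argv);

    /* Caps we keep alive for the whole process lifetime: flag them so
     * the leaks tracer does not report them as leaked. */
    GstCaps *caps = gst_caps_new_empty_simple ("video/x-raw");
    GST_MINI_OBJECT_FLAG_SET (caps, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED);

    /* ... run the application; caps deliberately stays alive until exit ... */
    return 0;
  }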

I'm pretty happy with the result: while proof-testing this tool I found and fixed dozens of leaks in GStreamer (core, plugins and tests). Some of those fixes have already reached the 1.8.2 release. It's also very easy to use and doesn't require any external tool, unlike Valgrind (which can be tricky to integrate on some platforms).

To use it you just have to load the leaks tracer with your application and enable tracer logs:

GST_TRACERS="leaks" GST_DEBUG="GST_TRACER:7" gst-launch-1.0 videotestsrc num-buffers=10 ! fakesink

You can also filter out the types of GstObject or GstMiniObject tracked to reduce memory consumption:

GST_TRACERS="leaks(GstEvent,GstMessage)" GST_DEBUG="GST_TRACER:7" gst-launch-1.0 videotestsrc num-buffers=10 ! fakesink

This tracer has recently been merged into GStreamer core and will be part of the 1.9.1 release.

As future enhancements, I have implemented live tracking and checkpointing support using signals, like I already did in gobject-list a while ago. I'd also like to be able to display the creation stack trace of leaked objects, to easily spot the leaked instances. Finally, I opened a bug to discuss the integration of the tracer with the QA system.

by Guillaume Desmottes at June 20, 2016 10:03 AM