June 12, 2019

GStreamer: GStreamer Conference 2019 announced to take place in Lyon, France

(GStreamer)

The GStreamer project is happy to announce that this year's GStreamer Conference will take place on Thursday-Friday 31 October - 1 November 2019 in Lyon, France.

You can find more details about the conference on the GStreamer Conference 2019 web site.

A call for papers will be sent out in due course. Registration will open at a later time. We will announce those and any further updates on the gstreamer-announce mailing list, the website, and on Twitter.

Talk slots will be available in varying durations from 20 minutes up to 45 minutes. Whatever you're doing or planning to do with GStreamer, we'd like to hear from you!

We also plan to have sessions with short lightning talks / demos / showcase talks for those who just want to show what they've been working on or do a mini-talk instead of a full-length talk. Lightning talk slots will be allocated on a first-come-first-serve basis, so make sure to reserve your slot if you plan on giving a lightning talk.

There will also be a social event again on Thursday evening.

There are also plans to have a hackfest the weekend right after the conference on 2-3 November 2019.

We hope to see you in Lyon, and please spread the word!

June 12, 2019 02:00 PM

June 03, 2019

Andy Wingo: pictie, my c++-to-webassembly workbench

(Andy Wingo)

Hello, interwebs! Today I'd like to share a little skunkworks project with y'all: Pictie, a workbench for WebAssembly C++ integration on the web.

[Interactive pictie demo: colors on a canvas, plus a prompt that accepts JavaScript expressions. See the pictie web page for more information.]

wtf just happened????!?

So! If everything went well, above you have some colors and a prompt that accepts JavaScript expressions to evaluate. If the result of evaluating a JS expression is a painter, we paint it onto a canvas.

But allow me to back up a bit. These days everyone is talking about WebAssembly, and I think with good reason: just as many of the world's programs run on JavaScript today, tomorrow many of them will also be written in languages compiled to WebAssembly. JavaScript isn't going anywhere, of course; it's around for the long term. It's the "also" aspect of WebAssembly that's interesting, that it appears to be a computing substrate that is compatible with JS and which can extend the range of the kinds of programs that can be written for the web.

And yet, it's early days. What are programs of the future going to look like? What elements of the web platform will be needed when we have systems composed of WebAssembly components combined with JavaScript components, combined with the browser? Is it all going to work? Are there missing pieces? What's the status of the toolchain? What's the developer experience? What's the user experience?

When you look at the current set of applications targeting WebAssembly in the browser, mostly it's games. While compelling, games don't provide a whole lot of insight into the shape of the future web platform, inasmuch as there doesn't have to be much JavaScript interaction when you have an already-working C++ game compiled to WebAssembly. (Indeed, people are actively working on removing the incidental interactions with JS that are currently necessary, such as bouncing through JS in order to call WebGL, so that WebAssembly can call platform facilities (WebGL, etc.) directly. But I digress!)

For WebAssembly to really succeed in the browser, there should also be incremental stories -- what does it look like when you start to add WebAssembly modules to a system that is currently written mostly in JavaScript?

To find out the answers to these questions and to evaluate potential platform modifications, I needed a small, standalone test case. So... I wrote one? It seemed like a good idea at the time.

pictie is a test bed

Pictie is a simple, standalone C++ graphics package implementing an algebra of painters. It was created not to be a great graphics package but rather to be a test-bed for compiling C++ libraries to WebAssembly. You can read more about it on its github page.

Structurally, pictie is a modern C++ library with a functional-style interface, smart pointers, reference types, lambdas, and all the rest. We use emscripten to compile it to WebAssembly; you can see more information on how that's done in the repository, or check the README.

Pictie is inspired by Peter Henderson's "Functional Geometry" (1982, 2002). "Functional Geometry" inspired the Picture language from the well-known Structure and Interpretation of Computer Programs computer science textbook.

prototype in action

So far it's been surprising how much stuff just works. There's still lots to do, but just getting a C++ library on the web is pretty easy! I advise you to take a look to see the details.

If you are thinking of dipping your toe into the WebAssembly water, maybe take a look also at Pictie when you're doing your back-of-the-envelope calculations. You can use it or a prototype like it to determine the effects of different compilation options on compile time, load time, throughput, and network traffic. You can check if the different binding strategies are appropriate for your C++ idioms; Pictie currently uses embind (source), but I would like to compare to WebIDL as well. You might also use it if you're considering what shape your C++ library should have to have a minimal overhead in a WebAssembly context.

I use Pictie as a test-bed when working on the web platform: for the weakref proposal, which adds finalization and leak detection, and when working on the binding layers around Emscripten. Eventually I'll be able to use it in other contexts as well, with the WebIDL bindings proposal, typed objects, and GC.

prototype the web forward

As the browser and adjacent environments have come to dominate programming in practice, we have lost a bit of the delightful variety from computing. JS is a great language, but it shouldn't be the only medium for programs. WebAssembly is part of this future world, waiting in potentia, where applications for the web can be written in any of a number of languages. But this future world will only arrive if it "works" -- if all of the various pieces, from standards to browsers to toolchains to virtual machines, fit together in some kind of sensible way. Now is the early phase of annealing, when the platform as a whole is actively searching for its new low-entropy state. We're going to need a lot of prototypes to get from here to there. In that spirit, may your prototypes be numerous and soon replaced. Happy annealing!

by Andy Wingo at June 03, 2019 10:10 AM

May 31, 2019

GStreamer: GStreamer 1.14.5 stable bug fix release

(GStreamer)

The GStreamer team is pleased to announce another bug fix release in the old stable 1.14 release series of your favourite cross-platform multimedia framework.

This release only contains bugfixes and it should be safe to update from 1.14.x.

The 1.14 series has now been superseded by the new stable 1.16 series, and we recommend you upgrade at your earliest convenience.

See /releases/1.14/ for the details.

Binaries for Android, iOS, Mac OS X and Windows will be made available soon.

Download tarballs directly here: gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-sharp, gstreamer-vaapi, or gst-omx.

May 31, 2019 11:00 PM

May 24, 2019

Andy Wingo: lightening run-time code generation

(Andy Wingo)

The upcoming Guile 3 release will have just-in-time native code generation. Finally, amirite? There's lots that I'd like to share about that and I need to start somewhere, so this article is about one piece of it: Lightening, a library to generate machine code.

on lightning

Lightening is a fork of GNU Lightning, adapted to suit the needs of Guile. In fact at first we chose to use GNU Lightning directly, "vendored" into the Guile source repository via the git subtree mechanism. (I see that in the meantime, git gained a kind of a subtree command; one day I will have to figure out what it's for.)

GNU Lightning has lots of things going for it. It has support for many architectures, even things like Itanium that I don't really care about but which a couple Guile users use. It abstracts the differences between e.g. x86 and ARMv7 behind a common API, so that in Guile I don't need to duplicate the JIT for each back-end. Such an abstraction can have a slight performance penalty, because maybe it missed the opportunity to generate optimal code, but this is acceptable to me: I was more concerned about the maintenance burden, and GNU Lightning seemed to solve that nicely.

GNU Lightning also has fantastic documentation. It's written in C and not C++, which is the right thing for Guile at this time, and it's also released under the LGPL, which is Guile's license. As it's a GNU project there's a good chance that GNU Guile's needs might be taken into account if any changes need be made.

I mentally associated the project with Paolo Bonzini, whom I knew to be a good no-nonsense hacker, as he used Lightning for a Smalltalk implementation; I knew also that Matthew Flatt used Lightning in Racket. Then I looked in the source code to see architecture support and was pleasantly surprised to see MIPS, POWER, and so on, so I went with GNU Lightning for Guile in our 2.9.1 release last October.

on lightening the lightning

When I chose GNU Lightning, I had in mind that it was a very simple library to cheaply write machine code into buffers. (Incidentally, if you have never worked with this stuff, I remember a time when I was pleasantly surprised to realize that an assembler could be a library and not just a program that processes text. A CPU interprets machine code. Machine code is just bytes, and you can just write C (or Scheme, or whatever) functions that write bytes into buffers, and pass those buffers off to the CPU. Now you know!)
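(And if you want to see that in action without writing C, here is a minimal Python sketch of the same trick. It assumes an x86-64 machine running Linux; the bytes encode mov eax, 42 followed by ret, and strict W^X systems may refuse the writable-and-executable mapping.)

import ctypes
import mmap

code = bytes([0xB8, 0x2A, 0x00, 0x00, 0x00,  # mov eax, 42
              0xC3])                         # ret

# Ask the OS for one page we can both write to and execute from.
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(code)

# Treat the buffer's address as a C function returning int, then call it.
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
fn = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
print(fn())  # prints 42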

Anyway indeed GNU Lightning 1.4 or so was that very simple library that I had in my head. I needed simple because I would need to debug any problems that came up, and I didn't want to add more complexity to the C side of Guile -- eventually I should be migrating this code over to Scheme anyway. And, of course, simple can mean fast, and I needed fast code generation.

However, GNU Lightning has a new release series, the 2.x series. This series is, in a way, a rewrite of the old version. On the plus side, this new series adds all of the weird architectures that I was pleasantly surprised to see. The old 1.4 didn't even have much x86-64 support, much less AArch64.

This new GNU Lightning 2.x series fundamentally changes the way the library works: instead of having a jit_ldr_f function that directly emits code to load a float from memory into a floating-point register, the jit_ldr_f function now creates a node in a graph. Before code is emitted, that graph is optimized, some register allocation happens around call sites and for temporary values, dead code is elided, and so on, then the graph is traversed and code emitted.

Unfortunately this wasn't really what I was looking for. The optimizations were a bit opaque to me and I just wanted something simple. Building the graph took more time than just emitting bytes into a buffer, and it takes more memory as well. When I found bugs, I couldn't tell whether they were related to my usage or in the library itself.

In the end, the node structure wasn't paying its way for me. But I couldn't just go back to the 1.4 series that I remembered -- it didn't have the architecture support that I needed. Faced with the choice between changing GNU Lightning 2.x in ways that went counter to its upstream direction, switching libraries, or refactoring GNU Lightning into something that I needed, I chose the last option.

in which our protagonist cannot help himself

Friends, I regret to admit: I named the new thing "Lightening". True, it is a lightened Lightning, yes, but I am aware that it's horribly confusing. Pronounced almost the same, visually almost identical -- I am a bad person. Oh well!!

I ported some of the existing GNU Lightning backends over to Lightening: ia32, x86-64, ARMv7, and AArch64. I deleted the backends for Itanium, HPPA, Alpha, and SPARC; they have no Debian ports and there is no situation in which I can afford to do QA on them. I would gladly accept contributions for PPC64, MIPS, RISC-V, and maybe S/390. At this point I reckon it takes around 20 hours to port an additional backend from GNU Lightning to Lightening.

Incidentally, if you need a code generation library, consider your choices wisely. It is likely that Lightening is not right for you. If you can afford platform-specific code and you need C, Lua's DynASM is probably the right thing for you. If you are in C++, copy the assemblers from a JavaScript engine -- C++ offers much more type safety, capabilities for optimization, and ergonomics.

But if you can only afford one emitter of JIT code for all architectures, you need simple C, you don't need register allocation, you want a simple library to just include in your source code, and you are good with the LGPL, then Lightening could be a thing for you. Check the gitlab page for info on how to test Lightening and how to include it into your project.

giving it a spin

Yesterday's Guile 2.9.2 release includes Lightening, so you can give it a spin. The switch to Lightening allowed us to lower our JIT optimization threshold by a factor of 50, letting us generate fast code sooner. If you try it out, let #guile on freenode know how it went. In any case, happy hacking!

by Andy Wingo at May 24, 2019 08:44 AM

May 23, 2019

Andy Wingo: bigint shipping in firefox!

(Andy Wingo)

I am delighted to share with folks the results of a project I have been helping out on for the last few months: implementation of "BigInt" in Firefox, which is finally shipping in Firefox 68 (beta).

what's a bigint?

BigInts are a new kind of JavaScript primitive value, like numbers or strings. A BigInt is a true integer: it can take on the value of any finite integer (subject to some arbitrarily large implementation-defined limits, such as the amount of memory in your machine). This contrasts with JavaScript number values, which have the well-known property of only being able to precisely represent integers between -2^53 and 2^53.
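That limit is easy to see in any language whose floats are IEEE-754 doubles, as JavaScript numbers are; a quick Python illustration:

assert float(2**53) == float(2**53 + 1)  # doubles cannot tell these apart
assert 2**53 != 2**53 + 1                # exact integers, like BigInts, can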

BigInts are written like "normal" integers, but with an n suffix:

var a = 1n;
var b = a + 42n;
b << 64n
// result: 793209995169510719488n

With the bigint proposal, the usual mathematical operations (+, -, *, /, %, <<, >>, **, and the comparison operators) are extended to operate on bigint values. As a new kind of primitive value, bigint values have their own typeof:

typeof 1n
// result: 'bigint'

Besides allowing for more kinds of math to be easily and efficiently expressed, BigInt also allows for better interoperability with systems that use 64-bit numbers, such as "inodes" in file systems, WebAssembly i64 values, high-precision timers, and so on.

You can read more about the BigInt feature over on MDN, as usual. You might also like this short article on BigInt basics that V8 engineer Mathias Bynens wrote when Chrome shipped support for BigInt last year. There is an accompanying language implementation article as well, for those of y'all that enjoy the nitties and the gritties.

can i ship it?

To try out BigInt in Firefox, simply download a copy of Firefox Beta. This version of Firefox will be fully released to the public in a few weeks, on July 9th. If you're reading this in the future, I'm talking about Firefox 68.

BigInt is also shipping already in V8 and Chrome, and my colleague Caio Lima has a project in progress to implement it in JavaScriptCore / WebKit / Safari. Depending on your target audience, BigInt might be deployable already!

thanks

I must mention that my role in the BigInt work was relatively small; my Igalia colleague Robin Templeton did the bulk of the BigInt implementation work in Firefox, so large ups to them. Hearty thanks also to Mozilla's Jan de Mooij and Jeff Walden for their patient and detailed code reviews.

Thanks as well to the V8 engineers for their open source implementation of BigInt fundamental algorithms, as we used many of them in Firefox.

Finally, I need to make one big thank-you, and I hope that you will join me in expressing it. The road to ship anything in a web browser is long; besides the "simple matter of programming" that it is to implement a feature, you need a specification with buy-in from implementors and web standards people, you need a good working relationship with a browser vendor, you need willing technical reviewers, you need to follow up on the inevitable security bugs that any browser change causes, and all of this takes time. It's all predicated on having the backing of an organization that's foresighted enough to invest in this kind of long-term, high-reward platform engineering.

In that regard I think all people that work on the web platform should send a big shout-out to Tech at Bloomberg for making BigInt possible by underwriting all of Igalia's work in this area. Thank you, Bloomberg, and happy hacking!

by Andy Wingo at May 23, 2019 12:13 PM

May 17, 2019

Guillaume Desmottes: Rust+GNOME hackfest in Berlin

Last week I had the chance to attend for the first time the GNOME+Rust Hackfest in Berlin. My goal was to finish the GstVideoDecoder bindings, keep learning about Rust and meet people from the GNOME/Rust community. I had some experience with gstreamer-rs as a user but never actually wrote any bindings myself. To make it even more challenging, the underlying C API is unfortunately unsafe, so we had to do some smart tricks to make it safe to use in Rust.

I'm very happy with the work I managed to do during these 4 days. I was able to complete the bindings as well as my CDG decoder using them. The bindings are waiting for final review and the decoder should hopefully be merged soon as well. More importantly I learned a lot about the bindings infrastructure but also about more advanced Rust features and nice API design patterns. Turned out it's much easier to write Rust when you are sitting next to the gtk-rs maintainers. ;)

I'm very grateful to Guillaume and Sebastian for all the time they spent answering my questions and helping me fight the Rust compiler. Thanks also to Alban for letting me stay at his place, to Zeeshan and Kinvolk for organizing and hosting the hackfest and to Collabora for sponsoring my trip.

by Guillaume Desmottes at May 17, 2019 07:53 AM

May 10, 2019

Nirbheek Chauhan: GStreamer's Meson and Visual Studio Journey


Almost 3 years ago, I wrote about how we at Centricular had been working on an experimental port of GStreamer from Autotools to the Meson build system for faster builds on all platforms, and to allow building with Visual Studio on Windows.

At the time, the response was mixed, and for good reason—Meson was a very new build system, and it needed to work well on all the targets that GStreamer supports, which was all major operating systems. Meson did aim to support all of those, but a lot of work was required to bring platform support up to speed with the requirements of a non-trivial project like GStreamer.

The Status: Today!

After years of work across several components (Meson, Ninja, Cerbero, etc), GStreamer is being built with Meson on all platforms! Autotools is scheduled to be removed in the next release cycle (1.18).

The first stable release with this work was 1.16, which was released yesterday. It has already led to a number of new capabilities:
  • GStreamer can be built with Visual Studio on Windows inside Cerbero, which means we now ship official binaries for GStreamer built with the MSVC toolchain.
  • From-scratch Cerbero builds are much faster on all platforms, which has aided the implementation of CI-gated merge requests on GitLab.
  • The developer workflow has been streamlined and is the same on all platforms (Linux, Windows, macOS) using the gst-build meta-project. The meta-project can also be used for cross-compilation (Android, iOS, Windows, Linux).
  • The Windows developer workflow no longer requires installing several packages by hand or setting up an MSYS environment. All you need is Git, Python 3, Visual Studio, and 15 min for the initial build.
  • Profiling on Windows is now possible, and I've personally used it to profile and fix numerous Windows-specific performance issues.
  • Visual Studio projects that use GStreamer now have debug symbols since we're no longer mixing MinGW and MSVC binaries. This also enables usable crash reports and symbol servers.
  • We can ship plugins that can only be built with MSVC on Windows, such as the Intel MSDK hardware codec plugin, Directshow plugins, and also easily enable new Windows 10 features in existing plugins such as WASAPI.
  • iOS bitcode builds are more correct, since Meson is smart enough to know how to disable incompatible compiler options on specific build targets.
  • The iOS framework now also ships shared libraries in addition to the static libraries.
Overall, it's been a huge success and we're really happy with how things have turned out!

You can download the prebuilt MSVC binaries, reproduce them yourself, or quickly bootstrap a GStreamer development environment. The choice is yours!

Further Musings

While working on this over the years, what's really stood out to me was how this sort of gargantuan task was made possible through the power of community-driven FOSS and community-focused consultancy.

Our build system migration quest has been long with valleys full of yaks with thick coats of fur, and it would have been prohibitively expensive for a single entity to sponsor it all. Thanks to the inherently collaborative nature of community FOSS projects, people from various backgrounds and across companies could come together and make this possible.

There are many other examples of this, but seeing the improbable happen from the inside is something special.

Special shout-outs to ZEISS, Barco, Pexip, and Cablecast.tv for sponsoring various parts of this work!

Their contributions also made it easier for us to spend thousands more hours of non-sponsored time to fill in the gaps so that all the sponsored work done could be upstreamed in a form that's useful for everyone who uses GStreamer. This sort of thing is, in my opinion, an essential characteristic of being a community-focused consultancy, and we make sure that it always has high priority.

by Nirbheek (noreply@blogger.com) at May 10, 2019 09:48 PM

May 04, 2019

Sebastian Pölsterl: Evaluating Survival Models

The most frequently used evaluation metric of survival models is the concordance index (c index, c statistic). It is a measure of rank correlation between predicted risk scores $\hat{f}$ and observed time points $y$ that is closely related to Kendall’s τ. It is defined as the ratio of correctly ordered (concordant) pairs to comparable pairs. Two samples $i$ and $j$ are comparable if the sample with lower observed time $y$ experienced an event, i.e., if $y_j > y_i$ and $\delta_i = 1$, where $\delta_i$ is a binary event indicator. A comparable pair $(i, j)$ is concordant if the estimated risk $\hat{f}$ by a survival model is higher for subjects with lower survival time, i.e., $\hat{f}_i >\hat{f}_j \land y_j > y_i$, otherwise the pair is discordant. Harrell's estimator of the c index is implemented in concordance_index_censored.
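The definition above translates almost directly into code. A naive O(n²) sketch, ignoring tied risk scores and tied times, which concordance_index_censored handles properly:

def harrell_c(event, time, risk):
    # event: binary event indicator (delta), time: observed time (y),
    # risk: predicted risk score (f-hat), one entry per sample.
    concordant = comparable = 0
    for i in range(len(time)):
        if not event[i]:     # (i, j) is comparable only if the sample
            continue         # with the lower observed time had an event
        for j in range(len(time)):
            if time[j] > time[i]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
    return concordant / comparable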

While Harrell's concordance index is easy to interpret and compute, it has some shortcomings:

  1. it has been shown that it is too optimistic with increasing amount of censoring [1],
  2. it is not a useful measure of performance if a specific time range is of primary interest (e.g. predicting death within 2 years).

Since version 0.8, scikit-survival supports an alternative estimator of the concordance index from right-censored survival data, implemented in concordance_index_ipcw, that addresses the first issue.

The second point can be addressed by extending the well-known receiver operating characteristic curve (ROC curve) to possibly censored survival times. Given a time point $t$, we can estimate how well a predictive model can distinguish subjects who will experience an event by time $t$ (sensitivity) from those who will not (specificity). The function cumulative_dynamic_auc implements an estimator of the cumulative/dynamic area under the ROC for a given list of time points.

The first part of this post will illustrate the first issue with simulated survival data, while the second part will focus on the time-dependent area under the ROC applied to data from a real study.

To see the full source code for producing the figures in this post, please see this notebook.

Bias of Harrell's Concordance Index

Harrell's concordance index is known to be biased upwards if the amount of censoring in the test data is high [1]. Uno et al. proposed an alternative estimator of the concordance index that behaves better in such situations. In this section, we are going to apply concordance_index_censored and concordance_index_ipcw to synthetic survival data and compare their results.

Simulation Study

We generate a synthetic biomarker by sampling from a standard normal distribution. For a given hazard ratio, we compute the associated (actual) survival time by drawing from an exponential distribution. The censoring times are generated from an independent uniform distribution $\textrm{Uniform}(0,\gamma)$, where we choose $\gamma$ to produce different amounts of censoring.

Since Uno's estimator is based on inverse probability of censoring weighting, we need to estimate the probability of being censored at a given time point. This probability needs to be non-zero for all observed time points. Therefore, we restrict the test data to all samples with observed time lower than the maximum event time $\tau$. Usually, one would use the tau argument of concordance_index_ipcw for this, but we apply the selection beforehand so we can pass identical inputs to concordance_index_censored and concordance_index_ipcw. The estimates of the concordance index are therefore restricted to the interval $[0, \tau]$.

Let us assume a moderate hazard ratio of 2 and generate a small synthetic dataset of 100 samples from which we estimate the concordance index. We repeat this experiment 200 times and plot mean and standard deviation of the difference between the actual (in the absence of censoring) and estimated concordance index.
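A sketch of this data-generating process; the exact parametrization used for the figures lives in the accompanying notebook:

import numpy as np

rng = np.random.RandomState(0)

def simulate(n=100, hazard_ratio=2.0, gamma=6.0):
    x = rng.standard_normal(n)                    # synthetic biomarker
    rate = np.exp(np.log(hazard_ratio) * x)       # proportional hazards
    time_event = rng.exponential(1.0 / rate)      # actual survival time
    time_censor = rng.uniform(0.0, gamma, n)      # independent censoring
    y_time = np.minimum(time_event, time_censor)  # observed time
    y_event = time_event <= time_censor           # event indicator
    return x, y_event, y_time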

Since the hazard ratio remains constant and only the amount of censoring changes, we would want the difference between the actual and estimated c to remain approximately constant across simulations.

We can observe that estimates are on average below the actual value, except for the highest amount of censoring, where Harrell's c begins overestimating the performance (on average).

With such a small dataset, the variance of differences is quite big, so let us increase the amount of data to 1000 and repeat the simulation.

Now we can observe that Harrell's c begins to overestimate performance starting with approximately 49% censoring while Uno's c is still underestimating the performance, but is on average very close to the actual performance for large amounts of censoring.

For the final experiment, we double the size of the dataset to 2000 and repeat the analysis.

The trend we observed in the previous simulation is now even more pronounced. Harrell's c is becoming more and more overconfident in the performance of the synthetic marker with increasing amount of censoring, while Uno's c remains stable.

In summary, while the difference between concordance_index_ipcw and concordance_index_censored is negligible for small amounts of censoring, when analyzing survival data with moderate to high amounts of censoring, you might want to consider estimating the performance using concordance_index_ipcw instead of concordance_index_censored.
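In code, switching between the two estimators is a one-line change. A sketch, assuming structured survival arrays y_train and y_test (with hypothetical field names "event" and "time"), predicted risk scores risk, and a truncation time tau chosen as described above:

from sksurv.metrics import concordance_index_censored, concordance_index_ipcw

harrell = concordance_index_censored(
    y_test["event"], y_test["time"], risk)[0]  # Harrell's c
uno = concordance_index_ipcw(
    y_train, y_test, risk, tau=tau)[0]         # Uno's IPCW-based c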

Time-dependent Area under the ROC

The area under the receiver operating characteristics curve (ROC curve) is a popular performance measure for binary classification task. In the medical domain, it is often used to determine how well estimated risk scores can separate diseased patients (cases) from healthy patients (controls). Given a predicted risk score $\hat{f}$, the ROC curve compares the false positive rate (1 - specificity) against the true positive rate (sensitivity) for each possible value of $\hat{f}$.

When extending the ROC curve to continuous outcomes, in particular survival time, a patient’s disease status is typically not fixed and changes over time: at enrollment a subject is usually healthy, but may be diseased at some later time point. Consequently, sensitivity and specificity become time-dependent measures. Here, we consider cumulative cases and dynamic controls at a given time point $t$, which gives rise to the time-dependent cumulative/dynamic ROC at time $t$. Cumulative cases are all individuals that experienced an event prior to or at time $t$ ($t_i \leq t$), whereas dynamic controls are those with $t_i>t$. By computing the area under the cumulative/dynamic ROC at time $t$, we can determine how well a model can distinguish subjects who fail by a given time ($t_i \leq t$) from subjects who fail after this time ($t_i>t$). Hence, it is most relevant if one wants to predict the occurrence of an event in a period up to time $t$ rather than at a specific time point $t$.
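Stripped of censoring weights, the quantity being estimated is straightforward. A naive sketch; the actual estimator additionally applies inverse probability of censoring weights to account for censored observations:

import numpy as np

def naive_cumulative_dynamic_auc(t, time, event, risk):
    # Cumulative cases: subjects with an event at or before time t.
    # Dynamic controls: subjects still event-free after time t.
    cases = (time <= t) & event
    controls = time > t
    # Fraction of case/control pairs ranked correctly by the risk score.
    return np.mean(risk[cases][:, None] > risk[controls][None, :])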

The cumulative_dynamic_auc function implements an estimator of the cumulative/dynamic area under the ROC at a given list of time points. To illustrate its use, we are going to use data from a study that investigated to what extent the serum immunoglobulin free light chain (FLC) assay can be used to predict overall survival. The dataset has 7874 subjects and 9 features; the endpoint is death, which occurred for 2169 subjects (27.5%).

First, we load the data and split it into training and test sets to evaluate how well markers generalize.

import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sksurv.datasets import load_flchain

x, y = load_flchain()

(x_train, x_test,
 y_train, y_test) = train_test_split(x, y, test_size=0.2, random_state=0)

Serum creatinine measurements are missing for some patients, therefore we are just going to impute these values with the mean using scikit-learn's SimpleImputer.

num_columns = ['age', 'creatinine', 'kappa', 'lambda']
 
imputer = SimpleImputer().fit(x_train.loc[:, num_columns])
x_train = imputer.transform(x_train.loc[:, num_columns])
x_test = imputer.transform(x_test.loc[:, num_columns])

Similar to Uno's estimator of the concordance index described above, we need to be a little bit careful when selecting the test data and time points we want to evaluate the ROC at, due to the estimator's dependence on inverse probability of censoring weighting. First, we are going to check whether the observed time of the test data lies within the observed time range of the training data.

y_events = y_train[y_train['death']]
train_min, train_max = y_events["futime"].min(), y_events["futime"].max()
 
y_events = y_test[y_test['death']]
test_min, test_max = y_events["futime"].min(), y_events["futime"].max()
 
assert train_min <= test_min < test_max < train_max, \
    "time range of test data is not within time range of training data."

When choosing the time points to evaluate the ROC at, it is important to remember to choose the last time point such that the probability of being censored after the last time point is non-zero. In the simulation study above, we set the upper bound to the maximum event time; here we use a more conservative approach by setting the upper bound to the 80th percentile of observed time points, because the censoring rate is quite high at 72.5%. Note that this approach would be appropriate for choosing tau of concordance_index_ipcw too.

times = np.percentile(y["futime"], np.linspace(5, 81, 15))
print(times)

[ 470.3        1259.         1998.         2464.82428571 2979.
 3401.         3787.99857143 4051.         4249.         4410.17285714
 4543.         4631.         4695.         4781.         4844.        ]

We begin by considering individual real-valued features as risk scores without actually fitting a survival model. Hence, we obtain an estimate of how well age, creatinine, kappa FLC, and lambda FLC are able to distinguish cases from controls at each time point.
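A sketch of that computation with cumulative_dynamic_auc, using each imputed column of the test data directly as the risk score:

import matplotlib.pyplot as plt
from sksurv.metrics import cumulative_dynamic_auc

for i, col in enumerate(num_columns):
    # Larger feature values are treated as indicating higher risk.
    auc, mean_auc = cumulative_dynamic_auc(y_train, y_test,
                                           x_test[:, i], times)
    plt.plot(times, auc, label="%s (mean AUC = %.3f)" % (col, mean_auc))
plt.axhline(0.5, linestyle="--")  # chance level
plt.xlabel("days from enrollment")
plt.ylabel("time-dependent AUC")
plt.legend()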

The plot shows the estimated area under the time-dependent ROC at each time point and the average across all time points as dashed line.

We can see that age is overall the most discriminative feature, followed by $\kappa$ and $\lambda$ FLC. The fact that age is the strongest predictor of overall survival in the general population is hardly surprising (we have to die at some point after all). More differences become evident when considering time: the discriminative power of FLC decreases at later time points, while that of age increases. The observation for age again follows common sense. In contrast, FLC seems to be a good predictor of death in the near future, but not so much if it occurs decades later.

Next, we will fit an actual survival model to predict the risk of death using data from the Veterans' Administration Lung Cancer Trial. After fitting a Cox proportional hazards model, we want to assess how well the model can distinguish survivors from deceased in weekly intervals, up to 6 months after enrollment.
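A sketch of that experiment; for brevity it scores the training data itself, whereas a proper evaluation would use a held-out split as above:

import numpy as np
from sksurv.datasets import load_veterans_lung_cancer
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import cumulative_dynamic_auc
from sksurv.preprocessing import OneHotEncoder

va_x, va_y = load_veterans_lung_cancer()
va_x = OneHotEncoder().fit_transform(va_x)   # encode categorical columns

cph = CoxPHSurvivalAnalysis().fit(va_x, va_y)

va_times = np.arange(7, 184, 7)              # weekly, up to ~6 months
va_auc, va_mean_auc = cumulative_dynamic_auc(
    va_y, va_y, cph.predict(va_x), va_times)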

The plot shows that the model is doing quite well on average with an AUC of ~0.82 (dashed line). However, there is a clear difference in performance between the first and second half of the time range. Performance increases up to about 100 days from enrollment, but quickly drops thereafter. Thus, we can conclude that the model is less effective in predicting death past 100 days.

Conclusion

I hope this post helped you to understand some of the pitfalls when estimating the performance of markers and models from right-censored survival data. We illustrated that Harrell's estimator of the concordance index is biased when the amount of censoring is high, and that Uno's estimator is more appropriate in this situation. Finally, we demonstrated that the time-dependent area under the ROC is a very useful tool when we want to predict the occurrence of an event in a period up to time $t$ rather than at a specific time point $t$.


by sebp at May 04, 2019 11:12 AM

May 01, 2019

Sebastian Pölsterl: scikit-survival 0.8 released

This release of scikit-survival 0.8 adds some nice enhancements for validating survival models.

Previously, scikit-survival only supported Harrell's concordance index to assess the performance of survival models. While it is easy to interpret and compute, it has some shortcomings:

  1. it has been shown that it is too optimistic with increasing amount of censoring [1],
  2. it is not a useful measure of performance if a specific time point is of primary interest (e.g. predicting 2 year survival).

The first issue is addressed by the new concordance_index_ipcw function, which implements an alternative estimator of the concordance index [1,2].

The second point can be addressed by extending the well-known receiver operating characteristic curve (ROC curve) to possibly censored survival times. Given a time point t, we can estimate how well a predictive model can distinguish subjects who will experience an event by time t (sensitivity) from those who will not (specificity). The newly added function cumulative_dynamic_auc implements an estimator of the cumulative/dynamic area under the ROC for a given list of time points [3].

Both estimators rely on inverse probability of censoring weighting, which means they require access to training data to estimate the censoring distribution from. Therefore, if the amount of censoring is high, some care must be taken in selecting a suitable time range for evaluation.

For a complete list of changes see the release notes.

Download

Pre-built conda packages are available for Linux, OSX and Windows:

conda install -c sebp scikit-survival

Alternatively, scikit-survival can be installed from source via pip:

pip install -U scikit-survival

References

  1. Uno, H., Cai, T., Pencina, M. J., D’Agostino, R. B., & Wei, L. J. (2011). On the C-statistics for evaluating overall adequacy of risk prediction procedures with censored survival data. Statistics in Medicine, 30(10), 1105–1117.
  2. Hung, H., & Chiang, C. T. (2010). Estimation methods for time-dependent AUC models with survival data. Canadian Journal of Statistics, 38(1), 8–26.
  3. Uno, H., Cai, T., Tian, L., & Wei, L. J. (2007). Evaluating prediction rules for t-year survivors with censored regression models. Journal of the American Statistical Association, 102, 527–537.

by sebp at May 01, 2019 03:50 PM

April 24, 2019

Guillaume Desmottes: GStreamer buffer flow analyzer

GStreamer's logging system is an incredibly powerful ally when debugging, but it can sometimes be a bit daunting to dig through the massive amount of generated logs. I often find myself writing small scripts processing gst logs when debugging. Their goal is generally to automatically extract some specific information or metrics from the generated logs. Such scripts are usually quickly written and quickly disposed of once I'm done with my debugging, but I've been wondering how I could make them easier to write and to re-use.
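For flavour, the kind of throwaway script I mean might look like this; a sketch whose regex is a simplified guess at GStreamer's clock-time format rather than a robust parser:

import re
import sys

# GStreamer prints clock times as H:MM:SS.NNNNNNNNN (nanosecond precision).
TIMESTAMP = re.compile(r"\b\d+:\d{2}:\d{2}\.\d{9}\b")

with open(sys.argv[1]) as log:
    for line in log:
        for ts in TIMESTAMP.findall(line):
            print(ts)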

gst-log-parser is an attempt to solve these two problems by providing a library parsing GStreamer logs and enabling users to easily build such tools. It's written in Rust and is shipped with a few tools that I wrote to track actual bugs in GStreamer elements and applications.

One of those tools is a buffer flow analyzer which can be used to extract various kinds of information about the buffers exchanged through your pipeline. It relies on logs generated by the upstream stats tracer, so no modification to GStreamer core or to plugins is required.

The first step is to generate the logs. This is easily done by defining these environment variables:

GST_DEBUG="GST_TRACER:7" GST_DEBUG_FILE=gst.log GST_TRACERS=stats

We can then use flow for example to detect decreasing pts or dts:

cargo run --release --bin flow gst.log check-decreasing-pts

Decreasing pts tsdemux0:audio_0_0042 00:00:02.550852023 < 00:00:02.555653845

Or to detect gaps of at least 100ms in the buffers flow:

cargo run --release --bin flow gst.log gap 100

gap from udpsrc0:src : 00:00:00.100142318 since previous buffer (received: 00:00:02.924532910 previous: 00:00:02.824390592)

It can also be used to draw graphs of the pts/dts produced by each src pad over time:

cargo run --release --bin flow gst.log plot-pts

These are just a few examples of the kind of information we can extract from stats logs. I'll likely add more tools in the future and I'm happy to hear suggestions about other features that would make your life easier when debugging GStreamer.

by Guillaume Desmottes at April 24, 2019 10:07 AM

April 22, 2019

Thibault Saunier: GStreamer Editing Services OpenTimelineIO support

(Thibault Saunier)

OpenTimelineIO is an Open Source API and interchange format for editorial timeline information; it basically allows some form of interoperability between different post-production video editing tools. It is being developed by Pixar, and several other studios are contributing to the project, allowing it to evolve quickly.

We, at Igalia, recently landed support for the GStreamer Editing Services (GES) serialization format in OpenTimelineIO, making it possible to convert GES timelines to any format supported by the library. This is extremely useful to integrate GES into existing post-production workflows, as it allows projects in any format supported by OpenTimelineIO to be used in the GStreamer Editing Services and vice versa.

On top of that we are building a GESFormatter that allows us to transparently handle any file format supported by OpenTimelineIO. In practice it will be possible to use cuts produced by other video editing tools in any project using GES, for instance Pitivi.

At Igalia we are aiming at making GStreamer ready to be used in existing video post-production pipelines, and this work is one step in that direction. We are working on additional features in GES to fill the gaps toward that goal; for instance, we are now implementing nested timeline support and framerate-based timestamps in GES. Once we implement them, those features will enhance compatibility of video editing projects created from other NLEs through OpenTimelineIO. Stay tuned for more information!

by thiblahute at April 22, 2019 03:21 PM

April 19, 2019

GStreamer: GStreamer 1.16.0 new major stable release

(GStreamer)

The GStreamer team is excited to announce a new major feature release of your favourite cross-platform multimedia framework!

The 1.16 release series adds new features on top of the previous 1.14 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Highlights:

  • GStreamer WebRTC stack gained support for data channels for peer-to-peer communication based on SCTP, BUNDLE support, as well as support for multiple TURN servers.
  • AV1 video codec support for Matroska and QuickTime/MP4 containers and more configuration options and supported input formats for the AOMedia AV1 encoder
  • Support for Closed Captions and other Ancillary Data in video
  • Support for planar (non-interleaved) raw audio
  • GstVideoAggregator, compositor and OpenGL mixer elements are now in -base
  • New alternate fields interlace mode where each buffer carries a single field
  • WebM and Matroska ContentEncryption support in the Matroska demuxer
  • New WebKit WPE-based web browser source element
  • Video4Linux: HEVC encoding and decoding, JPEG encoding, and improved dmabuf import/export
  • Hardware-accelerated Nvidia video decoder gained support for VP8/VP9 decoding, whilst the encoder gained support for H.265/HEVC encoding.
  • Many improvements to the Intel Media SDK based hardware-accelerated video decoder and encoder plugin (msdk): dmabuf import/export for zero-copy integration with other components; VP9 decoding; 10-bit HEVC encoding; video post-processing (vpp) support including deinterlacing; and the video decoder now handles dynamic resolution changes.
  • The ASS/SSA subtitle overlay renderer can now handle multiple subtitles that overlap in time and will show them on screen simultaneously
  • The Meson build is now feature-complete (with the exception of plugin docs) and it is now the recommended build system on all platforms. The Autotools build is scheduled to be removed in the next cycle.
  • The GStreamer Rust bindings and Rust plugins module are now officially part of upstream GStreamer.
  • The GStreamer Editing Services gained a gesdemux element that allows directly playing back serialized edit lists with playbin or (uri)decodebin
  • Many performance improvements

Full release notes can be found here.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

You can download release tarballs directly here: gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, gstreamer-sharp, or gst-omx.

April 19, 2019 11:00 AM

April 15, 2019

GStreamer: Orc 0.4.29 bug-fix release

(GStreamer)

The GStreamer team is pleased to announce another maintenance bug-fix release of liborc, the Optimized Inner Loop Runtime Compiler. Main changes since the previous release:

  • PowerPC: Support ELFv2 ABI (A. Wilcox) and ppc64le
  • Mips backend: only enable if the DSPr2 ASE is present
  • Windows and MSVC build fixes
  • orccpu-arm: Allow 'cpuinfo' fallback on non-android
  • pkg-config file for orc-test library
  • orcc: add --decorator command line argument to add function decorators in header files
  • meson: Make orcc detectable from other subprojects
  • meson: add options to disable tests, docs, benchmarks, examples, tools, etc.
  • meson: misc. other fixes

Direct tarball download: orc-0.4.29.

April 15, 2019 12:00 PM

April 11, 2019

GStreamer: GStreamer 1.15.90 pre-release (1.16.0 release candidate 1)

(GStreamer)

The GStreamer team is pleased to announce the first release candidate for the upcoming stable 1.16.0 release.

Check out the draft release notes highlighting all the new features, bugfixes, performance optimizations and other important changes.

Packagers: please note that quite a few plugins and libraries have moved between modules since 1.14, so please take extra care and make sure inter-module version dependencies are such that users can only upgrade all modules in one go, instead of seeing a mix of 1.15 and 1.14 on their system.

Binaries for Android, iOS, Mac OS X and Windows will be provided shortly.

Release tarballs can be downloaded directly here:

Known errata: possible crash on 32-bit systems when creating seek events

April 11, 2019 08:00 AM

April 08, 2019

Phil Normand: Introducing WPEQt, a WPE API for Qt5

(Phil Normand)

WPEQt provides a QML plugin implementing an API very similar to the QWebView API. This blog post explains the rationale behind this new project aimed for QtWebKit users.

Qt5 already provides multiple WebView APIs, one based on QtWebKit (deprecated) and one based on QWebEngine (aka Chromium). WPEQt aims to provide …

by Philippe Normand at April 08, 2019 10:20 AM

April 06, 2019

Bastien Nocera: Developer tool for i18n: “Pseudolocale”

(Bastien Nocera)

While browsing for some internationalisation/localisation features, I found an interesting piece of functionality in Android's developer documentation. I'll quote it here:
A pseudolocale is a locale that is designed to simulate characteristics of languages that cause UI, layout, and other translation-related problems when an app is translated.
I've now implemented this for applications and libraries that use gettext, as an LD_PRELOAD library, available from this repository.


The current implementation can highlight a number of potential problems (paraphrasing the Android documentation again):
- String concatenation, which displays as one message split across 2 or more brackets.
- Hard-coded strings, which cannot be sent to translation, display as unaccented text in the pseudolocale to make them easy to notice.
- Right-to-left (RTL) problems such as elements not being mirrored.
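The essence of the transform is simple. Here is a sketch of the idea in Python (purely illustrative; the real implementation is an LD_PRELOAD library interposing gettext in C):

ACCENTS = str.maketrans("aeiouAEIOU", "àéîôüÀÉÎÔÜ")

def pseudolocalize(msg):
    # Strings that pass through gettext get accented, so hard-coded strings
    # that bypass translation stand out unaccented. The brackets reveal
    # concatenation: a stitched-together message shows 2+ bracket pairs.
    return "[" + msg.translate(ACCENTS) + "]"

print(pseudolocalize("Open file"))  # => [Ôpén fîlé]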

Our old friend, Office Runner 


Testing brought some unexpected results :)

by Bastien Nocera (noreply@blogger.com) at April 06, 2019 11:41 AM

Nirbheek Chauhan: GStreamer and Meson: A New Hope

Anyone who has written a non-trivial project using Autotools has realized that (and wondered why) it requires you to be aware of 5 different languages. Once you spend enough time with the innards of the system, you begin to realize that it is nothing short of an astonishing feat of engineering. Engineering that belongs in a museum. Not as part of critical infrastructure.

Autotools was created in the 1980s and caters to the needs of an entirely different world of software from what we have at present. Worse yet, it carries over accumulated cruft from the past 40 years — ostensibly for better “cross-platform support” but that “support” is mostly for extinct platforms that five people in the whole world remember.

We've learned how to make it work for most cases that concern FOSS developers on Linux, and it can be made to limp along on other platforms that the majority of people use, but it does not inspire confidence or really anything except frustration. People will not like your project or contribute to it if the build system takes 10x longer to compile on their platform of choice, does not integrate with the preferred IDE, and requires knowledge arcane enough to be indistinguishable from cargo-cult programming.

As a result there have been several (terrible) efforts at replacing it and each has been either incomplete, short-sighted, slow, or just plain ugly. During my time as a Gentoo developer in another life, I came in close contact with and developed a keen hatred for each of these alternative build systems. And so I mutely went back to Autotools and learned that I hated it the least of them all.

Sometime last year, Tim heard about this new build system called ‘Meson’ whose author had created an experimental port of GStreamer that built it in record time.

Intrigued, he tried it out and found that it finished suspiciously quickly. His first instinct was that it was broken and hadn’t actually built everything! Turns out this build system written in Python 3 with Ninja as the backend actually was that fast. About 2.5x faster on Linux and 10x faster on Windows for building the core GStreamer repository.

Upon further investigation, Tim and I found that Meson also has really clean generic cross-compilation support (including iOS and Android), runs natively (and just as quickly) on OS X and Windows, supports GNU, Clang, and MSVC toolchains, and can even (configure and) generate XCode and Visual Studio project files!

But the critical thing that convinced me was that the creator Jussi Pakkanen was genuinely interested in the use-cases of widely-used software such as Qt, GNOME, and GStreamer and had already added support for several tools and idioms that we use — pkg-config, gtk-doc, gobject-introspection, gdbus-codegen, and so on. The project places strong emphasis on both speed and ease of use and is quite friendly to contributions.

Over the past few months, Tim and I at Centricular have been working on creating Meson ports for most of the GStreamer repositories and the fundamental dependencies (libffi, glib, orc) and improving the MSVC toolchain support in Meson.

We are proud to report that you can now build GStreamer on Linux using the GNU toolchain and on Windows with either MinGW or MSVC 2015 using Meson build files that ship with the source (building upon Jussi's initial ports).

Other toolchain/platform combinations haven't been tested yet, but they should work in theory (minus bugs!), and we intend to test and bugfix all the configurations supported by GStreamer (Linux, OS X, Windows, iOS, Android) before proposing it for inclusion as an alternative build system for the GStreamer project.

You can either grab the source yourself and build everything, or use our (with luck, temporary) fork of GStreamer's cross-platform build aggregator Cerbero.

Update: I wrote a new post with detailed steps on how to build using Cerbero and generate Visual Studio project files.

Second update: All this is now upstream, see the upstream Cerbero repository's README

Personally, I really hope that Meson gains widespread adoption. Calling Autotools the Xorg of build systems is flattery. It really is just a terrible system. We really need to invest in something that works for us rather than against us.

PS: If you just want a quick look at what the build system syntax looks like, take a look at this or the basic tutorial.

by Nirbheek (noreply@blogger.com) at April 06, 2019 05:08 AM

Nirbheek Chauhan: Building and Developing GStreamer using Visual Studio

Two months ago, I talked about how we at Centricular have been working on a Meson port of GStreamer and its basic dependencies (glib, libffi, and orc) for various reasons — faster builds, better cross-platform support (particularly Windows), better toolchain support, ease of use, and for a better build system future in general.

Meson also has built-in support for things like gtk-doc, gobject-introspection, translations, etc. It can even generate Visual Studio project files at build time so projects don't have to expend resources maintaining those separately.

Today I'm here to share instructions on how to use Cerbero (our “aggregating” build system) to build all of GStreamer on Windows using MSVC 2015 (wherever possible). Note that this means you won't see any Meson invocations at all because Cerbero does all that work for you.

Note that this is still all unofficial and has not been proposed for inclusion upstream. We still have a few issues that need to be ironed out before we can do that¹.

Update: As of March 2019, all these instructions are obsolete since MSVC support has been merged into upstream Cerbero. I'm leaving this outdated text as-is for archival purposes.

First, you need to set up the environment on Windows by installing a bunch of external tools: Python 2, Python 3, Git, etc. You can find the instructions for that here:

https://github.com/centricular/cerbero#windows

This is very similar to the old Cerbero instructions, but some new tools are needed. Once you've done everything there (Visual Studio especially takes a while to fetch and install itself), the next step is fetching Cerbero:

$ git clone https://github.com/centricular/cerbero.git

This will clone and check out the meson-1.8 branch that will build GStreamer 1.8.x. Next, we bootstrap it:

https://github.com/centricular/cerbero#bootstrap

Now we're (finally) ready to build GStreamer. Just invoke the package command:

python2 cerbero-uninstalled -c config/win32-mixed-msvc.cbc package gstreamer-1.0

This will build all the `recipes` that constitute GStreamer, including the core libraries and all the plugins including their external dependencies. This comes to about 76 recipes. Out of all these recipes, only the following are ported to Meson and are built with MSVC:

bzip2.recipe
orc.recipe
libffi.recipe (only 32-bit)
glib.recipe
gstreamer-1.0.recipe
gst-plugins-base-1.0.recipe
gst-plugins-good-1.0.recipe
gst-plugins-bad-1.0.recipe
gst-plugins-ugly-1.0.recipe

The rest still mostly use Autotools, plain GNU make or cmake. Almost all of these are still built with MinGW. The only exception is libvpx, which uses its custom make-based build system but is built with MSVC.

Eventually we want to build everything including all external dependencies with MSVC by porting everything to Meson, but as you can imagine it's not an easy task. :-)

However, even with just these recipes, there is a large improvement in how quickly you can build all of GStreamer inside Cerbero on Windows. For instance, the time required for building gstreamer-1.0.recipe which builds gstreamer.git went from 10 minutes to 45 seconds. It is now easier to do GStreamer development on Windows since rebuilding doesn't take an inordinate amount of time!

As a further improvement for doing GStreamer development on Windows, for all these recipes (except libffi because of complicated reasons), you can also generate Visual Studio 2015 project files and use them from within Visual Studio for editing, building, and so on.

Go ahead, try it out and tell me if it works for you!

As an aside, I've also been working on some proper in-depth documentation of Cerbero that explains how the tool works, the recipe format, supported configurations, and so on. You can see the work-in-progress if you wish to.

1. Most importantly, the tests cannot be built yet because GStreamer bundles a very old version of libcheck. I'm currently working on fixing that.

by Nirbheek (noreply@blogger.com) at April 06, 2019 05:07 AM

April 03, 2019

Christian Schaller: Preparing for Fedora Workstation 30

(Christian Schaller)

I just installed the Fedora Workstation 30 Beta yesterday and so far things are looking great. As many others have reported too, with the GNOME 3.32 update things definitely feel faster and smoother. So I thought it was a good time to talk about what is coming in Fedora Workstation 30 and what we are currently working on.

Fractional Scaling: One of the big features that landed, although still considered experimental, is the fractional scaling feature, a collaboration between Jonas Ådahl here at Red Hat and Marco Trevisan at Canonical. It has taken quite some time since the initial hackfest as it is a complex task, but we are getting close. Fractional scaling is a critical feature for many HiDPI screen laptops to get a desktop size that perfectly fits their screen, being neither too small nor too large.

Screen sharing support for Chrome and Firefox under Wayland. The Wayland security model doesn’t allow any application to freely grab images or streams of the whole desktop like you could under X. This is of course a huge improvement in security, but it did cause some disruption for valid use cases like screen sharing with things like BlueJeans and Google Hangouts. We've been working on resolving that with the help of PipeWire. We've been at it for some time and things are now coming together. Chrome 73 ships with everything needed to make this work with Chrome, although you have to turn it on manually (go to this URL to turn it on: chrome://flags/#enable-webrtc-pipewire-capturer). The reason it needs to be manually enabled is not that it is unreliable, it is because the UI is still a little fugly due to a combination of feature overlap between the browser and the desktop and also how the security feature of the desktop is done. We are trying to come up with ways for the UI to be smoother without sacrificing your privacy/security. For Firefox we will keep shipping with our downstream patch until we manage to get it landed upstream.

Firefox for Wayland: Martin Stransky has been hard at work making Firefox able to run natively on Wayland. That work is tantalizingly near, but in the end we decided to postpone it to Fedora Workstation 31 to make sure it is really well polished before releasing it upon the world. The advantage of Wayland-native Firefox is that, in addition to bringing us one step closer to not needing to run an X server (XWayland) all the time, it also enables things like the fractional scaling mentioned above to work for Firefox.

OpenH264 improved: As many of you know, Firefox relies on a library called OpenH264, provided by Cisco, for its H264 video codec support for WebRTC. This library is also provided to Fedora users from Cisco free of charge (you can install it through GNOME Software). However, its usefulness has been somewhat limited due to only supporting the baseline profile used for video calling, but not the Main and High profiles used by most online video content. What I can tell you is that Red Hat, Endless and Cisco partnered with Centricular some time ago to add support for decoding those profiles to OpenH264, and that work is now almost complete. The basic code enabling them is already merged, but Jan Schmidt at Centricular is working on fixing a few files that are still giving us problems. As soon as that is generally shipping we hope to get Firefox to be able to use OpenH264 also for things like YouTube playback, and of course also use OpenH264 for playing back any H264 in GStreamer applications like Totem. So a big thank you to Endless, Cisco and Centricular for working with us on this and thus enabling us to have a legal way to offer H264 support to our users.
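As an aside of mine, not from the original post: if you prefer the command line over GNOME Software, installing the Cisco-provided OpenH264 bits looks roughly like this, assuming the usual Fedora repository and package names:

$ sudo dnf config-manager --set-enabled fedora-cisco-openh264
$ sudo dnf install gstreamer1-plugin-openh264 mozilla-openh264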

NVidia binary driver support under Wayland: We have been putting quite a bit of effort into tying off the loose ends for using the NVidia binary driver with Wayland. We did manage to fix a long list of bugs, dealing with various colorspace issues, multi-monitor setups and so on. For Intel and AMD graphics users things should actually be pretty good to go at this point. The last major item holding us back on the NVidia side is full support for using the binary driver with XWayland applications (native Wayland applications should work fine already). Adam Jackson worked diligently to get all the pieces in place and we do think we have a model now that will allow NVidia to provide an updated driver that should enable XWayland. As it stands though, that driver update is likely to only come out towards the fall, so we will keep defaulting to X for NVidia binary driver users for some time more.

Gaming under Wayland. Olivier Fourdan and Jonas Ådahl have been trying to crush any major Wayland bug reported for quite some time now, and one area where we seem to have rounded the corner is games. Valve has been kind enough to give us the ability to install and run any Steam game for testing purposes, so whenever we found a game giving us trouble we have been able to let Olivier and Jonas reproduce it easily. So on my own gaming box I am now able to run all the Steam games I have under Wayland, including those using Proton, without a hitch. We haven’t tested the full Steam catalog of course, there are thousands of titles, so if your favourite game is still giving you trouble under Wayland, please let us know. Talking about gaming, one area we will try to free up some cycles for going forward is Flatpaks and gaming. We have already done quite a bit of work in this area, with things like the NVidia binary driver extension and the Steam package on Flathub. But we know from leading Linux game devs that there are still some challenges to be resolved, like making host device access for gamepads simpler from within the Flatpak sandbox.

Flatpak Creation in Fedora. Owen Taylor has been in charge of getting Flatpaks building in Fedora, ensuring we can produce Flatpaks from Fedora packages. Owen set up a system to track the Fedora Flatpak status; we have about 10 applications so far, but hope to greatly grow that number over time as we polish up the system. This enables us to start planning for shipping some applications in Fedora Workstation as Flatpaks by default in a future release. This repository will be available by default in Fedora Workstation 30 and you can choose the Flatpak version of a package through the new drop-down box in the top right corner of GNOME Software. For now the RPM version of the package is still the default, but we expect to change that in later releases of Fedora Workstation.
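For those who prefer the command line over GNOME Software, the same choice can be made with flatpak directly; a minimal sketch, assuming the Fedora registry remote is named fedora:

$ flatpak remotes                    # should list the fedora remote
$ flatpak install fedora org.gnome.gedit
$ flatpak run org.gnome.gedit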

Gedit in GNOME Software with Source drop down box

Fedora Toolbox – Debarshi Ray is leading the effort we call Fedora Toolbox, which is our starting point for our goal to revitalise and revolutionize development on Linux. Fedora Toolbox is trying to take the model of a pet container for development and make it seamless and natural. Our goal is to make it dead simple to create pet containers for your projects, so you can for instance have a Fedora pet container where you develop against the leading-edge libraries and tools in Fedora, a RHEL-based container where you develop against the library versions and tools shipping in RHEL (which makes updating and fixing applications in production a lot easier), and maybe a SteamOS container to work on your little game project. Currently the model is that you have one pet container per OS you’re targeting, but we are pondering whether having one pet container per project would be even better, if we can find good ways to avoid it being a lot of extra overhead (from, for example, having to re-install all your favourite command line tools in each container) or just outright confusing (which container got what tools and libraries again?). Our goal here though is to ensure Fedora becomes the premier container-native OS out there and thus a natural home for developers doing container development; a sketch of the basic workflow follows below.
We are also working with the team inside Red Hat focusing on AI/ML and trying to ensure that we have a super smooth way for you to get a pet container with things like TensorFlow and CUDA up and running quickly.
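For reference, the basic Fedora Toolbox workflow looks like the following sketch. The command names are those used at the time of writing and may change as the tool evolves:

$ fedora-toolbox create    # build a pet container for the current user
$ fedora-toolbox enter     # get a shell inside it, with your home directory shared
$ sudo dnf install gcc     # installs into the container, not the host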

Being an excellent platform for OpenShift and Kubernetes development: Together with the Red Hat developer tools organization, we are putting effort into bringing the OpenShift, CodeReady Studio and CodeReady Workspaces tools to Fedora. These tools have so far been very focused on RHEL support, but thanks to a Flatpak for CodeReady Studio and web integration for CodeReady Workspaces, we now have a path for making them easily available in Fedora too. In the world of Kubernetes, OpenShift is where you want to be, and we want Fedora Workstation to be the ultimate portal for OpenShift development.

Fleet Commander with Active Directory support – We are about to hit a very major milestone with Fleet Commander, our large-scale desktop management tool for Fedora and RHEL. Oliver Gutierrez has been hard at work making it work with Active Directory in addition to the existing FreeIPA support. We know that a majority of people interested in Fleet Commander are only using Active Directory currently, so being able to use Active Directory with Fleet Commander should make this great tool available to a huge number of new users. So if you are managing a university computer lab or a large number of Fedora or RHEL clients in your company, we should soon have a Fleet Commander release out that you can use. And if you are not using Fedora or RHEL today, well, Fleet Commander is a very big reason for switching over!
We will do a proper announcement with further details once the release with Active Directory support is out.

PipeWire – I don’t have a major development to report, just a lot of steady work being done to stabilize and improve PipeWire. As mentioned earlier we now have Wayland screen sharing and recording working smoothly in the major browsers, which is the user-facing feature I think most of you will notice. Wim is still working on pushing the audio side forward, but that is also a huge task. We have started talking about organizing a new hackfest soon to see if we can accelerate the effort further. The likely scenario at this point in time is that we start enabling the JACK side of PipeWire first, maybe as early as Fedora Workstation 31, and then come back and do the PulseAudio replacement as a last stage.

Improved input handling: Another area we keep focusing on is improving input in Fedora. Peter Hutterer and Benjamin Tissoires are working hard on improving the stack. Peter just sent out an extensive RFC on how to deal with high-resolution mice under Linux, and Benjamin has been trying to get support for the Dell Totem landed. Unfortunately neither will be there for Fedora Workstation 30, but we expect to land this before Fedora Workstation 31.

Flicker-free boot
Hans de Goede has continued working on his effort to create a flicker-free boot experience with Fedora. The results of this work are on display in Fedora Workstation 30 and will for most of you now provide a seamless bootup experience. This effort is not so much about functionality as it is about ensuring you have an end-to-end polished experience with your Linux desktop. Things like the constant mode changes we have seen in the past contribute to giving Linux an image of being unpolished, and we want Fedora to be the vehicle that breaks down that image.

Ramping up Silverblue

For those of you following Fedora you are probably aware of Silverblue, which is our effort to re-think the Linux desktop distribution from the ground up and help us take the Linux desktop to a new level. The distribution model hasn’t really changed much over the last 20 years and we have probably polished up the offering as far as we can within the scope of that model. For instance, I upgraded my system to the Fedora 30 beta yesterday and it was a long and tedious process of watching about 6000 individual packages get updated from the Fedora 29 version to the Fedora 30 version, one by one. I didn’t hit a lot of major snags despite this being a beta, but it is screamingly obvious that updating your operating system in this way is both slow and inherently fragile, as any one of those 6000 packages might hit a problem during the upgrade and leave the system in an unknown state, especially since it’s common for packages to run scripts and similar as part of their upgrade.

Silverblue provides a revolutionary replacement for that process. First of all since it ships as a unified image we make life a lot easier for our QE team who can then test and verify against a single image which is in a known state. This in turn ensures that you as a user can feel confident that the new OS version will not break something on your system. And since the new version is just an image stored on your system next to the old one, upgrading is just about rebooting your system. There is no waiting for individual packages to get upgraded, as everything is already there and ready. Compare it to booting into a different kernel version on Fedora, it is quick and trivial.
And this also means that in the unlikely case that there is a problem with the new OS version you can just as easily go back to the previous version, by rebooting again and choosing to boot into that version. So you basically have instant upgrades with instant rollback if needed.
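On the command line this maps to rpm-ostree, the tool underneath Silverblue; a minimal sketch:

$ rpm-ostree upgrade    # stage the new OS image next to the current one
$ systemctl reboot      # boot into the new deployment
$ rpm-ostree rollback   # if needed, point the system back at the previous one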
We believe this will radically change the way you look at OS upgrades forever, in fact you might almost forget they are happening.

And since Silverblue will basically be a Flatpak (and other containers) only OS you will have a clean delimitation between OS and applications. This means that even if we do major updates to the host, your applications should remain unaffected by the host OS update.
In fact we have some very interesting developments underway for Flatpak, with some major new efforts underway, efforts that I would love to talk about, but they are tied to some major Red Hat announcements that will happen at this year’s Red Hat Summit, which takes place on May 7th – May 9th. So I will leave it as a teaser and let you all know once the Summit is underway and Red Hat’s related major announcements are done.

There is a lot of work happening around Silverblue, and as it happens Matthias Clasen wrote a long blog entry about it today. That post goes into a lot more detail on some of the Silverblue work items we have been doing.

Anyway, I feel really excited about Silverblue, and as we continue to refine the experience and figure out how everything will look in this brave new world I am sure everyone else will get excited too. Silverblue represents the single biggest evolution of the Linux desktop since the original GNOME and KDE releases back in the late nineties. It is not just about trying to tweak the existing experience, but an attempt at taking a big leap forward and providing an operating system that embodies all that we learned over these last 20 years, and a natural home for developers and creators of all kinds in our container-centric computing future. Be sure to grab the Silverblue image of the Fedora 30 beta and give it a test run. I recommend activating the flathub.org repo to get started, in order to get a decent range of applications available. As we move forward we are working hard to ensure that you have the world of applications available out of the box, so there will be no need to go and enable any 3rd-party repositories, but there is some more work that needs to happen before we can do that.
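Activating the Flathub repository is a one-liner; this is the standard command from flathub.org, nothing Silverblue-specific:

$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo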

Summary
So Fedora Workstation 30 is going to be another exciting release, both of the traditional RPM-based Workstation and of Silverblue, and I hope they will encourage even more people to join our rapidly growing Fedora community. Be sure to join us in #fedora-workstation on freenode IRC to talk!

by uraeus at April 03, 2019 07:13 PM

March 28, 2019

Christian SchallerLVFS adopted by Linux Foundation

(Christian Schaller)

Today the announcement went out that the Linux Vendor Firmware Service has become an official Linux Foundation service. For those that don’t know it yet, LVFS is a service that provides firmware updates for your Linux-running hardware, and it was one of our initial efforts, as part of the Fedora Workstation effort, to drain the swamp in terms of making Linux a first-class desktop operating system.
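As a practical illustration (my addition, using fwupd’s standard command line tool), consuming firmware updates from LVFS typically looks like this:

$ fwupdmgr refresh       # fetch the latest metadata from LVFS
$ fwupdmgr get-updates   # list devices with pending firmware updates
$ fwupdmgr update        # download and apply them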

The effort came about due to Peter Jones, who is Red Hat’s representative to the UEFI standards body, approaching me to talk about how Microsoft was pushing for a standardized way to ship UEFI firmware for Windows, and how UEFI being a standard opened a path for us to get full support for this without each vendor having to ship and maintain their own proprietary firmware tools. So we met with Peter Jones and also brought in Richard Hughes, who had already been looking at the problem of firmware updates in Linux, partly due to his ColorHug hardware, and the effort got started with Peter working on the low-level OS tooling and Richard building the service to drive distribution and doing the work to integrate it all into GNOME Software. One concern we had, of course, was whether we could reach critical mass and get vendors interested, but luckily Dell was just as keen on improving firmware handling under Linux as us and signed on from the start. Having Dell onboard helped give the effort a lot of credibility, and as the service matured we ended up having more and more vendors sign up. We also reached out through Red Hat’s partnerships to push vendors to adopt it. As Richard also mentions in his interview about it, we had made the solution as similar to Microsoft’s as possible to lower the threshold for hardware vendors to join, the goal being that if they had done the basic work to support Windows they could more or less just ship the same firmware file to LVFS.

One issue that we had gone back and forth about inside Red Hat was the formal setup of the service. While we all agreed the service was hugely beneficial, it felt like something that should be a shared service for all of Linux, and we felt that if the service was Red Hat-provided it might dissuade other vendors from joining. So we started looking around for a neutral place to land the service, while in the meantime LVFS had a sort of autonomous status, being run as a community effort by Richard Hughes. We ended up talking to Chris Wright, the Red Hat CTO, about the project and he offered to facilitate contact with the Linux Foundation. The initial meetings were very positive and the Linux Foundation seemed interested in running the service right from the start. It did end up taking us quite some time to clear all the formal and technical hurdles to get there, but I for one am very happy to see the LVFS now being a vendor-neutral service provided by the Linux Foundation.

So a big thank you to Richard Hughes, Peter Jones, Chris Wright, Mario Limonciello, Dell, and the Linux Foundation for their help in getting us here. And also a big thank you to Fedora and the Fedora community for providing us a place to develop and polish up this service to the benefit of all. To me this is one of many examples of how Fedora keeps innovating and leading the way on desktop Linux.

by uraeus at March 28, 2019 02:50 PM

March 08, 2019

Bastien NoceraVideos and Books in GNOME 3.32

(Bastien Nocera) GNOME 3.32 will very soon be released, so I thought I'd look back on a few of the things that happened with some of our content applications.

Videos
First, many thanks to Marta Bogdanowicz, Baptiste Mille-Mathias, Ekaterina Gerasimova and Andre Klapper who toiled away at updating Videos' user documentation since 2012, when it was still called “Totem”, and then again in 2014 when “Videos” appeared.

The other major change is that Videos is available, fully featured, from Flathub. It should play your Windows Movie Maker films, your circular wafers of polycarbonate plastic and aluminium, and your Devolver indie films. No more hunting codecs or libraries!
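From the command line, with the Flathub remote already configured, that is simply the following, assuming Videos keeps its org.gnome.Totem application ID:

$ flatpak install flathub org.gnome.Totem
$ flatpak run org.gnome.Totem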

In the process, we also fixed a large number of outstanding issues, such as accommodating the app menu's planned disappearance, moving the audio/video properties tab to Nautilus proper, making the thumbnailer available as an independent module, making the MPRIS plugin work better, and loads, loads more.


Download on Flathub

Books

As Documents was removed from the core release, we felt it was time for Books to become independent. And rather than creating a new package inside a distribution, the Flathub version was updated. We also fixed a bunch of bugs, so that's cool :)
Download on Flathub

Weather

I didn't work directly on Weather, but I made some changes to libgweather which means it should be easier to contribute to its location database.

Adding new cities doesn't require adding a weather station by hand any more; the closest one is picked automatically. And weather stations don't need to be attached to cities either; they were usually attached to villages, sometimes hamlets!

The automatic tests are also more stringent, and test for more things, which should hopefully mean fewer bugs.

And even more Flatpaks

On Flathub, you'll also find some applications I packaged up in the last 6 months. First is Teo, a Thomson computer emulator; then GBE+, a Game Boy emulator focused on accessory emulation; and a way to run your old Flash games offline.

by Bastien Nocera (noreply@blogger.com) at March 08, 2019 07:00 PM

February 27, 2019

Sebastian Pölsterlscikit-survival 0.7 released

This is a long overdue maintenance release of scikit-survival 0.7 that adds compatibility with Python 3.7 and scikit-learn 0.20. For a complete list of changes see the release notes.

Download

Pre-built conda packages are available for Linux, OSX and Windows:

conda install -c sebp scikit-survival

Alternatively, scikit-survival can be installed from source via pip:

pip install -U scikit-survival
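Either way, a quick smoke test confirms the upgrade took; this assumes the package exposes __version__, as most scientific Python packages do:

$ python -c "import sksurv; print(sksurv.__version__)"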

by sebp at February 27, 2019 09:42 PM

GStreamerGStreamer 1.15.2 unstable development release

(GStreamer)

The GStreamer team is pleased to announce the second development release in the unstable 1.15 release series.

The unstable 1.15 release series adds new features on top of the current stable 1.14 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

The unstable 1.15 release series is for testing and development purposes in the lead-up to the stable 1.16 series which is scheduled for release in a few weeks time. Any newly-added API can still change until that point, although it is rare for that to happen.

Check out the draft release notes highlighting all the new features, bugfixes, performance optimizations and other important changes.

Packagers: please note that quite a few plugins and libraries have moved between modules since 1.14, so please take extra care and make sure inter-module version dependencies are such that users can only upgrade all modules in one go, instead of seeing a mix of 1.15 and 1.14 on their system.

Binaries for Android, iOS, Mac OS X and Windows will be provided shortly.

Release tarballs can be downloaded directly here:

February 27, 2019 10:00 AM

February 22, 2019

GStreamerGStreamer Rust bindings 0.13.0 release

(GStreamer)

A new version of the GStreamer Rust bindings, 0.13.0, was released.

This new release is the first to include direct support for implementing GStreamer elements and other types in Rust. Previously this was provided via a different crate.
In addition to this, the new release features many API improvements, cleanups, newly added bindings and bugfixes.

As usual this release follows the latest gtk-rs release, and a new version of GStreamer plugins written in Rust was also released.

Details can be found in the release notes for gstreamer-rs and gstreamer-rs-sys.

The code and documentation for the bindings is available on the freedesktop.org GitLab as well as on crates.io.

If you find any bugs, missing features or other issues please report them in GitLab.

February 22, 2019 03:00 PM

February 13, 2019

Víctor JáquezGenerating a GStreamer-1.14 bundle for TravisCI with Ubuntu/Trusty

To have continuous integration for your multimedia project hosted on GitHub with TravisCI, you may want to compile and run tests against a recent version of GStreamer. Nonetheless, TravisCI mainly offers Ubuntu Trusty as the distribution to deploy in their CI, and that distribution packages GStreamer 1.2, which might be too old for your project’s requirements.

A solution to this problem is to provide TravisCI with your own GStreamer bundle containing the version you want to compile and test against, in this case 1.14. This post is the recipe I followed to generate that GStreamer bundle with GstGL support.

There are three main issues:

  1. The packaged libglib version is too old; we can only hope not to hit an ABI breakage while running the CI.
  2. The packaged ffmpeg version is too old
  3. As we want to compile GStreamer using gst-build, we need a recent version of meson, which requires python3.5, not available in Trusty.

schroot

Old habits die hard: I have used schroot to handle chroot environments without complaints; it handles the bind-mounting of /proc, /sys and all that repetitive stuff that seals the isolation of the chrooted environment.

The debootstrap variant I use is buildd, because it installs the build-essential package.

$ sudo mkdir /srv/chroot/gst-trusty64
$ sudo debootstrap --arch=amd64 --variant=buildd trusty ./gst-trusty64/ http://archive.ubuntu.com/ubuntu
$ sudo vim /etc/schroot/chroot.d/gst

This is the schroot configuration I will use. Please, adapt it to your need.

[gst]
description=Ubuntu Trusty 64-bit for GStreamer
directory=/srv/chroot/gst-trusty64
type=directory
users=vjaquez
root-users=vjaquez
profile=default
setup.fstab=default/vjaquez-home.fstab

I am overriding the default fstab file with a custom one, where the home directory of the vjaquez user points to a clean directory.

$ mkdir -p ~/home-chroot/gst
$ sudo vim /etc/schroot/default/vjaquez-home.fstab
# fstab: static file system information for chroots.
# Note that the mount point will be prefixed by the chroot path
# (CHROOT_PATH)
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/proc           /proc           none    rw,bind         0       0
/sys            /sys            none    rw,bind         0       0
/dev            /dev            none    rw,bind         0       0
/dev/pts        /dev/pts        none    rw,bind         0       0
/home           /home           none    rw,bind         0       0
/home/vjaquez/home-chroot/gst   /home/vjaquez   none    rw,bind 0       0
/tmp            /tmp            none    rw,bind         0       0

configure chroot environment

We will get into the chroot environment as the super user in order to add the required packages. For that purpose we add the universe repository to apt.

  • libglib requires: autotools-dev gnome-pkg-tools libtool libffi-dev libelf-dev libpcre3-dev desktop-file-utils libselinux1-dev libgamin-dev dbus dbus-x11 shared-mime-info libxml2-utils
  • Python requires: libssl-dev libreadline-dev libsqlite3-dev
  • GStreamer requires: bison flex yasm python3-pip libasound2-dev libbz2-dev libcap-dev libdrm-dev libegl1-mesa-dev libfaad-dev libgl1-mesa-dev libgles2-mesa-dev libgmp-dev libgsl0-dev libjpeg-dev libmms-dev libmpg123-dev libogg-dev libopus-dev liborc-0.4-dev libpango1.0-dev libpng-dev libpulse-dev librtmp-dev libtheora-dev libtwolame-dev libvorbis-dev libvpx-dev libwebp-dev pkg-config unzip zlib1g-dev
  • And for general setup: language-pack-en ccache git curl
$ schroot --user root --chroot gst
(gst)# sed -i "s/main$/main universe/g" /etc/apt/sources.list
(gst)# apt update
(gst)# apt upgrade
(gst)# apt --no-install-recommends --no-install-suggests install \
autotools-dev gnome-pkg-tools libtool libffi-dev libelf-dev \
libpcre3-dev desktop-file-utils libselinux1-dev libgamin-dev dbus \
dbus-x11 shared-mime-info libxml2-utils \
libssl-dev libreadline-dev libsqlite3-dev \
language-pack-en ccache git curl bison flex yasm python3-pip \
libasound2-dev libbz2-dev libcap-dev libdrm-dev libegl1-mesa-dev \
libfaad-dev libgl1-mesa-dev libgles2-mesa-dev libgmp-dev libgsl0-dev \
libjpeg-dev libmms-dev libmpg123-dev libogg-dev libopus-dev \
liborc-0.4-dev libpango1.0-dev libpng-dev libpulse-dev librtmp-dev \
libtheora-dev libtwolame-dev libvorbis-dev libvpx-dev libwebp-dev \
pkg-config unzip zlib1g-dev

Finally we create our installation prefix, in this case /opt/gst to avoid contaminating /usr/local, and log out as root.

(gst)# mkdir -p /opt/gst
(gst)# chown vjaquez /opt/gst
(gst)# exit

compile ffmpeg 3.2

Now let’s log in again, but as the unprivileged user, to build the bundle, starting with ffmpeg. Notice that we are using ccache and building out of source.

$ schroot --chroot gst
(gst)$ git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg
(gst)$ cd ffmpeg
(gst)$ git checkout -b work n3.2.12
(gst)$ mkdir build
(gst)$ cd build
(gst)$ ../configure --disable-static --enable-shared \
--disable-programs --enable-pic --disable-doc --prefix=/opt/gst 
(gst)$ PATH=/usr/lib/ccache/:${PATH} make -j8 install

compile glib 2.48

(gst)$ cd ~
(gst)$ git clone https://gitlab.gnome.org/GNOME/glib.git
(gst)$ cd glib
(gst)$ git checkout -b work origin/glib-2-48
(gst)$ mkdir mybuild
(gst)$ cd mybuild
(gst)$ ../autogen.sh --prefix=/opt/gst
(gst)$ PATH=/usr/lib/ccache/:${PATH} make -j8 install

install Python 3.5

Pyenv is a project that automates the installation and execution of multiple versions of Python in the user’s home directory.

(gst)$ curl -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-installer | bash
(gst)$ ~/.pyenv/bin/pyenv install 3.5.0

install meson 0.50

We will install the latest available version of meson in the user’s home directory; that is why PATH is extended and exported.

(gst)$ cd ~
(gst)$ ~/.pyenv/versions/3.5.0/bin/pip3 install --user meson
(gst)$ export PATH=${HOME}/.local/bin:${PATH}

build GStreamer 1.14

PKG_CONFIG_PATH is exported to expose the compiled versions of ffmpeg and glib. Notice that --libdir=lib is passed so the libraries are installed in /opt/gst/lib, in order to avoid the dispersion of pkg-config files.

(gst)$ cd ~/
(gst)$ export PKG_CONFIG_PATH=/opt/gst/lib/pkgconfig/
(gst)$ git clone https://gitlab.freedesktop.org/gstreamer/gst-build.git
(gst)$ cd gst-build
(gst)$ git checkout -b work origin/1.14
(gst)$ meson -Denable_python=false \
-Ddisable_gst_libav=false -Ddisable_gst_plugins_ugly=true \
-Ddisable_gst_plugins_bad=false -Ddisable_gst_devtools=true \
-Ddisable_gst_editing_services=true -Ddisable_rtsp_server=true \
-Ddisable_gst_omx=true -Ddisable_gstreamer_vaapi=true \
-Ddisable_gstreamer_sharp=true -Ddisable_introspection=true \
--prefix=/opt/gst build --libdir=lib
(gst)$ ninja -C build install

test!

(gst)$ cd ~/
(gst)$ LD_LIBRARY_PATH=/opt/gst/lib \
GST_PLUGIN_SYSTEM_PATH=/opt/gst/lib/gstreamer-1.0/ \
/opt/gst/bin/gst-inspect-1.0

And the list of available elements will be shown.

archive the bundle

(gst)$ cd ~/
(gst)$ tar zpcvf gstreamer-1.14-x86_64-linux-gnu.tar.gz -C /opt ./gst

update your .travis.yml

These are the packages you will need to add to run this generated GStreamer bundle:

  • libasound2-plugins
  • libfaad2
  • libfftw3-single3
  • libjack-jackd2-0
  • libmms0
  • libmpg123-0
  • libopus0
  • liborc-0.4-0
  • libpulsedsp
  • libsamplerate0
  • libspeexdsp1
  • libtdb1
  • libtheora0
  • libtwolame0
  • libwayland-egl1-mesa
  • libwebp5
  • libwebrtc-audio-processing-0
  • liborc-0.4-dev
  • pulseaudio
  • pulseaudio-utils

And these are the before_install and before_script targets:

      before_install:
        - curl -L http://server.example/gstreamer-1.14-x86_64-linux-gnu.tar.gz | tar xz
        - sed -i "s;prefix=/opt/gst;prefix=$PWD/gst;g" $PWD/gst/lib/pkgconfig/*.pc
        - export PKG_CONFIG_PATH=$PWD/gst/lib/pkgconfig
        - export GST_PLUGIN_SYSTEM_PATH=$PWD/gst/lib/gstreamer-1.0
        - export GST_PLUGIN_SCANNER=$PWD/gst/libexec/gstreamer-1.0/gst-plugin-scanner
        - export PATH=$PATH:$PWD/gst/bin
        - export LD_LIBRARY_PATH=$PWD/gst/lib:$LD_LIBRARY_PATH

      before_script:
        - pulseaudio --start
        - gst-inspect-1.0 | grep Total

by vjaquez at February 13, 2019 07:43 PM

February 11, 2019

Víctor JáquezReview of Igalia’s Multimedia Activities (2018/H2)

This is the first semiyearly report about Igalia’s activities around multimedia, covering the second half of 2018.

Much of this report was covered in Phil’s talk surveying multimedia development in WebKitGTK and WPE.

WebKit Media Source Extensions (MSE)

MSE is a specification that allows JavaScript to generate media streams for playback in Web browsers that support HTML5 video and audio.

Last semester we upstreamed support for the WebM format in WebKitGTK, with the related patches in GStreamer, particularly in the qtdemux and matroskademux elements.

WebKit Encrypted Media Extensions (EME)

EME is a specification for enabling playback of encrypted content in Web browsers that support HTML5 video.

In a downstream project for WPE WebKit we managed to reach almost full test coverage in the YouTube TV 2018 test suite.

We merged our contributions upstream, in WebKit and GStreamer, at least those that are legal to publish; for example, making demuxers aware of encrypted content and having them send protection events with the initialization data and the encrypted caps, in order to select the decryption key later.

We started to coordinate the upstreaming process of a new implementation of the CDM (Content Decryption Module) abstraction, and there will be further changes to that abstraction.

Lightning talk about the EME implementation in WPE/WebKitGTK at the GStreamer Conference 2018.

WebKit WebRTC

WebRTC consists of several interrelated APIs and real-time protocols to enable Web applications and sites to capture audio, or A/V streams, and exchange them between browsers without requiring an intermediary.

We added GStreamer interfaces to LibWebRTC, to use it for the network part, while using GStreamer for the media capture and processing. All that was upstreamed in 2018 H2.

Thibault described thoroughly the tasks done for this achievement.

Talk about the WebRTC implementation in WPE/WebKitGTK at the Web Engines Hackfest 2018.

Servo/media

Servo is a browser engine written in Rust designed for high parallelization and high GPU usage.

We added basic support for <video> and <audio> media elements in Servo. Later on, we added the GstreamerGL bindings for Rust in gstreamer-rs to render GL textures from the GStreamer pipeline in Servo.

Lightning talk at the GStreamer Conference 2018.

GstWPE

Taking an idea from the GStreamer Conference, we developed a GStreamer source element that wraps WPE. With this source element, it is possible to blend a web page and video in a single video stream; that is, the output of a Web browser (to say, a rendered web page) is used as a video source of a GStreamer pipeline: GstWPE. The element is already merged in the gst-plugins-bad repository.
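A minimal pipeline to try it out could look like this, assuming the element keeps its wpesrc name and location property:

$ gst-launch-1.0 wpesrc location="https://gstreamer.freedesktop.org" ! glimagesink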

Talk about GstWPE in FOSDEM 2019

Demo #1

Demo #2

GStreamer VA-API and gst-MSDK

Last, but not least, we continued helping with the maintenance of GStreamer-VAAPI and gst-msdk, with code reviewing and the ongoing migration of the internal library to GObject.

Other activities

The second half of 2018 was also intense in terms of conferences and hackfests for the team.


Thanks for bearing with us through this blog post, and for keeping our work on your radar.

by vjaquez at February 11, 2019 12:52 PM

February 09, 2019

Sebastian DrögeMPSC Channel API for painless usage of threads with GTK in Rust

(Sebastian Dröge)

A very common question that comes up on IRC or elsewhere by people trying to use the gtk-rs GTK bindings in Rust is how to modify UI state, or more specifically GTK widgets, from another thread.

Due to GTK only allowing access to its UI state from the main thread and Rust actually enforcing this, unlike other languages, this is less trivial than one might expect. To make this as painless as possible, while also encouraging a more robust threading architecture based on message-passing instead of shared state, I’ve added some new API to the glib-rs bindings: An MPSC (multi-producer/single-consumer) channel very similar to (and based on) the one in the standard library but integrated with the GLib/GTK main loop.

While I’ll mostly write about this in the context of GTK here, this can also be useful in other cases when working with a GLib main loop/context from Rust to have a more structured means of communication between different threads than shared mutable state.

This will be part of the next release and you can find some example code making use of it at the very end. But first I’ll take this opportunity to explain why this is not so trivial in Rust, and to present another solution.

Table of Contents

  1. The Problem
  2. One Solution: Safely working around the type system
  3. A better solution: Message passing via channels

The Problem

Let’s consider the example of an application that has to perform a complicated operation and would like to do this from another thread (as it should to not block the UI!) and in the end report back the result to the user. For demonstration purposes let’s take a thread that simply sleeps for a while and then wants to update a label in the UI with a new value.

Naively we might start with code like the following

let label = gtk::Label::new("not finished");
[...]
// Clone the label so we can also have it available in our thread.
// Note that this behaves like an Rc and only increases the
// reference count.
let label_clone = label.clone();
thread::spawn(move || {
    // Let's sleep for 10s
    thread::sleep(time::Duration::from_secs(10));

    label_clone.set_text("finished");
});

This does not compile and the compiler tells us (between a wall of text containing all the details) that the label simply can’t be sent safely between threads. Which is absolutely correct.

error[E0277]: `std::ptr::NonNull<gobject_sys::GObject>` cannot be sent between threads safely
  --> src/main.rs:28:5
   |
28 |     thread::spawn(move || {
   |     ^^^^^^^^^^^^^ `std::ptr::NonNull<gobject_sys::GObject>` cannot be sent between threads safely
   |
   = help: within `[closure@src/bin/basic.rs:28:19: 31:6 label_clone:gtk::Label]`, the trait `std::marker::Send` is not implemented for `std::ptr::NonNull<gobject_sys::GObject>`
   = note: required because it appears within the type `glib::shared::Shared<gobject_sys::GObject, glib::object::MemoryManager>`
   = note: required because it appears within the type `glib::object::ObjectRef`
   = note: required because it appears within the type `gtk::Label`
   = note: required because it appears within the type `[closure@src/bin/basic.rs:28:19: 31:6 label_clone:gtk::Label]`
   = note: required by `std::thread::spawn`

In, e.g., C, this would not be a problem at all: the compiler does not know about GTK widgets, nor that essentially all GTK API is only safely usable from the main thread, and would happily compile the above. It would then be our (the programmer’s) job to ensure that nothing is ever done with the widget from the other thread, other than passing it around. Among other things, it must also not be destroyed from that other thread (i.e. it must never have the last reference to it and then drop it).

One Solution: Safely working around the type system

So why don’t we do the same as we would do in C and simply pass around raw pointers to the label and do all the memory management ourselves? Well, that would defeat one of the purposes of using Rust and would require quite some unsafe code.

We can do better than that and work around Rust’s type system with regards to thread-safety, and instead let the relevant checks (are we only ever using the label from the main thread?) be done at runtime. This allows for completely safe code; it might just panic at any time if we accidentally try to do something from the wrong thread (like calling a function on the label, or dropping it) and not just pass it around.

The fragile crate provides a type called Fragile for exactly this purpose. It’s a wrapper type like Box, RefCell, Rc, etc. but it allows for any contained type to be safely sent between threads and on access does runtime checks if this is done correctly. In our example this would look like this

let label = gtk::Label::new("not finished");
[...]
// We wrap the label clone in the Fragile type here
// and move that into the new thread instead.
let label_clone = fragile::Fragile::new(label.clone());
thread::spawn(move || {
    // Let's sleep for 10s
    thread::sleep(time::Duration::from_secs(10));

    // To access the contained value, get() has
    // to be called and this is where the runtime
    // checks are happening
    label_clone.get().set_text("finished");
});

Not many changes to the code and it compiles… but at runtime we of course get a panic because we’re accessing the label from the wrong thread

thread '<unnamed>' panicked at 'trying to access wrapped value in fragile container from incorrect thread.', ~/.cargo/registry/src/github.com-1ecc6299db9ec823/fragile-0.3.0/src/fragile.rs:57:13

What we instead need to do here is to somehow defer the change of the label to the main thread, and GLib provides various APIs for doing exactly that, e.g. glib::MainContext::invoke() and glib::idle_add(). We’ll make use of the former here, but it’s mostly a matter of taste (and trait bounds: invoke() takes a FnOnce closure, while the closure passed to idle_add() can be called multiple times and therefore only takes an FnMut).

let label = gtk::Label::new("not finished");
[...]
// We wrap the label clone in the Fragile type here
// and move that into the new thread instead.
let label_clone = fragile::Fragile::new(label.clone());
thread::spawn(move || {
    // Let's sleep for 10s
    thread::sleep(time::Duration::from_secs(10));

    // Defer the label update to the main thread.
    // For this we get the default main context,
    // the one used by GTK on the main thread,
    // and use invoke() on it. The closure also
    // takes ownership of the label_clone and drops
    // it at the end. From the correct thread!
    glib::MainContext::default().invoke(move || {
        label_clone.get().set_text("finished");
    });
});

So far so good, this compiles and actually works too. But it feels kind of fragile, and that’s not only because of the name of the crate we use here. The label passed around in different threads is like a landmine only waiting to explode when we use it in the wrong way.

It’s also not very nice because now we conceptually share mutable state between different threads, which is the underlying cause of many thread-safety issues and generally increases the complexity of the software considerably.

Let’s try to do better, Rust is all about fearless concurrency after all.

A better solution: Message passing via channels

As the title of this post probably made clear, the better solution is to use channels to do message passing. That’s also a pattern that is generally preferred in many other languages that focus a lot on concurrency, ranging from Erlang to Go, and is also the recommended way of doing this according to the Rust Book.

So what would this look like? First of all, we would have to create a channel for communicating with our main thread.

As the main thread is running a GLib main loop with its corresponding main context (the loop is the thing that actually is… a loop, and the context is what keeps track of all potential event sources the loop has to handle), we can’t make use of the standard library’s MPSC channel. The Receiver blocks, or we would have to poll at intervals, which is rather inefficient.

The futures MPSC channel doesn’t have this problem but requires a futures executor to run on the thread where we want to handle the messages. While the GLib main context also implements a futures executor and we could actually use it, this would pull in the futures crate and all its dependencies and might seem like too much if we only ever use it for message passing anyway. Otherwise, if you use futures also for other parts of your code, go ahead and use the futures MPSC channel instead. It basically works the same as what follows.

For creating a GLib main context channel, there are two functions available: glib::MainContext::channel() and glib::MainContext::sync_channel(). The latter takes a bound for the channel, after which sending to the Sender part will block until there is space in the channel again. Both return a tuple containing the Sender and Receiver for this channel, and the Sender in particular works exactly like the one from the standard library. It can be cloned, sent to different threads (as long as the message type of the channel can be) and provides basically the same API.

The Receiver works a bit differently, and closer to the for_each() combinator on the futures Receiver. It provides an attach() function that attaches it to a specific main context, and takes a closure that is called from that main context whenever an item is available.

The other part that we need to define on our side then is what the messages sent through the channel should look like. Usually some kind of enum with all the different kinds of messages you want to handle is a good choice; in our case it could also simply be (), as we only have a single kind of message without a payload. But to make it more interesting, let’s add the new string of the label as payload to our messages.

This is how it could look, for example

enum Message {
    UpdateLabel(String),
}
[...]
let label = gtk::Label::new("not finished");
[...]
// Create a new sender/receiver pair with default priority
let (sender, receiver) = glib::MainContext::channel(glib::PRIORITY_DEFAULT);

// Spawn the thread and move the sender in there
thread::spawn(move || {
    thread::sleep(time::Duration::from_secs(10));

    // Sending fails if the receiver is closed
    let _ = sender.send(Message::UpdateLabel(String::from("finished")));
});

// Attach the receiver to the default main context (None)
// and on every message update the label accordingly.
let label_clone = label.clone();
receiver.attach(None, move |msg| {
    match msg {
        Message::UpdateLabel(text) => label_clone.set_text(text.as_str()),
    }

    // Returning false here would close the receiver
    // and have senders fail
    glib::Continue(true)
});

While this is a bit more code than the previous solution, it will also be easier to maintain and generally allows for clearer code.

We keep all our GTK widgets inside the main thread now, threads only get access to a sender over which they can send messages to the main thread and the main thread handles these messages in whatever way it wants. There is no shared mutable state between the different threads here anymore, apart from the channel itself.

by slomo at February 09, 2019 01:25 PM

January 18, 2019

GStreamerGStreamer 1.15.1 unstable development release

(GStreamer)

The GStreamer team is pleased to announce the first development release in the unstable 1.15 release series.

The unstable 1.15 release series adds new features on top of the current stable 1.14 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

The unstable 1.15 release series is for testing and development purposes in the lead-up to the stable 1.16 series which is scheduled for release in a few weeks time. Any newly-added API can still change until that point, although it is rare for that to happen.

Full release notes will be provided in the near future, highlighting all the new features, bugfixes, performance optimizations and other important changes.

Packagers: please note that quite a few plugins and libraries have moved between modules, so please take extra care and make sure inter-module version dependencies are such that users can only upgrade all modules in one go, instead of seeing a mix of 1.15 and 1.14 on their system.

Binaries for Android, iOS, Mac OS X and Windows will be provided shortly.

Release tarballs can be downloaded directly here:

January 18, 2019 12:30 AM

January 16, 2019

Víctor JáquezRust bindings for GStreamerGL: Memoirs

Rust is a great programming language, but the community around it is just amazing. Those are the ingredients for the craft of useful software tools, just like Servo, an experimental browser engine designed for task isolation and high parallelization.

Both projects, Rust and Servo, are funded by Mozilla.

Thanks to Mozilla and Igalia I have the opportunity to work on Servo, adding HTML5 multimedia features to it.

First, with the help of Fernando Jiménez, we finished what my colleague Philippe Normand and Sebastian Dröge (one of my programming heroes) started: a media player in Rust designed to be integrated into Servo. This media player lives in its own crate: servo/media, along with the WebAudio engine. A crate, in Rust jargon, is like a library. This crate is (rather ad hoc) designed to be multimedia-framework agnostic, but the only backend right now is for GStreamer. Later we integrated it into Servo, adding initial support for the audio and video tags.

Currently, servo/media passes, through an IPC channel, an array with the whole frame to render in Servo. This implies at least one copy of the frame in memory, and we would like to avoid that.

For painting and compositing the web content, Servo uses WebRender, a crate designed to use the GPU intensively. Thus, if instead of raw frame data we pass OpenGL textures to WebRender, the performance could be enhanced noticeably.

Luckily, GStreamer already supports the uploading, downloading, painting and composition of video frames as OpenGL textures with the OpenGL plugin and its OpenGL Integration library. Even more, with plugins such as GStreamer-VAAPI, Gst-OMX (OpenMAX), and others, it’s possible to process video without touching the main CPU or its mapped memory on different platforms.
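For instance, this sketch of a pipeline, using stock elements from the OpenGL plugin, keeps raw video frames in GPU memory as GL textures all the way to the display:

$ gst-launch-1.0 videotestsrc ! glupload ! glcolorconvert ! glimagesink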

But there’s a distance between what’s available in GStreamer and what’s available in Rust. Nonetheless, Sebastian has put a lot of effort into the Rust bindings for GStreamer, both for applications and for plugins; sadly, GStreamer’s OpenGL Integration library (GstGL for short) wasn’t covered at that time. So I rolled up my sleeves and got to work on the bindings.

These are the stories of that work.

As GStreamer shares with GTK+ the GObject framework and its introspection mechanism, both projects have collaborated on the required infrastructure to support Rust bindings. Thanks to all the GNOME folks who are working on the intercommunication between Rust and GObject. The quest has been long and complex, since Rust doesn’t map all the object-oriented concepts, and GObject, being a set of practices and software helpers for doing object-oriented programming with C, is not used homogeneously.

The Rubicon that eases the generation of Rust bindings for GObject-based projects is GIR, a tool, written in Rust, that reads gir files, along with metadata in TOML, and outputs two types of bindings: sys and api.

Rust can call external functions through FFI (the foreign function interface), which is just a declaration of a C function with Rust types. But these functions are considered unsafe. The sys bindings are just the exported C functions of the library, organized by the library’s namespace.

The next step is to create a safe and rustified API. These are the api bindings.

As we said, GObject library usage is not fully homogeneous, and even following the introspection annotations, there will be cases where GIR won’t be able to generate the correct bindings. For that reason GIR is constantly evolving, looking for a common way to solve the corner cases that exist in every GObject project. For example, these are my patches in order to generate the GstGL bindings.

The tasks done were:

For this document we assume that the reader has a functional Rust setup and knows the basic concepts.

Clone and build gir

$ cd ~/ws
$ git clone https://github.com/gtk-rs/gir.git
$ cd gir
$ cargo build --release

The reason to build gir in release mode is that otherwise it would be very slow.

For sys bindings.

This kind of bindings is normally straightforward (and unsafe), since they only map the C API to Rust via the FFI mechanism.

$ cd ~/ws
$ git clone https://gitlab.freedesktop.org/gstreamer/gstreamer-rs-sys.git
$ cd gstreamer-rs-sys
$ cp /usr/share/gir-1.0/GstGL-1.0.gir gir-files/
  1. Verify that the gir file is more or less correct
    1. If there is something strange, we should fix the code that generated it.
    2. If that is not possible, the last resort is to fix the gir file directly (it is just XML), not manually but through a script using xmlstarlet. See fix.sh in gtk-rs as an example.
  2. Create the toml file with the metadata required to create the bindings. In other words, this file contains the exceptions, rules and options used by the tool to generate the bindings. See Gir_GstGL.toml in gstreamer-rs-sys as an example. The documentation of the toml file is in gir’s README.md file.
$ ~/ws/gir/target/release/gir -c Gir_GstGL.toml

This command will generate, as specified in the toml file (target_path), a crate in the directory named gstreamer-gl-sys.

Api bindings.

This type of bindings may require more manual work, since their purpose is to offer a rustified API of the library, with all its syntactic sugar, semantics, and so on. But in general terms, the process is similar:

$ cd ~/ws
$ git clone https://gitlab.freedesktop.org/gstreamer/gstreamer-rs.git
$ cd gstreamer-rs
$ cp /usr/share/gir-1.0/GstGL-1.0.gir gir-files/

Again, you may end up applying fixes to the gir file through a fix.sh script using xmlstarlet.

And again, the crafting of the toml file might take a lot of time, by trial and error, cleaning and tidying up the API. See Gir_GstGL.toml in gstreamer-rs as an example.

$ ~/ws/gir/target/release/gir -c Gir_GstGL.toml

A good way to test your bindings is by crafting a test application which shows how to use the API. Personally I devoted a ton of time to the test application for GstGL, but it was worth it. It made me aware of a missing piece in Glutin, the crate used for GL applications in Rust: a way to get the EGLDisplay in use. So I also worked on that and sent a pull request that was recently merged. The sweets of free software development.

Nowadays I’m integrating the GstGL API into servo/media, and later, Servo!

by vjaquez at January 16, 2019 07:42 PM

December 27, 2018

Jean-François Fortin TamMy take on Fizz, a first reasonable mobile phone and data offering in Québec

Vidéotron recently launched a flanker brand, named “Fizz”, to fight the low-cost carriers. The offering is clearly designed for people looking for “the basics, without the stuff we never use”. And for once there is no nonsense with twelve incomprehensible plans: it’s à la carte, and that’s it. Data that goes unused one month is credited to the following month(s), which means you can get by very well on 1-2 GB of data per month, as long as you’re not streaming Games of Thuronesu or the remastered Mysterious Cities of Gold on it. So you can well imagine that I signed up enthusiastically.

It’s the first time in two decades that I have seen a “reasonable” offering on the telecom market in Québec (obviously it doesn’t compare to Europe or Asia, but still…). Given the lack of competition characterizing the telecommunications market in Québec, we are unlikely to get anything much better for a long time. Until recently, I was even considering twisted schemes like “subscribing to a carrier such as T-Mobile in the United States to use it here”, which says a lot…

I have now been using Fizz’s data service for 3 weeks, and overall “it works”. Having taken only data (because for me everything goes through VoIP), I can’t vouch for the reliability of Fizz’s “traditional” calls and SMS. During these last weeks there was one outage (or two), but hey, it’s the beta period, and just having the internet in my pocket at a “decent” price makes me feel like I’m living in the future, okay? ;) I jump every time my VoIP number rings in my pocket while I’m standing in the middle of the street in the freezing cold.

Don’t forget, though, that as of December 2018 the service is still in beta (and you get a reduced price that will be kept in the future as long as you don’t change plans), so it may run into network reliability problems in its early days (don’t blame me! Don’t make all your phone calls depend on your Fizz number, or you may need the moral support of MacOwain¹ ;) … but for a certain part of the population like me, for whom mobile telephony is not a “matter of life or death” and who have been waiting 20 years for a reasonable data offering, it’s a bargain. Thanks to it, I have 4G/LTE access covering Canada and the USA for about $25 a month, and I even route my calls over it with VoIP. It’s great.

If Fizz’s offering interests you and you want a referral code to get a $25 credit on your account, here is mine: 9AOVM (every Fizz subscriber has an invitation code like this one, so don’t go thinking I’m affiliated with Fizz in any way other than being a customer of theirs). Otherwise you can also use my girlfriend’s code: HNKGZ, or the code of one of my friends: 5I21P or V7A53.

N.B.: beware, their signup form is a bit silly: it asks for the referral code when you create the account, but then you can’t do anything until you receive the SIM card and activate it, and only then do you choose your plan, at which point it asks for the referral code a second time… so don’t forget to enter it that second time, when choosing your plan once the SIM card has been received and activated.

¹: Digital pet therapy being entirely relevant in the context of emerging telecommunications, in the event of a Fizz “phone” service outage, I invite you to relax by looking at a few photos of my cute (but oh-so-chubby) corgi, Mac Owain the Conqueror.

P.s.: Did you enjoy this article? You might also like my good old article on VoIP, much longer and more detailed but oh so geeky, entitled “Se libérer de Bell et Vidéotron: passez Go et réclamez 1000$ par année” ;-)


by Jeff at December 27, 2018 09:33 PM