July 15, 2019

Jean-François Fortin Tam: Available for hire, 2019 edition

Hey folks, I’m back and I’m looking for some new work to challenge me—preferably again for an organization that does something good and meaningful for the world. You can read my profile on my website, or keep reading here to discover what I’ve been up to in the past few years.

Sometime after the end of my second term on the GNOME Foundation, I was contacted by a mysterious computer vendor that ships a vanilla GNOME on their laptops, Purism.

A laptop that was sent to me for review

They wanted my help to get their business back on track, and so I did. I began with the easy, low-hanging fruit:

  • Reviewing and restructuring their public-facing content;
  • Doing in-depth technical reviewing of their hardware products, finding industrial design flaws and reporting extensively on ways the products could be improved in future revisions;
  • Using my photo & video studio to shoot official real-world images (some of which you can see below) for use in various marketing collaterals. I also produced and edited videos that played a strong part in increasing the public’s confidence in these products.

As my work was appreciated and I was effectively showing business acumen and leadership across teams & departments, I was shortly afterwards promoted from “director of communications” to CMO.

At the very beginning I had thought it would be a short-lived contract; in practice, my partnership with Purism lasted nearly three years, as I helped the company go from strength to strength. I guess I must’ve done something right 😉

Here are some of the key accomplishments during that time:

Fun designing a professional technical brochure for conferences
  • Grew the business’ gross monthly revenue significantly (by a factor of 10, and up to a factor of 55) over the course of two years.
  • Helped devise and run the Librem 5 phone crowdfunding campaign that raised over US$2.1 million, with significant press coverage. This proved initial market demand and reduced the risk of entering this new market. As the Linux Action Show commented during two of their episodes: “Wow, can we hire this PR department?” “They’ve done such a good job at promoting this!”
  • Made the public-facing brand shine. Over time, converted some of the toughest critics into avid supporters, and turned the company’s name into one that earned trust and commands respect in our industry.
  • Did extensive research of over a hundred events (tradeshows, conferences) aligned with Purism’s business; planned and optimized sponsorships and team attendance to a selection of these events. Designed bespoke brochures, manned product exhibit booths, etc.
  • Leveraged good news, mitigated setbacks, managed customers’ expectations.
  • Devised department budget approximations and projections in preparation for investment growth.
  • Provided support and business experience to the “operations” & “customer support” departments.
  • Defined the marketing department structure and wrote job descriptions for the critical roles to recruit for. The director of sales commented that those were “the best job descriptions [he’d] ever seen, across 50 organizations”, so apparently marketeers can make great recruiting copywriters too 😉
  • Identified many marketing and community management infrastructure issues, oversaw the deployment of solutions.
  • Onboarded members of the sales & bizdev teams so that they could blend into the organization’s culture, tap into tacit knowledge and hit the ground running.
  • Coined the terms “Adaptive Design” and “Adaptive Applications” as a better, more precise terminology for convergent software in the GNOME platform. Yes, I was the team’s ghostwriter at times, and did extensive copy editing to turn technical reports into blog posts that would appeal to a wider audience while satisfying accuracy requirements.
  • Designed public surveys to gauge market demand for specific products in the B2C space, or to assess enterprise products & services requirements in the B2B space.
  • Etc. Etc.

That’s the gist of it.

With all that said, startups face challenges outside the scope of any single department. There comes a moment when your expertise has made all the difference it could in that environment, and it becomes time to conclude the work and seek a new challenge.

After spending a few weeks winding down that project and doing some personal R&D (there were lots of things to take care of in my backlog), I am now officially announcing my availability for hire. Or, in retweetable words:

If you know a business or organization that would benefit from my help, please feel free to share this blog post with them, or to contact me to let me know about opportunities.


by Jeff at July 15, 2019 12:40 PM

July 04, 2019

Jean-François Fortin Tam: Awakening from the lucid dream

Illustration by Guweiz

As I entered the Facility, I could see Nicole sitting on a folding chair and tapping away frantically on her Gemini PDA, in what looked like a very temporary desk setup.

“That looks very uncomfortable!” I remarked.

She didn’t reply, but I didn’t expect a reply. She was clearly too busy and in a rush to get stuff done. Why all the clickety-clack, wasn’t the phone team around, just inside? Oh, she must be dealing with suppliers.

I moved past the tiny entrance lobby and into a slightly larger mess hall where I saw a bunch of people spread around the room, near tables hugging the walls, and having a gathering that seemed a bit like a hackfest “breakroom” discussion, with various people from the outside. At one of the tables, over what seemed like a grid of brainstorming papers, I saw Kat, who was busy with three other people, planning some sort of project sprint.

“Are you using Scrum?” I prompted.

“I am, but I don’t think they are” she answered, heads down in preparation work to assemble a team.

While I was happy to see her familiar face, her presence struck me as odd; what would Kat, a Collabora QA team lead, be doing managing community folks on-site at a Purism facility in California? I would have thought Heather (Purism’s phone project manager) would be around, but I didn’t see her or other recognizable members of the team. Well, probably because I was just passing through a crowd of 20 people spread on tables around a lobby area—a transitional space—set up as an ad-hoc workshop. One of the walls had big windows that let me see into a shipping area and actual meeting rooms. I went to the meeting rooms.

As I entered a rectangular, classroom-like area, I found a hackfest—or rather a BoF session—taking place. In one of the table rows, I was greeted by Zeeshan Ali and Christian Hergert, who were having some idle chat.

As I shook their hands I said, “Hello old friends! Long time no see! Too bad this isn’t real.”

“How do you know this is not real?” Christian replied.

I slapped myself in the face, loudly, a couple of times. “Well,” I said, “even though these slaps feel fairly real, I’m pretty sure that’s just my mind playing a trick on me.”

I knew I was inside a mind construct, because I didn’t know the purpose of my visit nor how I’d got here… I knew I didn’t have the luxury to be flying across the continent on my own to leisurely walk into a hackfest I had no preparation for, and I knew Purism weren’t the ones paying for my trip: they no longer were my client since my last visit at the beginning of the summer.

Christian & Zeeshan were still looking at me, as if waiting to confirm I was lucid enough.

“The matrix isn’t that good,” I sighed.

A couple of seconds later, my eyes opened and I woke up. Looking at the bedside clock, it was 4:45 AM.

“Ugh, I hate to be right sometimes.”

Epilogue

It had been quite a long time since I last had a lucid dream, so I jotted this one down for further analysis. After all, good ol’ Sigmund always said,

“The interpretation of dreams is the royal road to a knowledge of the unconscious activities of the mind.”

…and lo, the dream I had that night was very much fashioned upon the context I have found myself in recently. Having been recently freed from an all-consuming battlefront, I can now take a step back to reposition myself, clear the fog and enjoy the summer after the last seven months of gray and gloomy weather. The coïncidences and parallels to be drawn—between the weather, this dream, and reality—are so many, it’s uncanny.

It is much like a reawakening indeed.

As I looked at all this today, I thought it would make a great introduction for me to resume blogging more regularly. And, well, it’s a fun story to share.

Resurfacing

This summer, as I now have some “personal R&D” time for the first time in many years, I am taking the opportunity to get back on track regarding many aspects of my digital life:

  • sorting out and cleaning up data from various projects;
  • getting back to “Inbox Zero”;
  • upgrading my software & personal systems;
  • reviewing my processes & tools for greater efficiency and effectiveness;
  • taking care of my web infrastructure and online footprint.

As part of this R&D phase, I hope to drain the swamp and start blogging again. Here’s the rough roadmap of what I plan to be writing about:

  1. a statement of availability for hire (or as a friend once said to me in 2014 on the verge of a crowdfunding campaign: “Where have all the cowboys (er, good managers) gone?”);
  2. a retrospective on “What the hell have I been up to for the last few years”;
  3. maybe some stories of organizations outside GNOME or the IT industry, for variety;
  4. lifehacks and technical discoveries I’ve found in that time;
  5. personal productivity tips & food for thought.

If I get to #3, 4, 5 in the coming months, I’ll consider that pretty good. And if there’s something you’d really like to see me write about in particular, feel free to contact me.


by Jeff at July 04, 2019 12:14 AM

June 26, 2019

Andy Wingo: fibs, lies, and benchmarks

(Andy Wingo)

Friends, consider the recursive Fibonacci function, expressed most lovelily in Haskell:

fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

Computing elements of the Fibonacci sequence ("Fibonacci numbers") is a common microbenchmark. Microbenchmarks are like Suzuki exercises for learning violin: not written to be good tunes (good programs), but rather to help you improve a skill.

The fib microbenchmark teaches language implementors to improve recursive function call performance.

I'm writing this article because after adding native code generation to Guile, I wanted to check how Guile was doing relative to other language implementations. The results are mixed. We can start with the most favorable of the comparisons: Guile present versus Guile of the past.


I collected these numbers on my i7-7500U CPU @ 2.70GHz 2-core laptop, with no particular performance tuning, running each benchmark 10 times, waiting 2 seconds between measurements. The bar value indicates the median elapsed time, and above each bar is an overlaid histogram of all results for that scenario. Note that the y axis is on a log scale. The 2.9.3* version corresponds to unreleased Guile from git.

Good news: Guile has been getting significantly faster over time! Over decades, true, but I'm pleased.

where are we? static edition

How good are Guile's numbers on an absolute level? It's hard to say because there's no absolute performance oracle out there. However there are relative performance oracles, so perhaps we can try out some other language implementations.

First up would be the industrial C compilers, GCC and LLVM. We can throw in a few more "static" language implementations as well: compilers that completely translate to machine code ahead-of-time, with no type feedback, and a minimal run-time.
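For reference, the C source being compiled is essentially a direct transcription of the recursion above. Here is a minimal sketch (the exact harness, e.g. how n is chosen and how the result is printed, may differ from what was actually benchmarked):

/* fib.c -- recursive Fibonacci microbenchmark, C version (a sketch). */
#include <stdio.h>
#include <stdlib.h>

static long fib(long n) {
  if (n < 2)
    return n;
  return fib(n - 1) + fib(n - 2);
}

int main(int argc, char **argv) {
  long n = argc > 1 ? atol(argv[1]) : 40;  /* fib(40) is the figure discussed below */
  printf("%ld\n", fib(n));
  return 0;
}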


Here we see that GCC is doing best on this benchmark, completing in an impressive 0.304 seconds. It's interesting that the result differs so much from clang. I had a look at the disassembly for GCC and I see:

fib:
    push   %r12
    mov    %rdi,%rax
    push   %rbp
    mov    %rdi,%rbp
    push   %rbx
    cmp    $0x1,%rdi
    jle    finish
    mov    %rdi,%rbx
    xor    %r12d,%r12d
again:
    lea    -0x1(%rbx),%rdi
    sub    $0x2,%rbx
    callq  fib
    add    %rax,%r12
    cmp    $0x1,%rbx
    jg     again
    and    $0x1,%ebp
    lea    0x0(%rbp,%r12,1),%rax
finish:
    pop    %rbx
    pop    %rbp
    pop    %r12
    retq   

It's not quite straightforward; what's the loop there for? It turns out that GCC inlines one of the recursive calls to fib. The microbenchmark is no longer measuring call performance, because GCC managed to reduce the number of calls. If I had to guess, I would say this optimization doesn't have a wide applicability and is just to game benchmarks. In that case, well played, GCC, well played.

LLVM's compiler (clang) looks more like what we'd expect:

fib:
   push   %r14
   push   %rbx
   push   %rax
   mov    %rdi,%rbx
   cmp    $0x2,%rdi
   jge    recurse
   mov    %rbx,%rax
   add    $0x8,%rsp
   pop    %rbx
   pop    %r14
   retq   
recurse:
   lea    -0x1(%rbx),%rdi
   callq  fib
   mov    %rax,%r14
   add    $0xfffffffffffffffe,%rbx
   mov    %rbx,%rdi
   callq  fib
   add    %r14,%rax
   add    $0x8,%rsp
   pop    %rbx
   pop    %r14
   retq   

Note the two recursive callq fib instructions.

Incidentally, the fib as implemented by GCC and LLVM isn't quite the same program as Guile's version. If the result gets too big, GCC and LLVM will overflow, whereas in Guile we overflow into a bignum. Also in C, it's possible to "smash the stack" if you recurse too much; compilers and run-times attempt to mitigate this danger but it's not completely gone. In Guile you can recurse however much you want. Finally in Guile you can interrupt the process if you like; the compiled code is instrumented with safe-points that can be used to run profiling hooks, debugging, and so on. Needless to say, this is not part of C's mission.
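(To put a number on the overflow point: fib(92) = 7540113804746346429 still fits in a signed 64-bit integer, but fib(93) = 12200160415121876738 does not, so with 64-bit longs the C versions are only exact up to fib(92), whereas Guile just keeps going.)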

Some of these additional features can be implemented with no significant performance cost (e.g., via guard pages). But it's fair to expect that they have some amount of overhead. More on that later.

The other compilers are OCaml's ocamlopt, coming in with a very respectable result; Go, also doing well; and V8 WebAssembly via Node. As you know, you can compile C to WebAssembly, and then V8 will compile that to machine code. In practice it's just as static as any other compiler, but the generated assembly is a bit more involved:


fib_tramp:
    jmp    fib

fib:
    push   %rbp
    mov    %rsp,%rbp
    pushq  $0xa
    push   %rsi
    sub    $0x10,%rsp
    mov    %rsi,%rbx
    mov    0x2f(%rbx),%rdx
    mov    %rax,-0x18(%rbp)
    cmp    %rsp,(%rdx)
    jae    stack_check
post_stack_check:
    cmp    $0x2,%eax
    jl     return_n
    lea    -0x2(%rax),%edx
    mov    %rbx,%rsi
    mov    %rax,%r10
    mov    %rdx,%rax
    mov    %r10,%rdx
    callq  fib_tramp
    mov    -0x18(%rbp),%rbx
    sub    $0x1,%ebx
    mov    %rax,-0x20(%rbp)
    mov    -0x10(%rbp),%rsi
    mov    %rax,%r10
    mov    %rbx,%rax
    mov    %r10,%rbx
    callq  fib_tramp
return:
    mov    -0x20(%rbp),%rbx
    add    %ebx,%eax
    mov    %rbp,%rsp
    pop    %rbp
    retq   
return_n:
    jmp    return
stack_check:
    callq  WasmStackGuard
    mov    -0x10(%rbp),%rbx
    mov    -0x18(%rbp),%rax
    jmp    post_stack_check

Apparently fib compiles to a function of two arguments, the first passed in rsi, and the second in rax. (V8 uses a custom calling convention for its compiled WebAssembly.) The first synthesized argument is a handle onto run-time data structures for the current thread or isolate, and in the function prelude there's a check to see that the function has enough stack. V8 uses these stack checks also to handle interrupts, for when a web page is stuck in JavaScript.

Otherwise, it's a more or less normal function, with a bit more register/stack traffic than would be strictly needed, but pretty good.

do optimizations matter?

You've heard of Moore's Law -- though it doesn't apply any more, it roughly translated into hardware doubling in speed every 18 months. (Yes, I know it wasn't precisely that.) There is a corresponding rule of thumb for compiler land, Proebsting's Law: compiler optimizations make software twice as fast every 18 years. Zow!
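(For a sense of scale: over one 18-year span, the 18-month doubling rule predicts a factor of 2^12 = 4096 from hardware, versus a factor of 2 from compiler optimizations.)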

The previous results with GCC and LLVM were with optimizations enabled (-O3). One way to measure Proebsting's Law would be to compare the results with -O0. Obviously in this case the program is small and we aren't expecting much work out of the optimizer, but it's interesting to see anyway:


Answer: optimizations don't matter much for this benchmark. This investigation does give a good baseline for compilers from high-level languages, like Guile: in the absence of clever trickery like the recursive inlining thing GCC does and in the absence of industrial-strength instruction selection, what's a good baseline target for a compiler? Here we see for this benchmark that it's somewhere between 420 and 620 milliseconds or so. Go gets there, and OCaml does even better.

how is time being spent, anyway?

Might we expect V8/WebAssembly to get there soon enough, or is the stack check that costly? How much time does one stack check take anyway? For that we'd have to determine the number of recursive calls for a given invocation.

Friends, it's not entirely clear to me why this is, but I instrumented a copy of fib, and I found that the number of calls in fib(n) was a more or less constant factor of the result of calling fib. That ratio converges to twice the golden ratio, which means that since fib(n+1) ~= φ * fib(n), then the number of calls in fib(n) is approximately 2 * fib(n+1). I scratched my head for a bit as to why this is and I gave up; the Lord works in mysterious ways.
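The instrumentation can be as simple as a global counter bumped on every entry to fib. Here is a minimal C sketch of that idea (hypothetical; not necessarily the instrumentation actually used):

#include <stdio.h>

static unsigned long calls;

static long fib(long n) {
  calls++;  /* count every call, including the outermost one */
  if (n < 2)
    return n;
  return fib(n - 1) + fib(n - 2);
}

int main(void) {
  long n = 40;
  long result = fib(n);
  /* For n = 40 this reports roughly 3.31e8 calls, as discussed below. */
  printf("fib(%ld) = %ld after %lu calls\n", n, result, calls);
  return 0;
}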

Anyway for fib(40), that means that there are around 3.31e8 calls, absent GCC shenanigans. So that would indicate that each call for clang takes around 1.27 ns, which at turbo-boost speeds on this machine is 4.44 cycles. At maximum throughput (4 IPC), that would indicate 17.8 instructions per call, and indeed on the n > 2 path I count 17 instructions.
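(Spelling out the arithmetic: 3.31e8 calls at 1.27 ns each comes to about 0.42 s of total run time for clang, and 1.27 ns at the i7-7500U's roughly 3.5 GHz turbo clock is about 4.4 cycles.)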

For WebAssembly I calculate 2.25 nanoseconds per call, or 7.9 cycles, or 31.5 (fused) instructions at max IPC. And indeed counting the extra jumps in the trampoline, I get 33 instructions on the recursive path. I count 4 instructions for the stack check itself, one to save the current isolate, and two to shuffle the current isolate into place for the recursive calls. But, compared to clang, V8 puts 6 words on the stack per call, as opposed to only 4 for LLVM. I think with better interprocedural register allocation for the isolate (i.e.: reserve a register for it), V8 could get a nice boost for call-heavy workloads.

where are we? dynamic edition

Guile doesn't aim to replace C; it's different. It has garbage collection, an integrated debugger, and a compiler that's available at run-time, and it is dynamically typed. It's perhaps more fair to compare to languages that have some of these characteristics, so I ran these tests on versions of recursive fib written in a number of languages. Note that all of the numbers in this post include start-up time.


Here, the ocamlc line is the same as before, but using the bytecode compiler instead of the native compiler. It's a bit of an odd thing to include but it performs so well I just had to include it.

I think the real takeaway here is that Chez Scheme has fantastic performance. I have not been able to see the disassembly -- does it do the trick like GCC does? -- but the numbers are great, and I can see why Racket decided to rebase its implementation on top of it.

Interestingly, as far as I understand, Chez implements stack checks in the straightforward way (an inline test-and-branch), not with a guard page, and instead of using the stack check as a generic ability to interrupt a computation in a timely manner as V8 does, Chez emits a separate interrupt check. I would like to be able to see Chez's disassembly but haven't gotten around to figuring out how yet.

Since I originally published this article, I added a LuaJIT entry as well. As you can see, LuaJIT performs as well as Chez in this benchmark.

Haskell's call performance is surprisingly bad here, beaten even by OCaml's bytecode compiler; is this the cost of laziness, or just a lacuna of the implementation? I do not know. I do have this mental image that Haskell has a good compiler, but apparently if that's the standard, so is Guile :)

Finally, in this comparison section, I was not surprised by cpython's relatively poor performance; we know cpython is not fast. I think though that it just goes to show how little these microbenchmarks are worth when it comes to user experience; like many of you I use plenty of Python programs in my daily work and don't find them slow at all. Think of micro-benchmarks like x-ray diffraction; they can reveal the hidden substructure of DNA but they say nothing at all about the organism.

where to now?

Perhaps you noted that in the last graph, the Guile and Chez lines were labelled "(lexical)". That's because instead of running this program:

(define (fib n)
  (if (< n 2)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))

They were running this, instead:

(define (fib n)
  (define (fib* n)
    (if (< n 2)
        n
        (+ (fib* (- n 1)) (fib* (- n 2)))))
  (fib* n))

The thing is, historically, Scheme programs have treated top-level definitions as being mutable. This is because you don't know the extent of the top-level scope -- there could always be someone else who comes and adds a new definition of fib, effectively mutating the existing definition in place.

This practice has its uses. It's useful to be able to go in to a long-running system and change a definition to fix a bug or add a feature. It's also a useful way of developing programs, to incrementally build the program bit by bit.


But, I would say that as someone who has written and maintained a lot of Scheme code, it's not a normal occurrence to mutate a top-level binding on purpose, and it has a significant performance impact. If the compiler knows the target of a call, that unlocks a number of important optimizations: type check elision on the callee, more optimal closure representation, smaller stack frames, possible contification (turning calls into jumps), argument and return value count elision, representation specialization, and so on.

This overhead is especially egregious for calls inside modules. Scheme-the-language only gained modules relatively recently -- relative to the history of Scheme -- and one of the aspects of modules is precisely to allow reasoning about top-level module-level bindings. This is why running Chez Scheme with the --program option is generally faster than --script (which I used for all of these tests): it opts in to the "newer" specification of what a top-level binding is.

In Guile we would probably like to move towards a more static way of treating top-level bindings, at least those within a single compilation unit. But we haven't done so yet. It's probably the most important single optimization we can make over the near term, though.

As an aside, it seems that LuaJIT also shows a similar performance differential for local function fib(n) versus just plain function fib(n).

It's true though that even absent lexical optimizations, top-level calls can be made more efficient in Guile. I am not sure if we can reach Chez with the current setup of having a template JIT, because we need two return addresses: one virtual (for bytecode) and one "native" (for JIT code). Register allocation is also something to improve but it turns out to not be so important for fib, as there are few live values and they need to spill for the recursive call. But, we can avoid some of the indirection on the call, probably using an inline cache associated with the callee; Chez has had this optimization since 1984!

what guile learned from fib

This exercise has been useful to speed up Guile's procedure calls, as you can see for the difference between the latest Guile 2.9.2 release and what hasn't been released yet (2.9.3).

To decide what improvements to make, I extracted the assembly that Guile generated for fib to a standalone file, and tweaked it in a number of ways to determine what the potential impact of different scenarios was. Some of the detritus from this investigation is here.

There were three big performance improvements. One was to avoid eagerly initializing the slots in a function's stack frame; this took a surprising amount of run-time. Fortunately the rest of the toolchain like the local variable inspector was already ready for this change.

Another thing that became clear from this investigation was that our stack frames were too large; there was too much memory traffic. I was able to improve this in the lexical-call case by adding an optimization to elide useless closure bindings. Usually in Guile when you call a procedure, you pass the callee as the 0th parameter, then the arguments. This is so the procedure has access to its closure. For some "well-known" procedures -- procedures whose callers can be enumerated -- we optimize to pass a specialized representation of the closure instead ("closure optimization"). But for well-known procedures with no free variables, there's no closure, so we were just passing a throwaway value (#f). An unhappy combination of Guile's current calling convention being stack-based and a strange outcome from the slot allocator meant that frames were a couple words too big. Changing to allow a custom calling convention in this case sped up fib considerably.

Finally, and also significantly, Guile's JIT code generation used to manually handle calls and returns via manual stack management and indirect jumps, instead of using the platform calling convention and the C stack. This is to allow unlimited stack growth. However, it turns out that the indirect jumps at return sites were stalling the pipeline. Instead we switched to use call/return but keep our manual stack management; this allows the CPU to use its return address stack to predict return targets, speeding up code.

et voilà

Well, long article! Thanks for reading. There's more to do but I need to hit the publish button and pop this off my stack. Until next time, happy hacking!

by Andy Wingo at June 26, 2019 10:34 AM

June 24, 2019

GStreamer: GStreamer Rust bindings 0.14.0 release

(GStreamer)

A new version of the GStreamer Rust bindings, 0.14.0, was released.

Apart from updating to GStreamer 1.16, this release is mostly focussed on adding more bindings for various APIs and general API cleanup and bugfixes.
The most notable API additions in this release are bindings for gst::Memory and gst::Allocator as well as bindings for gst_base::BaseParse and gst_video::VideoDecoder and VideoEncoder. The latter also come with support for implementing subclasses, and the gst-plugins-rs module contains a video decoder and parser (for CDG), and a video encoder (for AV1) based on this.

As usual this release follows the latest gtk-rs release, and a new version of the GStreamer plugins written in Rust was also released.

Details can be found in the release notes for gstreamer-rs and gstreamer-rs-sys.

The code and documentation for the bindings is available on the freedesktop.org GitLab as well as on crates.io.

If you find any bugs, missing features or other issues please report them in GitLab.

June 24, 2019 08:00 PM

Christian Schaller: On the Road to Fedora Workstation 31

(Christian Schaller)

So I hope everyone is enjoying Fedora Workstation 30, but we don’t rest on our laurels here, so I thought I’d share some of the things we are working on for Fedora Workstation 31. This is not an exhaustive list, but some of the more major items we are working on.

Wayland – Our primary focus is still on finishing the Wayland transition and we feel we are getting close now, and thank you to the community for their help in testing and verifying Wayland over the last few years. The single biggest goal currently is fully removing our X Windowing System dependency, meaning that GNOME Shell should be able to run without needing XWayland. For those wondering why that has taken so much time, well it is simple; for 20 years developers could safely assume we were running atop X. So refactoring everything to remove any code that assumes it is running on top of X.org has been a major effort. The work is mostly done now for the shell itself, but there are a few items left in regards to the GNOME Settings daemon where we need to remove the X dependency. Olivier Fourdan is working on removing those settings daemon bits as part of his work to improve the Wayland accessibility support. We are optimistic that we can declare this work done within a GNOME release or two, so GNOME 3.34 or maybe 3.36. Once that work is complete an X server (XWayland) would only be started if you actually run an X application, and when you shut that application down the X server will be shut down too.

Wayland logo

Wayland Graphics


Another change that Hans de Goede is working on at the moment is allowing X applications to be run as root under XWayland. In general running desktop apps as root isn’t considered advisable from a security point of view, but since it always worked under X we feel it should continue to be there for XWayland too. This should fix a few applications out there which only work when run as root currently. One last item Hans de Goede is looking at is improving SDL’s Wayland support in regards to how it deals with scaling of lower resolution games. Thanks to the great effort by Valve and others we have a huge catalog of games available under Linux now and we want to ensure that those keep running and run well. So we will work with the SDL devs to come up with a solution here; we just don’t know the exact shape and form the solution will take yet, so stay tuned.

Finally there is the NVidia binary driver support question. You can run a native Wayland session on top of the binary driver and you have had that ability for a very long time. Unfortunately there has been no support for the binary driver in XWayland, and thus X applications (of which there are a lot) would not get any HW accelerated 3D graphics support. Adam Jackson has worked on letting XWayland load the binary NVidia x.org driver and we are now waiting on NVidia to review that work and hopefully be able to update their driver to support it.

Once we are done with this we expect X.org to go into hard maintenance mode fairly quickly. The reality is that X.org is basically maintained by us, and thus once we stop paying attention to it there are unlikely to be any major new releases coming out and there might even be some bitrot setting in over time. We will keep an eye on it as we will want to ensure X.org stays supportable until the end of the RHEL8 lifecycle at a minimum, but let this be a friendly notice for everyone who relies on the work we do maintaining the Linux graphics stack: get onto Wayland, that is where the future is.

PipeWire – Wim Taymans keeps improving the core features of PipeWire, as we work step by step to be ready to replace Jack and PulseAudio. He has recently been focusing on improving existing features like the desktop sharing portal together with Jonas Adahl, and we are planning a hackfest for Wayland in the fall; the current plan is to do it around the All Systems Go conference in Berlin, but due to some scheduling conflicts by some of our core stakeholders we might need to reschedule it to a little later in the fall.
A new user for the desktop sharing portal is the new Miracast support that Benjamin Berg has been steadily working on. The Miracast support is shaping up and you can grab the Network Displays test client from his COPR repository while he is working to get the package into Fedora proper. We would appreciate further user testing and feedback, as we know there are definitely devices out there where things do not work properly and identifying them is the first step to figuring out how to make our support in the desktop more robust. Eventually we want to make the GNOME integration even more seamless than the standalone app, but for early testing and polish it does the trick. If you are interested in contributing, the code is hosted here on GitHub.

Network Display

Network Display application using Miracast

Btw, you still need to set the enable Pipewire flag in Chrome to get the Pipewire support (chrome://flags). So I included a screenshot here to show you where to go in the browser and what the key is called:

Chrome Pipewire Flag

Chrome Pipewire Flag

Flatpak – Work on Flatpak in Fedora is continuing. Current focus is on improving the infrastructure for building Flatpaks from RPMs and automating what we can. This is prerequisite work for eventually starting to ship some applications as Flatpaks by default and eventually shipping all applications as Flatpaks by default. We are also working on setting things up so that we can offer applications from flathub.io and quay.io out of the box and in accordance with Fedora rules for 3rd party software. We are also making progress on making a Red Hat UBI based runtime available. This means that as a 3rd party developer you can use that to build your applications on top of and be certain that it will stay around and be supported by Red Hat for the lifetime of a given RHEL release, which means around 10 years. This frees you up as a developer to update your application at your own pace as opposed to having to chase more short-lived runtimes. It will also ensure that your application can be certified for RHEL, which gives you access to all our workstation customers in addition to Fedora and all other distros.

Fedora Toolbox – Work is progressing on the Fedora Toolbox, our tool for making working with pet containers feel simple and straightforward. Debarshi Ray is currently looking at improvements to GNOME Terminal that will ensure that you get a more natural behaviour inside the terminal when interacting with pet containers, for instance ensuring that if you have a terminal open to a pet container and create a new tab, that tab will also be inside the container instead of pointing at the host. We are also working on finding good ways to make the selection of containers more discoverable, so that you can more easily get access to a Red Hat UBI container or a Red Hat TensorFlow container for instance. There will probably be a bit of a slowdown in terms of new toolbox features soon though, as we are going to rewrite it to make it more maintainable. The current implementation is a giant shell script, but the new version will most likely be written in Go (so that we can more easily integrate with the other container libraries and tools out there, mostly written in Go).

Fedora Toolbox

Fedora Toolbox in action

GNOME Classic – We have had Classic mode available in GNOME and Fedora for a long time, but we recently decided to give it a second look and try to improve the experience. So Allan Day reviewed the experience and we decided to make it a more pure GNOME 2 style experience by dropping the overview completely when you run classic mode.
We have also invested time and effort on improving the Classic mode workspace switcher to make life better for people who use a very workspace centric workflow. The goal of the improvements is to make the Classic mode workspace switcher more feature complete and also ensure that it can work with standard GNOME 3 in addition to Classic mode. We know this will greatly improve the experience for many of our users and at the same time hopefully let new people switch to Fedora and GNOME to get the advantage of all the other great improvements we are bringing to Linux and the Linux desktop.

Sysprof & performance – We have had a lot of focus in the community on improving GNOME Shell performance. Our particular focus has been on doing some major re-architecting of core subsystems that was needed to make some of the performance improvements you have seen even possible. And lately Christian Hergert has been working on improving our tooling for profiling the desktop, to let our developers more easily see exactly where in the stack bottlenecks are and what is causing them. Be sure to read Christian’s blog for further details about sysprof and friends.

Fleet Commander – our tool for configuring large deployments of Fedora and RHEL desktops should have a release out very soon that can work with Active Directory as your LDAP server. We know a lot of RHEL and Fedora desktop users are part of bigger organizations where Linux users are a minority and thus Active Directory is being deployed in the organization. With this new release Fleet Commander can be run using Active Directory or FreeIPA as the directory server and thus a lot of organizations who previously could not easily deploy Fleet Commander can now take advantage of this powerful tool. Next step for Fleet Commander after that is finishing off some loose ends in terms of our Firefox support and also ensuring that you can easily configure GNOME Shell extensions with Fleet Commander. We know a lot of our customers and users are deploying one or more GNOME Shell extensions for their desktop so we want to ensure Fleet Commander can help you do that efficiently across your whole fleet of systems.

Fingerprint support – We have been working closely with our hardware partners to bring proper fingerprint reader support to Linux. Bastien Nocera worked on cleaning up the documentation of fprint and making sure there is good sample code, and our hardware partners then worked with their suppliers to ensure they provided drivers conforming to the spec for hardware supplied to them. So there are new drivers for Synaptics fingerprint readers coming out soon thanks to this effort. We are not stopping there though; Benjamin Berg is continuing the effort to improve the infrastructure for Linux fingerprint reader support, making sure we can support in-device storage of fingerprints for instance.

Fingerprint image

Fingerprint readers now better supported

Gamemode – Christian Kellner has been contributing a bit to gamemode recently, working to make it more secure and also ensure that it can work well with games packaged as Flatpaks. So if you play Linux games, especially those from Feral Interactive, and want to squeeze some extra performance from your system, make sure to install gamemode on your Fedora system.

Dell Totem support – Red Hat has a lot of customers in the fields of animation and CAD/CAM systems. Due to this Benjamin Tissoires and Peter Hutterer have been working with Dell on enabling their Totem input device for a while now. That work is now coming to a close, with the Totem support shipping in the latest libinput version and the kernel side of things having been merged some time ago. You can get the full details from Peter’s blog about the Dell Totem.

Dell Totem

The Dell Totem input device

Media codec support – So the OpenH264 2.0 release is out from Cisco now and Kalev Lember has been working to get the Fedora packages updated. This is a crucial release as it includes the support for Main and High profile that I mentioned in an earlier blog post. That work happened due to a collaboration between Cisco, Endless, Red Hat and Centricular, with Jan Schmidt at Centricular doing the work implementing support for these two profiles. This work makes OpenH264 a lot more useful as it now supports playing back most files found in the wild, and we have been working to ensure it can be used for general playback in Firefox. At the same time Wim Taymans is working to fix some audio quality issues in the AAC implementation we ship, so we should soon have both a fully working H264 decoder/encoder in Fedora and a fully functional AAC decoder/encoder. We are ready to ship support for MPEG2 video too, but are still trying to figure out the details of the implementation. Beyond that we don’t have any further plans around codecs at the moment as we feel that with H264, MPEG2 video, AAC, mp3 and AC3 we have the most critical ones covered, alongside the growing family of great free codecs such as VP9, Opus and AV1. We might take a look at the status of things like Windows Media and DivX at some point, but it is not anywhere close to the top of our priority list currently.

by uraeus at June 24, 2019 03:52 PM

June 12, 2019

GStreamer: GStreamer Conference 2019 announced to take place in Lyon, France

(GStreamer)

The GStreamer project is happy to announce that this year's GStreamer Conference will take place on Thursday-Friday 31 October - 1 November 2019 in Lyon, France.

You can find more details about the conference on the GStreamer Conference 2019 web site.

A call for papers will be sent out in due course. Registration will open at a later time. We will announce those and any further updates on the gstreamer-announce mailing list, the website, and on Twitter.

Talk slots will be available in varying durations from 20 minutes up to 45 minutes. Whatever you're doing or planning to do with GStreamer, we'd like to hear from you!

We also plan to have sessions with short lightning talks / demos / showcase talks for those who just want to show what they've been working on or do a mini-talk instead of a full-length talk. Lightning talk slots will be allocated on a first-come-first-serve basis, so make sure to reserve your slot if you plan on giving a lightning talk.

There will also be a social event again on Thursday evening.

There are also plans to have a hackfest the weekend right after the conference on 2-3 November 2019.

We hope to see you in Lyon, and please spread the word!

June 12, 2019 02:00 PM

June 03, 2019

Andy Wingo: pictie, my c++-to-webassembly workbench

(Andy Wingo)

Hello, interwebs! Today I'd like to share a little skunkworks project with y'all: Pictie, a workbench for WebAssembly C++ integration on the web.

JavaScript disabled, no pictie demo. See the pictie web page for more information.

wtf just happened????!?

So! If everything went well, above you have some colors and a prompt that accepts Javascript expressions to evaluate. If the result of evaluating a JS expression is a painter, we paint it onto a canvas.

But allow me to back up a bit. These days everyone is talking about WebAssembly, and I think with good reason: just as many of the world's programs run on JavaScript today, tomorrow many of them will also be written in languages compiled to WebAssembly. JavaScript isn't going anywhere, of course; it's around for the long term. It's the "also" aspect of WebAssembly that's interesting, that it appears to be a computing substrate that is compatible with JS and which can extend the range of the kinds of programs that can be written for the web.

And yet, it's early days. What are programs of the future going to look like? What elements of the web platform will be needed when we have systems composed of WebAssembly components combined with JavaScript components, combined with the browser? Is it all going to work? Are there missing pieces? What's the status of the toolchain? What's the developer experience? What's the user experience?

When you look at the current set of applications targeting WebAssembly in the browser, mostly it's games. While compelling, games don't provide a whole lot of insight into the shape of the future web platform, inasmuch as there doesn't have to be much JavaScript interaction when you have an already-working C++ game compiled to WebAssembly. (Indeed, people are actively working on removing much of the incidental interaction with JS that is currently necessary -- bouncing through JS in order to call WebGL -- so that WebAssembly can call platform facilities (WebGL, etc) directly. But I digress!)

For WebAssembly to really succeed in the browser, there should also be incremental stories -- what does it look like when you start to add WebAssembly modules to a system that is currently written mostly in JavaScript?

To find out the answers to these questions and to evaluate potential platform modifications, I needed a small, standalone test case. So... I wrote one? It seemed like a good idea at the time.

pictie is a test bed

Pictie is a simple, standalone C++ graphics package implementing an algebra of painters. It was created not to be a great graphics package but rather to be a test-bed for compiling C++ libraries to WebAssembly. You can read more about it on its github page.

Structurally, pictie is a modern C++ library with a functional-style interface, smart pointers, reference types, lambdas, and all the rest. We use emscripten to compile it to WebAssembly; you can see more information on how that's done in the repository, or check the README.

Pictie is inspired by Peter Henderson's "Functional Geometry" (1982, 2002). "Functional Geometry" inspired the Picture language from the well-known Structure and Interpretation of Computer Programs computer science textbook.

prototype in action

So far it's been surprising how much stuff just works. There's still lots to do, but just getting a C++ library on the web is pretty easy! I advise you to take a look to see the details.

If you are thinking of dipping your toe into the WebAssembly water, maybe take a look also at Pictie when you're doing your back-of-the-envelope calculations. You can use it or a prototype like it to determine the effects of different compilation options on compile time, load time, throughput, and network traffic. You can check if the different binding strategies are appropriate for your C++ idioms; Pictie currently uses embind (source), but I would like to compare to WebIDL as well. You might also use it if you're considering what shape your C++ library should have to have a minimal overhead in a WebAssembly context.

I use Pictie as a test-bed when working on the web platform: for the weakref proposal which adds finalization, for leak detection, and when working on the binding layers around Emscripten. Eventually I'll be able to use it in other contexts as well, with the WebIDL bindings proposal, typed objects, and GC.

prototype the web forward

As the browser and adjacent environments have come to dominate programming in practice, we lost a bit of the delightful variety from computing. JS is a great language, but it shouldn't be the only medium for programs. WebAssembly is part of this future world, waiting in potentia, where applications for the web can be written in any of a number of languages. But, this future world will only arrive if it "works" -- if all of the various pieces, from standards to browsers to toolchains to virtual machines, only if all of these pieces fit together in some kind of sensible way. Now is the early phase of annealing, when the platform as a whole is actively searching for its new low-entropy state. We're going to need a lot of prototypes to get from here to there. In that spirit, may your prototypes be numerous and soon replaced. Happy annealing!

by Andy Wingo at June 03, 2019 10:10 AM

May 31, 2019

GStreamer: GStreamer 1.14.5 stable bug fix release

(GStreamer)

The GStreamer team is pleased to announce another bug fix release in the old stable 1.14 release series of your favourite cross-platform multimedia framework.

This release only contains bugfixes and it should be safe to update from 1.14.x.

The 1.14 series has now been superseded by the new stable 1.16 series, and we recommend you upgrade at your earliest convenience.

See /releases/1.14/ for the details.

Binaries for Android, iOS, Mac OS X and Windows will be made available soon.

Download tarballs directly here: gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-sharp, gstreamer-vaapi, or gst-omx.

May 31, 2019 11:00 PM

May 24, 2019

Andy Wingo: lightening run-time code generation

(Andy Wingo)

The upcoming Guile 3 release will have just-in-time native code generation. Finally, amirite? There's lots that I'd like to share about that and I need to start somewhere, so this article is about one piece of it: Lightening, a library to generate machine code.

on lightning

Lightening is a fork of GNU Lightning, adapted to suit the needs of Guile. In fact at first we chose to use GNU Lightning directly, "vendored" into the Guile source repository via the git subtree mechanism. (I see that in the meantime, git gained a kind of a subtree command; one day I will have to figure out what it's for.)

GNU Lightning has lots of things going for it. It has support for many architectures, even things like Itanium that I don't really care about but which a couple Guile users use. It abstracts the differences between e.g. x86 and ARMv7 behind a common API, so that in Guile I don't need to duplicate the JIT for each back-end. Such an abstraction can have a slight performance penalty, because maybe it missed the opportunity to generate optimal code, but this is acceptable to me: I was more concerned about the maintenance burden, and GNU Lightning seemed to solve that nicely.

GNU Lightning also has fantastic documentation. It's written in C and not C++, which is the right thing for Guile at this time, and it's also released under the LGPL, which is Guile's license. As it's a GNU project there's a good chance that GNU Guile's needs might be taken into account if any changes need be made.

I mentally associated Paolo Bonzini with the project, who I knew was a good no-nonsense hacker, as he used Lightning for a smalltalk implementation; and I knew also that Matthew Flatt used Lightning in Racket. Then I looked in the source code to see architecture support and was pleasantly surprised to see MIPS, POWER, and so on, so I went with GNU Lightning for Guile in our 2.9.1 release last October.

on lightening the lightning

When I chose GNU Lightning, I had in mind that it was a very simple library to cheaply write machine code into buffers. (Incidentally, if you have never worked with this stuff, I remember a time when I was pleasantly surprised to realize that an assembler could be a library and not just a program that processes text. A CPU interprets machine code. Machine code is just bytes, and you can just write C (or Scheme, or whatever) functions that write bytes into buffers, and pass those buffers off to the CPU. Now you know!)
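To make that concrete, here is about the smallest illustration of the idea I can write: bytes for an x86-64 function that returns 42, copied into an executable buffer and called. This is an illustrative Linux/x86-64 sketch only, unrelated to Lightening's actual API, and hardened systems may refuse a mapping that is both writable and executable:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
  /* mov eax, 42 ; ret */
  unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

  /* Ask the kernel for a small buffer we can both write to and execute. */
  void *buf = mmap(NULL, sizeof code, PROT_READ | PROT_WRITE | PROT_EXEC,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (buf == MAP_FAILED)
    return 1;
  memcpy(buf, code, sizeof code);

  /* Treat the buffer as a function and call it. */
  int (*fn)(void) = (int (*)(void))buf;
  printf("%d\n", fn());  /* prints 42 */
  return 0;
}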

Anyway indeed GNU Lightning 1.4 or so was that very simple library that I had in my head. I needed simple because I would need to debug any problems that came up, and I didn't want to add more complexity to the C side of Guile -- eventually I should be migrating this code over to Scheme anyway. And, of course, simple can mean fast, and I needed fast code generation.

However, GNU Lightning has a new release series, the 2.x series. This series is a rewrite in a way of the old version. On the plus side, this new series adds all of the weird architectures that I was pleasantly surprised to see. The old 1.4 didn't even have much x86-64 support, much less AArch64.

This new GNU Lightning 2.x series fundamentally changes the way the library works: instead of having a jit_ldr_f function that directly emits code to load a float from memory into a floating-point register, the jit_ldr_f function now creates a node in a graph. Before code is emitted, that graph is optimized, some register allocation happens around call sites and for temporary values, dead code is elided, and so on, then the graph is traversed and code emitted.

Unfortunately this wasn't really what I was looking for. The optimizations were a bit opaque to me and I just wanted something simple. Building the graph took more time than just emitting bytes into a buffer, and it takes more memory as well. When I found bugs, I couldn't tell whether they were related to my usage or in the library itself.

In the end, the node structure wasn't paying its way for me. But I couldn't just go back to the 1.4 series that I remembered -- it didn't have the architecture support that I needed. Faced with the choice between changing GNU Lightning 2.x in ways that went counter to its upstream direction, switching libraries, or refactoring GNU Lightning to be something that I needed, I chose the latter.

in which our protagonist cannot help himself

Friends, I regret to admit: I named the new thing "Lightening". True, it is a lightened Lightning, yes, but I am aware that it's horribly confusing. Pronounced like almost the same, visually almost identical -- I am a bad person. Oh well!!

I ported some of the existing GNU Lightning backends over to Lightening: ia32, x86-64, ARMv7, and AArch64. I deleted the backends for Itanium, HPPA, Alpha, and SPARC; they have no Debian ports and there is no situation in which I can afford to do QA on them. I would gladly accept contributions for PPC64, MIPS, RISC-V, and maybe S/390. At this point I reckon it takes around 20 hours to port an additional backend from GNU Lightning to Lightening.

Incidentally, if you need a code generation library, consider your choices wisely. It is likely that Lightening is not right for you. If you can afford platform-specific code and you need C, Lua's DynASM is probably the right thing for you. If you are in C++, copy the assemblers from a JavaScript engine -- C++ offers much more type safety, capabilities for optimization, and ergonomics.

But if you can only afford one emitter of JIT code for all architectures, you need simple C, you don't need register allocation, you want a simple library to just include in your source code, and you are good with the LGPL, then Lightening could be a thing for you. Check the gitlab page for info on how to test Lightening and how to include it into your project.

giving it a spin

Yesterday's Guile 2.9.2 release includes Lightening, so you can give it a spin. The switch to Lightening allowed us to lower our JIT optimization threshold by a factor of 50, letting us generate fast code sooner. If you try it out, let #guile on freenode know how it went. In any case, happy hacking!

by Andy Wingo at May 24, 2019 08:44 AM

May 23, 2019

Andy Wingo: bigint shipping in firefox!

(Andy Wingo)

I am delighted to share with folks the results of a project I have been helping out on for the last few months: implementation of "BigInt" in Firefox, which is finally shipping in Firefox 68 (beta).

what's a bigint?

BigInts are a new kind of JavaScript primitive value, like numbers or strings. A BigInt is a true integer: it can take on the value of any finite integer (subject to some arbitrarily large implementation-defined limits, such as the amount of memory in your machine). This contrasts with JavaScript number values, which have the well-known property of only being able to precisely represent integers between -2^53 and 2^53.
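(Concretely, 2^53 is 9007199254740992; the next integer up has no exact Number representation and rounds back down, so in JavaScript 9007199254740993 === 9007199254740992 evaluates to true.)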

BigInts are written like "normal" integers, but with an n suffix:

var a = 1n;
var b = a + 42n;
b << 64n
// result: 793209995169510719488n

With the bigint proposal, the usual mathematical operations (+, -, *, /, %, <<, >>, **, and the comparison operators) are extended to operate on bigint values. As a new kind of primitive value, bigint values have their own typeof:

typeof 1n
// result: 'bigint'

Besides allowing for more kinds of math to be easily and efficiently expressed, BigInt also allows for better interoperability with systems that use 64-bit numbers, such as "inodes" in file systems, WebAssembly i64 values, high-precision timers, and so on.

You can read more about the BigInt feature over on MDN, as usual. You might also like this short article on BigInt basics that V8 engineer Mathias Bynens wrote when Chrome shipped support for BigInt last year. There is an accompanying language implementation article as well, for those of y'all that enjoy the nitties and the gritties.

can i ship it?

To try out BigInt in Firefox, simply download a copy of Firefox Beta. This version of Firefox will be fully released to the public in a few weeks, on July 9th. If you're reading this in the future, I'm talking about Firefox 68.

BigInt is also shipping already in V8 and Chrome, and my colleague Caio Lima has a project in progress to implement it in JavaScriptCore / WebKit / Safari. Depending on your target audience, BigInt might be deployable already!

thanks

I must mention that my role in the BigInt work was relatively small; my Igalia colleague Robin Templeton did the bulk of the BigInt implementation work in Firefox, so large ups to them. Hearty thanks also to Mozilla's Jan de Mooij and Jeff Walden for their patient and detailed code reviews.

Thanks as well to the V8 engineers for their open source implementation of BigInt fundamental algorithms, as we used many of them in Firefox.

Finally, I need to make one big thank-you, and I hope that you will join me in expressing it. The road to ship anything in a web browser is long; besides the "simple matter of programming" that it is to implement a feature, you need a specification with buy-in from implementors and web standards people, you need a good working relationship with a browser vendor, you need willing technical reviewers, you need to follow up on the inevitable security bugs that any browser change causes, and all of this takes time. It's all predicated on having the backing of an organization that's foresighted enough to invest in this kind of long-term, high-reward platform engineering.

In that regard I think all people that work on the web platform should send a big shout-out to Tech at Bloomberg for making BigInt possible by underwriting all of Igalia's work in this area. Thank you, Bloomberg, and happy hacking!

by Andy Wingo at May 23, 2019 12:13 PM

May 17, 2019

Guillaume DesmottesRust+GNOME hackfest in Berlin

Last week I had the chance to attend the GNOME+Rust Hackfest in Berlin for the first time. My goal was to finish the GstVideoDecoder bindings, keep learning about Rust and meet people from the GNOME/Rust community. I had some experience with gstreamer-rs as a user but had never actually written any bindings myself. To make it even more challenging, the underlying C API is unfortunately unsafe, so we had to do some smart tricks to make it safe to use in Rust.

I'm very happy with the work I managed to do during these 4 days. I was able to complete the bindings as well as my CDG decoder using them. The bindings are waiting for final review and the decoder should hopefully be merged soon as well. More importantly, I learned a lot not only about the bindings infrastructure but also about more advanced Rust features and nice API design patterns. It turns out it's much easier to write Rust when you are sitting next to the gtk-rs maintainers. ;)

I'm very grateful to Guillaume and Sebastian for all the time they spent answering my questions and helping me fight the Rust compiler. Thanks also to Alban for letting me stay at his place, to Zeeshan and Kinvolk for organizing and hosting the hackfest and to Collabora for sponsoring my trip.


by Guillaume Desmottes at May 17, 2019 07:53 AM

May 10, 2019

Nirbheek ChauhanGStreamer's Meson and Visual Studio Journey


Almost 3 years ago, I wrote about how we at Centricular had been working on an experimental port of GStreamer from Autotools to the Meson build system for faster builds on all platforms, and to allow building with Visual Studio on Windows.

At the time, the response was mixed, and for good reason—Meson was a very new build system, and it needed to work well on all the targets that GStreamer supports, which was all major operating systems. Meson did aim to support all of those, but a lot of work was required to bring platform support up to speed with the requirements of a non-trivial project like GStreamer.

The Status: Today!

After years of work across several components (Meson, Ninja, Cerbero, etc), GStreamer is being built with Meson on all platforms! Autotools is scheduled to be removed in the next release cycle (1.18).

The first stable release with this work was 1.16, which was released yesterday. It has already led to a number of new capabilities:
  • GStreamer can be built with Visual Studio on Windows inside Cerbero, which means we now ship official binaries for GStreamer built with the MSVC toolchain.
  • From-scratch Cerbero builds are much faster on all platforms, which has aided the implementation of CI-gated merge requests on GitLab.
  • The developer workflow has been streamlined and is the same on all platforms (Linux, Windows, macOS) using the gst-build meta-project. The meta-project can also be used for cross-compilation (Android, iOS, Windows, Linux).
  • The Windows developer workflow no longer requires installing several packages by hand or setting up an MSYS environment. All you need is Git, Python 3, Visual Studio, and 15 min for the initial build.
  • Profiling on Windows is now possible, and I've personally used it to profile and fix numerous Windows-specific performance issues.
  • Visual Studio projects that use GStreamer now have debug symbols since we're no longer mixing MinGW and MSVC binaries. This also enables usable crash reports and symbol servers.
  • We can ship plugins that can only be built with MSVC on Windows, such as the Intel MSDK hardware codec plugin, Directshow plugins, and also easily enable new Windows 10 features in existing plugins such as WASAPI.
  • iOS bitcode builds are more correct, since Meson is smart enough to know how to disable incompatible compiler options on specific build targets.
  • The iOS framework now also ships shared libraries in addition to the static libraries.
Overall, it's been a huge success and we're really happy with how things have turned out!

You can download the prebuilt MSVC binaries, reproduce them yourself, or quickly bootstrap a GStreamer development environment. The choice is yours!

Further Musings

While working on this over the years, what's really stood out to me was how this sort of gargantuan task was made possible through the power of community-driven FOSS and community-focused consultancy.

Our build system migration quest has been long with valleys full of yaks with thick coats of fur, and it would have been prohibitively expensive for a single entity to sponsor it all. Thanks to the inherently collaborative nature of community FOSS projects, people from various backgrounds and across companies could come together and make this possible.

There are many other examples of this, but seeing the improbable happen from the inside is something special.

Special shout-outs to ZEISS, Barco, Pexip, and Cablecast.tv for sponsoring various parts of this work!

Their contributions also made it easier for us to spend thousands more hours of non-sponsored time to fill in the gaps so that all the sponsored work done could be upstreamed in a form that's useful for everyone who uses GStreamer. This sort of thing is, in my opinion, an essential characteristic of being a community-focused consultancy, and we make sure that it always has high priority.

by Nirbheek (noreply@blogger.com) at May 10, 2019 09:48 PM

May 04, 2019

Sebastian PölsterlEvaluating Survival Models


The most frequently used evaluation metric of survival models is the concordance index (c index, c statistic). It is a measure of rank correlation between predicted risk scores $\hat{f}$ and observed time points $y$ that is closely related to Kendall’s τ. It is defined as the ratio of correctly ordered (concordant) pairs to comparable pairs. Two samples $i$ and $j$ are comparable if the sample with lower observed time $y$ experienced an event, i.e., if $y_j > y_i$ and $\delta_i = 1$, where $\delta_i$ is a binary event indicator. A comparable pair $(i, j)$ is concordant if the estimated risk $\hat{f}$ by a survival model is higher for subjects with lower survival time, i.e., $\hat{f}_i >\hat{f}_j \land y_j > y_i$, otherwise the pair is discordant. Harrell's estimator of the c index is implemented in concordance_index_censored.
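To make the definition concrete, here is a minimal example with made-up data: concordance_index_censored takes a boolean event indicator, the observed times and the predicted risk scores, and returns the c index together with the number of concordant, discordant and tied pairs.

import numpy as np
from sksurv.metrics import concordance_index_censored

event = np.array([True, False, True, True, False])   # did the event occur?
time = np.array([12.0, 20.0, 7.0, 15.0, 30.0])       # observed (possibly censored) times
risk = np.array([0.8, 0.3, 0.9, 0.5, 0.1])           # predicted risk scores

cindex, concordant, discordant, tied_risk, tied_time = \
    concordance_index_censored(event, time, risk)
print(cindex)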

While Harrell's concordance index is easy to interpret and compute, it has some shortcomings:

  1. it has been shown that it is too optimistic with an increasing amount of censoring [1],
  2. it is not a useful measure of performance if a specific time range is of primary interest (e.g. predicting death within 2 years).

Since version 0.8, scikit-survival supports an alternative estimator of the concordance index from right-censored survival data, implemented in concordance_index_ipcw, that addresses the first issue.

The second point can be addressed by extending the well known receiver operating characteristic curve (ROC curve) to possibly censored survival times. Given a time point $t$, we can estimate how well a predictive model can distinguish subjects who will experience an event by time $t$ (sensitivity) from those who will not (specificity). The function cumulative_dynamic_auc implements an estimator of the cumulative/dynamic area under the ROC for a given list of time points.

The first part of this post will illustrate the first issue with simulated survival data, while the second part will focus on the time-dependent area under the ROC applied to data from a real study.

To see the full source code for producing the figures in this post, please see this notebook.

Bias of Harrell's Concordance Index

Harrell's concordance index is known to be biased upwards if the amount of censoring in the test data is high [1]. Uno et al. proposed an alternative estimator of the concordance index that behaves better in such situations. In this section, we are going to apply concordance_index_censored and concordance_index_ipcw to synthetic survival data and compare their results.

Simulation Study

We generate a synthetic biomarker by sampling from a standard normal distribution. For a given hazard ratio, we compute the associated (actual) survival time by drawing from an exponential distribution. The censoring times are generated independently from a uniform distribution $\textrm{Uniform}(0,\gamma)$, where we choose $\gamma$ to produce different amounts of censoring.

Since Uno's estimator is based on inverse probability of censoring weighting, we need to estimate the probability of being censored at a given time point. This probability needs to be non-zero for all observed time points. Therefore, we restrict the test data to all samples with observed time lower than the maximum event time $\tau$. Usually, one would use the tau argument of concordance_index_ipcw for this, but we apply the selection beforehand so that we can pass identical inputs to concordance_index_censored and concordance_index_ipcw. The estimates of the concordance index are therefore restricted to the interval $[0, \tau]$.

Let us assume a moderate hazard ratio of 2 and generate a small synthetic dataset of 100 samples from which we estimate the concordance index. We repeat this experiment 200 times and plot mean and standard deviation of the difference between the actual (in the absence of censoring) and estimated concordance index.
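For concreteness, here is a rough sketch of what a single simulation run could look like; the hazard parameterization and the value of $\gamma$ below are illustrative assumptions, and the notebook linked above contains the exact setup.

import numpy as np
from sksurv.metrics import concordance_index_censored, concordance_index_ipcw

rng = np.random.RandomState(0)
n_samples, hazard_ratio, gamma = 100, 2.0, 6.0      # gamma controls the amount of censoring

x = rng.standard_normal(n_samples)                  # synthetic biomarker
event_time = rng.exponential(scale=1.0 / hazard_ratio ** x)   # assumed parameterization
censoring_time = rng.uniform(0.0, gamma, size=n_samples)      # independent censoring
observed_time = np.minimum(event_time, censoring_time)
event = event_time <= censoring_time

# restrict to samples with observed time below the maximum event time (see above)
tau = observed_time[event].max()
mask = observed_time < tau

y = np.empty(mask.sum(), dtype=[("event", bool), ("time", float)])
y["event"], y["time"] = event[mask], observed_time[mask]

harrell_c = concordance_index_censored(event[mask], observed_time[mask], x[mask])[0]
uno_c = concordance_index_ipcw(y, y, x[mask])[0]
print(harrell_c, uno_c)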

Since the hazard ratio remains constant and only the amount of censoring changes, we would want an estimator for which the difference between the actual and estimated c remains approximately constant across simulations.

We can observe that estimates are on average below the actual value, except for the highest amount of censoring, where Harrell's c begins overestimating the performance (on average).

With such a small dataset, the variance of differences is quite big, so let us increase the amount of data to 1000 and repeat the simulation.

Now we can observe that Harrell's c begins to overestimate performance starting with approximately 49% censoring while Uno's c is still underestimating the performance, but is on average very close to the actual performance for large amounts of censoring.

For the final experiment, we double the size of the dataset to 2000 and repeat the analysis.

The trend we observed in the previous simulation is now even more pronounced. Harrell's c is becoming more and more overconfident in the performance of the synthetic marker with increasing amount of censoring, while Uno's c remains stable.

In summary, while the difference between concordance_index_ipcw and concordance_index_censored is negligible for small amounts of censoring, when analyzing survival data with moderate to high amounts of censoring, you might want to consider estimating the performance using concordance_index_ipcw instead of concordance_index_censored.

Time-dependent Area under the ROC

The area under the receiver operating characteristic curve (ROC curve) is a popular performance measure for binary classification tasks. In the medical domain, it is often used to determine how well estimated risk scores can separate diseased patients (cases) from healthy patients (controls). Given a predicted risk score $\hat{f}$, the ROC curve compares the false positive rate (1 - specificity) against the true positive rate (sensitivity) for each possible value of $\hat{f}$.

When extending the ROC curve to continuous outcomes, in particular survival time, a patient’s disease status is typically not fixed and changes over time: at enrollment a subject is usually healthy, but may be diseased at some later time point. Consequently, sensitivity and specificity become time-dependent measures. Here, we consider cumulative cases and dynamic controls at a given time point $t$, which gives rise to the time-dependent cumulative/dynamic ROC at time $t$. Cumulative cases are all individuals that experienced an event prior to or at time $t$ ($t_i \leq t$), whereas dynamic controls are those with $t_i>t$. By computing the area under the cumulative/dynamic ROC at time $t$, we can determine how well a model can distinguish subjects who fail by a given time ($t_i \leq t$) from subjects who fail after this time ($t_i>t$). Hence, it is most relevant if one wants to predict the occurrence of an event in a period up to time $t$ rather than at a specific time point $t$.

The cumulative_dynamic_auc function implements an estimator of the cumulative/dynamic area under the ROC at a given list of time points. To illustrate its use, we are going to use data from a study that investigated to what extent the serum immunoglobulin free light chain (FLC) assay can be used to predict overall survival. The dataset has 7874 subjects and 9 features; the endpoint is death, which occurred for 2169 subjects (27.5%).

First, we are loading the data and split it into train and test set to evaluate how well markers generalize.

import numpy as np
from sklearn.model_selection import train_test_split
from sksurv.datasets import load_flchain

x, y = load_flchain()

(x_train, x_test,
 y_train, y_test) = train_test_split(x, y, test_size=0.2, random_state=0)

Serum creatinine measurements are missing for some patients, therefore we are just going to impute these values with the mean using scikit-learn's SimpleImputer.

from sklearn.impute import SimpleImputer

num_columns = ['age', 'creatinine', 'kappa', 'lambda']

imputer = SimpleImputer().fit(x_train.loc[:, num_columns])
x_train = imputer.transform(x_train.loc[:, num_columns])
x_test = imputer.transform(x_test.loc[:, num_columns])

Similar to Uno's estimator of the concordance index described above, we need to be a little bit careful when selecting the test data and time points we want to evaluate the ROC at, due to the estimator's dependence on inverse probability of censoring weighting. First, we are going to check whether the observed time of the test data lies within the observed time range of the training data.

y_events = y_train[y_train['death']]
train_min, train_max = y_events["futime"].min(), y_events["futime"].max()
 
y_events = y_test[y_test['death']]
test_min, test_max = y_events["futime"].min(), y_events["futime"].max()
 
assert train_min <= test_min < test_max < train_max, \
    "time range of test data is not within time range of training data."

When choosing the time points to evaluate the ROC at, it is important to remember to choose the last time point such that the probability of being censored after the last time point is non-zero. In the simulation study above, we set the upper bound to the maximum event time; here we use a more conservative approach by setting the upper bound to the 80th percentile of observed time points, because the censoring rate is quite large at 72.5%. Note that this approach would be appropriate for choosing tau of concordance_index_ipcw too.

times = np.percentile(y["futime"], np.linspace(5, 81, 15))
print(times)

[ 470.3        1259.         1998.         2464.82428571 2979.
 3401.         3787.99857143 4051.         4249.         4410.17285714
 4543.         4631.         4695.         4781.         4844.        ]

We begin by considering individual real-valued features as risk scores without actually fitting a survival model. Hence, we obtain an estimate of how well age, creatinine, kappa FLC, and lambda FLC are able to distinguish cases from controls at each time point.
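A short sketch of how this looks for a single feature, reusing x_train, x_test, y_train, y_test and times from the code above (after imputation the columns follow the order of num_columns, so column 0 is age); the full plotting code is in the linked notebook:

from sksurv.metrics import cumulative_dynamic_auc

# age used directly as a risk score, evaluated at the time points chosen above
age_auc, age_mean_auc = cumulative_dynamic_auc(y_train, y_test, x_test[:, 0], times)
print(age_mean_auc)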

The plot shows the estimated area under the time-dependent ROC at each time point and the average across all time points as dashed line.

We can see that age is overall the most discriminative feature, followed by $\kappa$ and $\lambda$ FLC. The fact that age is the strongest predictor of overall survival in the general population is hardly surprising (we have to die at some point after all). More differences become evident when considering time: the discriminative power of FLC decreases at later time points, while that of age increases. The observation for age again follows common sense. In contrast, FLC seems to be a good predictor of death in the near future, but not so much if it occurs decades later.

Next, we will fit an actual survival model to predict the risk of death, using data from the Veterans' Administration Lung Cancer Trial. After fitting a Cox proportional hazards model, we want to assess how well the model can distinguish survivors from deceased in weekly intervals, up to 6 months after enrollment.
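A rough sketch of that analysis might look as follows. The exact preprocessing and evaluation used for the figure are in the linked notebook; for brevity, this sketch evaluates in-sample instead of on a held-out test set.

import numpy as np
from sksurv.datasets import load_veterans_lung_cancer
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import cumulative_dynamic_auc
from sksurv.preprocessing import OneHotEncoder

va_x, va_y = load_veterans_lung_cancer()
va_x = OneHotEncoder().fit_transform(va_x)      # encode the categorical columns

cph = CoxPHSurvivalAnalysis().fit(va_x, va_y)

# weekly evaluation points, up to roughly 6 months after enrollment
va_times = np.arange(7, 183, 7)

va_auc, va_mean_auc = cumulative_dynamic_auc(va_y, va_y, cph.predict(va_x), va_times)
print(va_mean_auc)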

The plot shows that the model is doing quite well on average with an AUC of ~0.82 (dashed line). However, there is a clear difference in performance between the first and second half of the time range. Performance increases up to about 100 days from enrollment, but quickly drops thereafter. Thus, we can conclude that the model is less effective in predicting death past 100 days.

Conclusion

I hope this post helped you to understand some of the pitfalls when estimating the performance of markers and models from right-censored survival data. We illustrated that Harrell's estimator of the concordance index is biased when the amount of censoring is high, and that Uno's estimator is more appropriate in this situation. Finally, we demonstrated that the time-dependent area under the ROC is a very useful tool when we want to predict the occurrence of an event in a period up to time $t$ rather than at a specific time point $t$.

sebp Sat, 05/04/2019 - 13:12

Comments

Hey, is there an implementation of Random Survival Forest in scikit-survival?

by sebp at May 04, 2019 11:12 AM

May 01, 2019

Sebastian Pölsterlscikit-survival 0.8 released


This release of scikit-survival 0.8 adds some nice enhancements for validating survival models.

Previously, scikit-survival only supported Harrell's concordance index to assess the performance of survival models. While it is easy to interpret and compute, it has some shortcomings:

  1. it has been shown that it is too optimistic with an increasing amount of censoring [1],
  2. it is not a useful measure of performance if a specific time point is of primary interest (e.g. predicting 2 year survival).

The first issue is addressed by the new concordance_index_ipcw function, which implements an alternative estimator of the concordance index [1, 2].

The second point can be addressed by extending the well known receiver operating characteristic curve (ROC curve) to possibly censored survival times. Given a time point t, we can estimate how well a predictive model can distinguish subjects who will experience an event by time t (sensitivity) from those who will not (specificity). The newly added function cumulative_dynamic_auc implements an estimator of the cumulative/dynamic area under the ROC for a given list of time points [3].

Both estimators rely on inverse probability of censoring weighting, which means they require access to training data to estimate the censoring distribution from. Therefore, if the amount of censoring is high, some care must be taken in selecting a suitable time range for evaluation.
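As a minimal, self-contained sketch of the new API (all numbers below are made up for illustration), note that the training outcomes are the first argument of both functions, precisely because the censoring distribution is estimated from them:

import numpy as np
from sksurv.metrics import concordance_index_ipcw, cumulative_dynamic_auc

def make_y(event, time):
    y = np.empty(len(time), dtype=[("event", bool), ("time", float)])
    y["event"], y["time"] = event, time
    return y

y_train = make_y([True, False, True, True, False, True], [3.0, 6.0, 8.0, 10.0, 14.0, 16.0])
y_test = make_y([True, False, True, False], [4.0, 7.0, 9.0, 12.0])
risk = np.array([0.9, 0.2, 0.7, 0.4])        # predicted risk scores for the test set

print(concordance_index_ipcw(y_train, y_test, risk, tau=12)[0])
print(cumulative_dynamic_auc(y_train, y_test, risk, [5.0, 8.0, 11.0])[1])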

For a complete list of changes see the release notes.

Download

Pre-built conda packages are available for Linux, OSX and Windows:

conda install -c sebp scikit-survival

Alternatively, scikit-survival can be installed from source via pip:

pip install -U scikit-survival

References

  1. Uno, H., Cai, T., Pencina, M. J., D’Agostino, R. B., & Wei, L. J. (2011). On the C-statistics for evaluating overall adequacy of risk prediction procedures with censored survival data. Statistics in Medicine, 30(10), 1105–1117.
  2. H. Hung and C. T. Chiang, Estimation methods for time-dependent AUC models with survival data, Canadian Journal of Statistics, vol. 38, no. 1, pp. 8–26, 2010.
  3. H. Uno, T. Cai, L. Tian, and L. J. Wei, Evaluating prediction rules for t-year survivors with censored regression models, Journal of the American Statistical Association, vol. 102, pp. 527–537, 2007.
sebp Wed, 05/01/2019 - 17:50

Comments

Hi!

Thank you a lot for your amazing library!
Currently I'm doing a project (at uni) and was wondering if it's possible to use 'concordance_index_ipcw ' as a scoring function for GridSearch(with cv)?
(as aside of 'y_pred' and 'y_true' it needs 'y_train' values)

by sebp at May 01, 2019 03:50 PM

April 24, 2019

Guillaume DesmottesGStreamer buffer flow analyzer

GStreamer's logging system is an incredibly powerful ally when debugging, but it can sometimes be a bit daunting to dig through the massive amount of generated logs. I often find myself writing small scripts that process gst logs when debugging. Their goal is generally to automatically extract some specific information or metrics from the generated logs. Such scripts are usually quickly written and quickly disposed of once I'm done with my debugging, but I've been wondering how I could make them easier to write and to re-use.

gst-log-parser is an attempt to solve these two problems by providing a library parsing GStreamer logs and enabling users to easily build such tools. It's written in Rust and is shipped with a few tools that I wrote to track actual bugs in GStreamer elements and applications.

One of those tools is a buffer flow analyzer which can be used to provide various information regarding the buffers exchanged through your pipeline. It relies on logs generated by the upstream stats tracer, so no modification in GStreamer core or in plugins is required.

The first step is to generate the logs; this is easily done by defining these environment variables: GST_DEBUG="GST_TRACER:7" GST_DEBUG_FILE=gst.log GST_TRACERS=stats

We can then use flow for example to detect decreasing pts or dts:

cargo run --release --bin flow gst.log check-decreasing-pts

Decreasing pts tsdemux0:audio_0_0042 00:00:02.550852023 < 00:00:02.555653845

Or to detect gaps of at least 100ms in the buffers flow:

cargo run --release --bin flow gst.log gap 100

gap from udpsrc0:src : 00:00:00.100142318 since previous buffer (received: 00:00:02.924532910 previous: 00:00:02.824390592)

It can also be used to draw graphs of the pts/dts produced by each src pad over time:

cargo run --release --bin flow gst.log plot-pts



These are just a few examples of the kind of information we can extract from stats logs. I'll likely add more tools in the future and I'm happy to hear suggestions about other features that would make your life easier when debugging GStreamer.

by Guillaume Desmottes at April 24, 2019 10:07 AM

April 22, 2019

Thibault SaunierGStreamer Editing Services OpenTimelineIO support

(Thibault Saunier)

GStreamer Editing Services OpenTimelineIO support

OpenTimelineIO is an open source API and interchange format for editorial timeline information; it basically enables interoperability between the different post-production video editing tools. It is being developed by Pixar, and several other studios are contributing to the project, allowing it to evolve quickly.

We, at Igalia, recently landed support for the GStreamer Editing Services (GES) serialization format in OpenTimelineIO, making it possible to convert GES timelines to any format supported by the library. This is extremely useful for integrating GES into existing post-production workflows, as it allows projects in any format supported by OpenTimelineIO to be used in the GStreamer Editing Services and vice versa.
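From Python, a conversion could be as simple as the sketch below, assuming the GES adapter is registered for the ".xges" suffix (the file names are made up for illustration):

import opentimelineio as otio

# read a timeline in any format OpenTimelineIO has an adapter for...
timeline = otio.adapters.read_from_file("edit_from_another_nle.otio")

# ...and write it out as a GES project usable by GES-based tools such as Pitivi
otio.adapters.write_to_file(timeline, "edit_from_another_nle.xges")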

On top of that we are building a GESFormatter that allows us to transparently handle any file format supported by OpenTimelineIO. In practice it will be possible to use cuts produced by other video editing tools in any project using GES, for instance Pitivi:

At Igalia we are aiming at making GStreamer ready to be used in existing video post-production pipelines, and this work is one step in that direction. We are working on additional features in GES to fill the gaps toward that goal; for instance, we are now implementing nested timeline support and framerate-based timestamps in GES. Once we implement them, those features will enhance compatibility of video editing projects created from other NLE software through OpenTimelineIO. Stay tuned for more information!

by thiblahute at April 22, 2019 03:21 PM

April 19, 2019

GStreamerGStreamer 1.16.0 new major stable release

(GStreamer)

The GStreamer team is excited to announce a new major feature release of your favourite cross-platform multimedia framework!

The 1.16 release series adds new features on top of the previous 1.14 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Highlights:

  • GStreamer WebRTC stack gained support for data channels for peer-to-peer communication based on SCTP, BUNDLE support, as well as support for multiple TURN servers.
  • AV1 video codec support for Matroska and QuickTime/MP4 containers and more configuration options and supported input formats for the AOMedia AV1 encoder
  • Support for Closed Captions and other Ancillary Data in video
  • Support for planar (non-interleaved) raw audio
  • GstVideoAggregator, compositor and OpenGL mixer elements are now in -base
  • New alternate fields interlace mode where each buffer carries a single field
  • WebM and Matroska ContentEncryption support in the Matroska demuxer
  • new WebKit WPE-based web browser source element
  • Video4Linux: HEVC encoding and decoding, JPEG encoding, and improved dmabuf import/export
  • Hardware-accelerated Nvidia video decoder gained support for VP8/VP9 decoding, whilst the encoder gained support for H.265/HEVC encoding.
  • Many improvements to the Intel Media SDK based hardware-accelerated video decoder and encoder plugin (msdk): dmabuf import/export for zero-copy integration with other components; VP9 decoding; 10-bit HEVC encoding; video post-processing (vpp) support including deinterlacing; and the video decoder now handles dynamic resolution changes.
  • The ASS/SSA subtitle overlay renderer can now handle multiple subtitles that overlap in time and will show them on screen simultaneously
  • The Meson build is now feature-complete (with the exception of plugin docs) and it is now the recommended build system on all platforms. The Autotools build is scheduled to be removed in the next cycle.
  • The GStreamer Rust bindings and Rust plugins module are now officially part of upstream GStreamer.
  • The GStreamer Editing Services gained a gesdemux element that allows directly playing back serialized edit lists with playbin or (uri)decodebin
  • Many performance improvements

Full release notes can be found here.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next few days.

You can download release tarballs directly here: gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, gstreamer-sharp, or gst-omx.

April 19, 2019 11:00 AM

April 15, 2019

GStreamerOrc 0.4.29 bug-fix release

(GStreamer)

The GStreamer team is pleased to announce another maintenance bug-fix release of liborc, the Optimized Inner Loop Runtime Compiler. Main changes since the previous release:

  • PowerPC: Support ELFv2 ABI (A. Wilcox) and ppc64le
  • Mips backend: only enable if the DSPr2 ASE is present
  • Windows and MSVC build fixes
  • orccpu-arm: Allow 'cpuinfo' fallback on non-android
  • pkg-config file for orc-test library
  • orcc: add --decorator command line argument to add function decorators in header files
  • meson: Make orcc detectable from other subprojects
  • meson: add options to disable tests, docs, benchmarks, examples, tools, etc.
  • meson: misc. other fixes

Direct tarball download: orc-0.4.29.

April 15, 2019 12:00 PM

April 11, 2019

GStreamerGStreamer 1.15.90 pre-release (1.16.0 release candidate 1)

(GStreamer)

The GStreamer team is pleased to announce the first release candidate for the upcoming stable 1.16.0 release.

Check out the draft release notes highlighting all the new features, bugfixes, performance optimizations and other important changes.

Packagers: please note that quite a few plugins and libraries have moved between modules since 1.14, so please take extra care and make sure inter-module version dependencies are such that users can only upgrade all modules in one go, instead of seeing a mix of 1.15 and 1.14 on their system.

Binaries for Android, iOS, Mac OS X and Windows will be provided shortly.

Release tarballs can be downloaded directly here:

Known errata: possible crash on 32-bit systems when creating seek events

April 11, 2019 08:00 AM

April 08, 2019

Phil NormandIntroducing WPEQt, a WPE API for Qt5

(Phil Normand)

WPEQt provides a QML plugin implementing an API very similar to the QWebView API. This blog post explains the rationale behind this new project aimed for QtWebKit users.

Qt5 already provides multiple WebView APIs, one based on QtWebKit (deprecated) and one based on QWebEngine (aka Chromium). WPEQt aims to provide …

by Philippe Normand at April 08, 2019 10:20 AM

April 06, 2019

Bastien NoceraDeveloper tool for i18n: “Pseudolocale”

(Bastien Nocera) While browsing for some internationalisation/localisation features, I found an interesting piece of functionality in Android's developer documentation. I'll quote it here:
A pseudolocale is a locale that is designed to simulate characteristics of languages that cause UI, layout, and other translation-related problems when an app is translated.
I've now implemented this for applications and libraries that use gettext, as an LD_PRELOAD library, available from this repository.


The current implementation can highlight a number of potential problems (paraphrasing the Android documentation again):
- String concatenation, which displays as one message split across 2 or more brackets.
- Hard-coded strings, which cannot be sent to translation, display as unaccented text in the pseudolocale to make them easy to notice.
- Right-to-left (RTL) problems such as elements not being mirrored.
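To make the idea more concrete, here is a rough Python sketch of the kind of transformation a gettext pseudolocale might apply to each translated message; the accent mapping and bracketing below are purely illustrative and not taken from the implementation in the repository above.

# illustrative sketch only: accent ASCII letters and wrap each translated
# message in brackets, so untranslated (unaccented) strings and string
# concatenation (several bracket pairs on screen) stand out visually
ACCENTED = str.maketrans("aeiouyAEIOUY", "àéîöûýÀÉÎÖÛÝ")

def pseudolocalize(message: str) -> str:
    return "[" + message.translate(ACCENTED) + "]"

print(pseudolocalize("Open file"))   # -> [Öpén fîlé]
print("Hard-coded string")           # stays unaccented, so it is easy to spot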

Our old friend, Office Runner 


Testing brought some unexpected results :)

by Bastien Nocera (noreply@blogger.com) at April 06, 2019 11:41 AM

Nirbheek ChauhanGStreamer and Meson: A New Hope

Anyone who has written a non-trivial project using Autotools has realized that (and wondered why) it requires you to be aware of 5 different languages. Once you spend enough time with the innards of the system, you begin to realize that it is nothing short of an astonishing feat of engineering. Engineering that belongs in a museum. Not as part of critical infrastructure.

Autotools was created in the 1980s and caters to the needs of an entirely different world of software from what we have at present. Worse yet, it carries over accumulated cruft from the past 40 years — ostensibly for better “cross-platform support” but that “support” is mostly for extinct platforms that five people in the whole world remember.

We've learned how to make it work for most cases that concern FOSS developers on Linux, and it can be made to limp along on other platforms that the majority of people use, but it does not inspire confidence or really anything except frustration. People will not like your project or contribute to it if the build system takes 10x longer to compile on their platform of choice, does not integrate with the preferred IDE, and requires knowledge arcane enough to be indistinguishable from cargo-cult programming.

As a result there have been several (terrible) efforts at replacing it and each has been either incomplete, short-sighted, slow, or just plain ugly. During my time as a Gentoo developer in another life, I came in close contact with and developed a keen hatred for each of these alternative build systems. And so I mutely went back to Autotools and learned that I hated it the least of them all.

Sometime last year, Tim heard about this new build system called ‘Meson’ whose author had created an experimental port of GStreamer that built it in record time.

Intrigued, he tried it out and found that it finished suspiciously quickly. His first instinct was that it was broken and hadn’t actually built everything! Turns out this build system written in Python 3 with Ninja as the backend actually was that fast. About 2.5x faster on Linux and 10x faster on Windows for building the core GStreamer repository.

Upon further investigation, Tim and I found that Meson also has really clean generic cross-compilation support (including iOS and Android), runs natively (and just as quickly) on OS X and Windows, supports GNU, Clang, and MSVC toolchains, and can even (configure and) generate XCode and Visual Studio project files!

But the critical thing that convinced me was that the creator Jussi Pakkanen was genuinely interested in the use-cases of widely-used software such as Qt, GNOME, and GStreamer and had already added support for several tools and idioms that we use — pkg-config, gtk-doc, gobject-introspection, gdbus-codegen, and so on. The project places strong emphasis on both speed and ease of use and is quite friendly to contributions.

Over the past few months, Tim and I at Centricular have been working on creating Meson ports for most of the GStreamer repositories and the fundamental dependencies (libffi, glib, orc) and improving the MSVC toolchain support in Meson.

We are proud to report that you can now build GStreamer on Linux using the GNU toolchain and on Windows with either MinGW or MSVC 2015 using Meson build files that ship with the source (building upon Jussi's initial ports).

Other toolchain/platform combinations haven't been tested yet, but they should work in theory (minus bugs!), and we intend to test and bugfix all the configurations supported by GStreamer (Linux, OS X, Windows, iOS, Android) before proposing it for inclusion as an alternative build system for the GStreamer project.

You can either grab the source yourself and build everything, or use our (with luck, temporary) fork of GStreamer's cross-platform build aggregator Cerbero.

Update: I wrote a new post with detailed steps on how to build using Cerbero and generate Visual Studio project files.

Second update: All this is now upstream, see the upstream Cerbero repository's README

Personally, I really hope that Meson gains widespread adoption. Calling Autotools the Xorg of build systems is flattery. It really is just a terrible system. We really need to invest in something that works for us rather than against us.

PS: If you just want a quick look at what the build system syntax looks like, take a look at this or the basic tutorial.

by Nirbheek (noreply@blogger.com) at April 06, 2019 05:08 AM

Nirbheek ChauhanBuilding and Developing GStreamer using Visual Studio

Two months ago, I talked about how we at Centricular have been working on a Meson port of GStreamer and its basic dependencies (glib, libffi, and orc) for various reasons — faster builds, better cross-platform support (particularly Windows), better toolchain support, ease of use, and for a better build system future in general.

Meson also has built-in support for things like gtk-doc, gobject-introspection, translations, etc. It can even generate Visual Studio project files at build time so projects don't have to expend resources maintaining those separately.

Today I'm here to share instructions on how to use Cerbero (our “aggregating” build system) to build all of GStreamer on Windows using MSVC 2015 (wherever possible). Note that this means you won't see any Meson invocations at all because Cerbero does all that work for you.

Note that this is still all unofficial and has not been proposed for inclusion upstream. We still have a few issues that need to be ironed out before we can do that¹.

Update: As of March 2019, all these instructions are obsolete since MSVC support has been merged into upstream Cerbero. I'm leaving this outdated text as-is for archival purposes.

First, you need to setup the environment on Windows by installing a bunch of external tools: Python 2, Python3, Git, etc. You can find the instructions for that here:

https://github.com/centricular/cerbero#windows

This is very similar to the old Cerbero instructions, but some new tools are needed. Once you've done everything there (Visual Studio especially takes a while to fetch and install itself), the next step is fetching Cerbero:

$ git clone https://github.com/centricular/cerbero.git

This will clone and check out the meson-1.8 branch that will build GStreamer 1.8.x. Next, we bootstrap it:

https://github.com/centricular/cerbero#bootstrap

Now we're (finally) ready to build GStreamer. Just invoke the package command:

python2 cerbero-uninstalled -c config/win32-mixed-msvc.cbc package gstreamer-1.0

This will build all the `recipes` that constitute GStreamer, including the core libraries and all the plugins including their external dependencies. This comes to about 76 recipes. Out of all these recipes, only the following are ported to Meson and are built with MSVC:

bzip2.recipe
orc.recipe
libffi.recipe (only 32-bit)
glib.recipe
gstreamer-1.0.recipe
gst-plugins-base-1.0.recipe
gst-plugins-good-1.0.recipe
gst-plugins-bad-1.0.recipe
gst-plugins-ugly-1.0.recipe

The rest still mostly use Autotools, plain GNU make or cmake. Almost all of these are still built with MinGW. The only exception is libvpx, which uses its custom make-based build system but is built with MSVC.

Eventually we want to build everything including all external dependencies with MSVC by porting everything to Meson, but as you can imagine it's not an easy task. :-)

However, even with just these recipes, there is a large improvement in how quickly you can build all of GStreamer inside Cerbero on Windows. For instance, the time required for building gstreamer-1.0.recipe which builds gstreamer.git went from 10 minutes to 45 seconds. It is now easier to do GStreamer development on Windows since rebuilding doesn't take an inordinate amount of time!

As a further improvement for doing GStreamer development on Windows, for all these recipes (except libffi because of complicated reasons), you can also generate Visual Studio 2015 project files and use them from within Visual Studio for editing, building, and so on.

Go ahead, try it out and tell me if it works for you!

As an aside, I've also been working on some proper in-depth documentation of Cerbero that explains how the tool works, the recipe format, supported configurations, and so on. You can see the work-in-progress if you wish to.

1. Most importantly, the tests cannot be built yet because GStreamer bundles a very old version of libcheck. I'm currently working on fixing that.

by Nirbheek (noreply@blogger.com) at April 06, 2019 05:07 AM

April 03, 2019

Christian SchallerPreparing for Fedora Workstation 30

(Christian Schaller)

I just installed the Fedora Workstation 30 Beta yesterday and so far things are looking great. As many others have reported too, with the GNOME 3.32 update things definitely feel faster and smoother. So I thought it was a good time to talk about what is coming in Fedora Workstation 30 and what we are currently working on.

Fractional Scaling: One of the big features that landed, although still considered experimental, was the fractional scaling feature that has been a collaboration between Jonas Ådahl here at Red Hat and Marco Trevisan at Canonical. It has taken quite some time since the initial hackfest as it is a complex task, but we are getting close. Fractional scaling is a critical feature for many HiDPI screen laptops to get a desktop size that perfectly fits their screen, not being too small or too large.

Screen sharing support for Chrome and Firefox under Wayland. The Wayland security model doesn’t allow any application to freely grab images or streams of the whole desktop like you could under X. This is of course a huge improvement in security, but it did cause some disruption for valid use cases like screen sharing with things like BlueJeans and Google Hangouts. We have been working on resolving that with the help of PipeWire. We have been at it for some time and things are now coming together. Chrome 73 ships with everything needed to make this work with Chrome, although you have to turn it on manually (go to this URL to turn it on: chrome://flags/#enable-webrtc-pipewire-capturer). The reason it needs to be manually enabled is not that it is unreliable, it is because the UI is still a little fugly due to a combination of feature overlap between the browser and the desktop and also how the security feature of the desktop is done. We are trying to come up with ways for the UI to be smoother without sacrificing your privacy/security. For Firefox we will keep shipping with our downstream patch until we manage to get it landed upstream.

Firefox for Wayland: Martin Stransky has been hard at work making Firefox able to run Wayland-native. That work is tantalizingly near, but we decided to postpone it for Fedora Workstation 31 in the end to make sure it is really well polished before releasing it upon the world. The advantage of Wayland-native Firefox is that, in addition to bringing us one step closer to not needing to run an X server (XWayland) all the time, it also enables things like the fractional scaling mentioned above to work for Firefox.

OpenH264 improved: As many of you know Firefox relies on a library called OpenH264, provided by Cisco, for its H264 video codec support for WebRTC. This library is also provided to Fedora users from Cisco free of charge (you can install it through GNOME Software). However its usefulness has been somewhat limited due to it only supporting the baseline profile used for video calling, but not the Main and High profiles used by most online video content. Well what I can tell you is that Red Hat, Endless and Cisco partnered with Centricular some time ago to add support for decoding those profiles to OpenH264 and that work is now almost complete. The basic code enabling them is already merged, but Jan Schmidt at Centricular is working on fixing a few files that are still giving us problems. As soon as that is generally shipping we hope to get Firefox to be able to use OpenH264 also for things like Youtube playback and of course also use OpenH264 to play back H264 in GStreamer applications like Totem. So a big thank you to Endless, Cisco and Centricular for working with us on this and thus enabling us to have a legal way to offer H264 support to our users.

NVidia binary driver support under Wayland: We have been putting quite a bit of effort into tying off the loose ends for using the NVidia binary driver with Wayland. We did manage to fix a long list of bugs like dealing with various colorspace issues, multimonitor setups and so on. For Intel and AMD graphics users things should actually be pretty good to go at this point. The last major item holding us back on the NVidia side is full support for using the binary driver with XWayland applications (native Wayland applications should work fine already). Adam Jackson worked diligently to get all the pieces in place and we do think we have a model now that will allow NVidia to provide an updated driver that should enable XWayland. As it stands though that driver update is likely to only come out towards the fall, so we will keep defaulting to X for NVidia binary driver users for some time more.

Gaming under Wayland. Olivier Fourdan and Jonas Ådahl have been trying to crush any major Wayland bug reported for quite some time now, and one area where we seem to have rounded the corner is games. Valve has been kind enough to give us the ability to install and run any Steam game for testing purposes, so whenever we found a game giving us trouble we have been able to let Olivier and Jonas reproduce it easily. So on my own gaming box I am now able to run all the Steam games I have under Wayland, including those using Proton, without a hitch. We haven’t tested with the full Steam catalog of course, there are thousands, so if your favourite game is giving you trouble under Wayland still, please let us know. Talking about gaming, one area we will try to free up some cycles for going forward is taking a deeper look at Flatpaks and gaming. We have already done quite a bit of work in this area, with things like the NVidia binary driver extension and the Steam package on Flathub. But we know from leading Linux game devs that there are still some challenges to be resolved, like making host device access for gamepads simpler from within the Flatpak sandbox.

Flatpak Creation in Fedora. Owen Taylor has been in charge of getting Flatpaks building in Fedora, ensuring we can produce Flatpaks from Fedora packages. Owen set up a system to track the Fedora Flatpak status; we have about 10 applications so far, but hope to greatly grow that number over time as we polish up the system. This enables us to start planning for shipping some applications in Fedora Workstation as Flatpaks by default in a future release. This repository will be available by default in Fedora Workstation 30 and you can choose the Flatpak version of a package through the new drop-down box in the top right corner of GNOME Software. For now the RPM version of the package is still the default, but we expect to change that in later releases of Fedora Workstation.

Gedit in GNOME Software with Source drop down box

Fedora Toolbox – Debarshi Ray is leading the effort we call Fedora Toolbox, which is our starting point for our goal to revitalise and revolutionize development on Linux. Fedora Toolbox is trying to take the model of a pet container for development and make it seamless and natural. Our goal is to make it dead simple to create pet containers for your projects, so you can for instance have a Fedora pet container where you develop against the leading-edge libraries and tools in Fedora, and you can have a RHEL-based container where you develop against the library versions and tools shipping in RHEL (which makes updating and fixing applications in production a lot easier), and maybe a SteamOS container to work on your little game project. Currently the model is that you have one pet container per OS you're targeting, but we are pondering if maybe having one pet container per project would be even better, if we can find good ways to avoid it being a lot of extra overhead (by for example having to re-install all your favourite command line tools in the container) or just outright confusing (which container got what tools and libraries again?). Our goal here though is to ensure Fedora becomes the premier container-native OS out there and thus a natural home for developers doing container development.
We are also working with the team inside Red Hat focusing on AI/ML and trying to ensure that we have a super smooth way for you to get a pet container with things like TensorFlow and CUDA up and running quickly.

Being an excellent platform for OpenShift and Kubernetes development: Together with the Red Hat developer tools organization, we are putting effort into bringing the OpenShift, CodeReady Studio and CodeReady Workspaces tools to Fedora. These tools have so far been very focused on RHEL support, but thanks to Flatpak for CodeReady Studio and web integration for CodeReady Workspaces we now have a path for making them easily available in Fedora too. In the world of Kubernetes OpenShift is where you want to be, and we want Fedora Workstation to be the ultimate portal for OpenShift development.

Fleet Commander with Active Directory support – So we are about to hit a very major milestone with Fleet Commander, our large-scale desktop management tool for Fedora and RHEL. Oliver Gutierrez has been hard at work making it work with Active Directory in addition to the existing FreeIPA support. We know that a majority of people interested in Fleet Commander are only using Active Directory currently, so being able to use Active Directory with Fleet Commander should make this great tool available to a huge number of new users. So if you are managing a university computer lab or a large number of Fedora or RHEL clients in your company, we should soon have a Fleet Commander release out that you can use. And if you are not using Fedora or RHEL today, well, Fleet Commander is a very big reason for switching over!
We will do a proper announcement with further details once the release with Active Directory support is out.

PipeWire – I don’t have a major development to report, just a lot of steady work being done to stabilize and improve PipeWire. As mentioned earlier we now have Wayland screen sharing and recording working smoothly in the major browsers, which is the user-facing feature I think most of you will notice. Wim is still working on pushing the audio side of it forward, but that is also a huge task. We have started talking about organizing a new hackfest soon to see if we can accelerate the effort further again. The likely scenario at this point in time is that we start enabling the JACK side of PipeWire first, maybe as early as Fedora Workstation 31, and then come back and do the PulseAudio replacement as a last stage.

Improved input handling: Another area we keep focusing on is improving input in Fedora. Peter Hutterer and Benjamin Tissoires are working hard on improving the stack. Peter just sent an extensive RFC out for how to deal with high-resolution mice under Linux and Benjamin has been trying to get support for the Dell Totem landed. Unfortunately neither will be there for Fedora Workstation 30, but we expect to land this before Fedora Workstation 31.

Flicker-free boot
Hans de Goede has continued working on his effort to create a flicker-free boot experience with Fedora. The results of this work are on display in Fedora Workstation 30 and will for most of you now provide a seamless bootup experience. This effort is not so much about functionality as it is about ensuring you have an end-to-end polished experience with your Linux desktop. Things like the constant mode changes we have seen in the past contribute to giving Linux an image of being unpolished and we want Fedora to be the vehicle that breaks down that image.

Ramping up Silverblue

For those of you following Fedora you are probably aware of Silverblue, which is our effort to re-think the Linux desktop distribution from the ground up and help us take the Linux desktop to a new level. The distribution model hasn’t really changed much over the last 20 years and we probably polished up the offering as far as we can within the scope of that model. For instance I upgraded my system to Fedora 30 beta yesterday and it was a long and tedious process of looking at about 6000 individual packages get updated from the Fedora 29 version to the Fedora 30 version one by one. I didn’t hit a lot of major snags despite this being a beta, but it is screamingly obvious that updating your operating system in this way is both slow and inherently fragile, as any one of those 6000 packages might hit a problem during upgrade and leave the system in an unknown state, especially since it’s common for packages to run scripts and similar as part of their upgrade.

Silverblue provides a revolutionary replacement for that process. First of all since it ships as a unified image we make life a lot easier for our QE team who can then test and verify against a single image which is in a known state. This in turn ensures that you as a user can feel confident that the new OS version will not break something on your system. And since the new version is just an image stored on your system next to the old one, upgrading is just about rebooting your system. There is no waiting for individual packages to get upgraded, as everything is already there and ready. Compare it to booting into a different kernel version on Fedora, it is quick and trivial.
And this also means that in the unlikely case that there is a problem with the new OS version you can just as easily go back to the previous version, by rebooting again and choosing to boot into that version. So you basically have instant upgrades with instant rollback if needed.
We believe this will radically change the way you look at OS upgrades forever, in fact you might almost forget they are happening.

And since Silverblue will basically be a Flatpak (and other containers) only OS you will have a clean delimitation between OS and applications. This means that even if we do major updates to the host, your applications should remain unaffected by the host OS update.
In fact we have some very interesting developments underway for Flatpak, major new efforts that I would love to talk about, but they are tied to some major Red Hat announcements that will happen at this year’s Red Hat Summit, which takes place on May 7th – May 9th, so I will leave it as a teaser and then let you all know once the Summit is underway and Red Hat’s related major announcements are done.

There is a lot of work happening around Silverblue and as it happens Matthias Clasen wrote a long blog entry about it today. That blog goes into a lot more detail on some of the Silverblue work items we have been doing.

Anyway, I feel really excited about Silverblue and as we continue to refine the experience and figure out how everything will look in this brave new world I am sure everyone else will get excited too. Silverblue represents the single biggest evolution of the Linux desktop since the original GNOME and KDE releases back in the late nineties. It is not just about trying to tweak the existing experience, but an attempt at taking a big leap forward and providing an operating system that embodies all that we learned over these last 20 years and provides a natural home for developers and creators of all kinds in our container-centric computing future. Be sure to grab the Silverblue image of Fedora 30 beta and give it a test run. I recommend activating the flathub.org repo to get started in order to get a decent range of applications available. As we move forward we are working hard to ensure that you have the world of applications available out of the box, so there will be no need to go and enable any 3rd party repositories, but there is some more work that needs to happen before we can do that.

Summary
So Fedora Workstation 30 is going to be another exciting release of both the traditional RPM-based Workstation version and of Silverblue, and I hope they will encourage even more people to join our rapidly growing Fedora community. Be sure to join us in #fedora-workstation on freenode IRC to talk!

by uraeus at April 03, 2019 07:13 PM

March 28, 2019

Christian SchallerLVFS adopted by Linux Foundation

(Christian Schaller)

Today the announcement went out that the Linux Vendor Firmware Service has become an official Linux Foundation service. For those that don’t know it yet, LVFS is a service that provides firmware for your Linux-running hardware, and it was one of our initial efforts as part of the Fedora Workstation effort to drain the swamp in terms of making Linux a first-class desktop operating system.

The effort came about due to Peter Jones, who is Red Hat’s representative to the UEFI standards body, approaching me to talk about how Microsoft was trying to push for a standardized way to ship UEFI firmware for Windows, and how UEFI being a standard opened a path for us to actually get full support for this without each vendor having to ship and maintain their own proprietary firmware tools. So we did a meeting with Peter Jones and also brought in Richard Hughes, who had already been looking at the problem of firmware updates in Linux, partly due to his ColorHug hardware, and the effort got started with Peter working on the low-level OS tooling and Richard taking on building the service to drive distribution and the work to integrate it all into GNOME Software. One concern we had of course was whether we could reach critical mass for this and get vendors interested, but luckily Dell was just as keen on improving firmware handling under Linux as us and signed on from the start. Having Dell onboard helped give the effort a lot of credibility and as the service matured we ended up having more and more vendors sign up. We also reached out through Red Hat’s partnerships to push vendors to adopt supporting it. As Richard also mentions in his interview about it, we had made the solution as similar to Microsoft’s as possible to decrease the threshold for hardware vendors to join, the goal being that if they did the basic work to support Windows they could more or less just ship the same firmware file to LVFS.

One issue that we had gone back and forth about inside Red Hat was the formal setup of the service. While we all agreed the service was hugely beneficial, it felt like something that should be a shared service for all of Linux, and we felt that if the service was Red Hat-provided it might dissuade other vendors from joining. So we started looking around for a neutral place to land the service, while in the meantime LVFS had a sort of autonomous status, being run as a community effort by Richard Hughes. We ended up talking to Chris Wright, the Red Hat CTO, about the project, and he offered to facilitate contact with the Linux Foundation. The initial meetings were very positive and the Linux Foundation seemed interested in running the service right from the start. It did end up taking us quite some time to clear all the formal and technical hurdles to get there, but I for one am very happy to see the LVFS now being a vendor-neutral service provided by the Linux Foundation.

So a big thank you to Richard Hughes, Peter Jones, Chris Wright, Mario Limonciello and Dell, and the Linux Foundation for their help in getting us here. And also a big thank you to Fedora and the Fedora community for their help in providing us a place to develop and polish up this service to the benefit of all. To me this is one of many examples of how Fedora keeps innovating and leading the way on desktop Linux.

by uraeus at March 28, 2019 02:50 PM

March 08, 2019

Bastien NoceraVideos and Books in GNOME 3.32

(Bastien Nocera) GNOME 3.32 will very soon be released, so I thought I'd go back over a few of the things that happened with some of our content applications.

Videos
First, many thanks to Marta Bogdanowicz, Baptiste Mille-Mathias, Ekaterina Gerasimova and Andre Klapper who toiled away at updating Videos' user documentation since 2012, when it was still called “Totem”, and then again in 2014 when “Videos” appeared.

The other major change is that Videos is available, fully featured, from Flathub. It should play your Windows Movie Maker films, your circular wafers of polycarbonate plastic and aluminium, and your Devolver indie films. No more hunting codecs or libraries!

In the process, we also fixed a large number of outstanding issues, such as accommodating the app menu's planned disappearance, moving the audio/video properties tab to nautilus proper, making the thumbnailer available as an independent module, making the MPRIS plugin work better, and loads, loads more.


Download on Flathub
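
If you want to try it, installing from Flathub is a one-liner, assuming the Flathub remote is already configured and that Videos is published there under the org.gnome.Totem application ID:

$ flatpak install flathub org.gnome.Totem
$ flatpak run org.gnome.Totem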

Books

As Documents was removed from the core release, we felt it was time for Books to become independent. And rather than creating a new package inside a distribution, the Flathub version was updated. We also fixed a bunch of bugs, so that's cool :)
Download on Flathub

Weather

I didn't work directly on Weather, but I made some changes to libgweather which means it should be easier to contribute to its location database.

Adding new cities no longer requires adding a weather station by hand; the closest one will just be picked. Weather stations also don't need to be attached to cities any more; they were usually attached to villages, sometimes hamlets!

The automatic tests are also more stringent, and test for more things, which should hopefully mean fewer bugs.

And even more Flatpaks

On Flathub, you'll also find some applications I packaged up in the last 6 months. First is Teo, a Thomson emulator; then GBE+, a Game Boy emulator focused on accessory emulation; and a way to run your old Flash games offline.

by Bastien Nocera (noreply@blogger.com) at March 08, 2019 07:00 PM

February 27, 2019

Sebastian Pölsterlscikit-survival 0.7 released

scikit-survival 0.7 released

This is a long overdue maintenance release of scikit-survival 0.7 that adds compatibility with Python 3.7 and scikit-learn 0.20. For a complete list of changes see the release notes.

Download

Pre-built conda packages are available for Linux, OSX and Windows:

conda install -c sebp scikit-survival

Alternatively, scikit-survival can be installed from source via pip:

pip install -U scikit-survival

by sebp at February 27, 2019 09:42 PM

GStreamerGStreamer 1.15.2 unstable development release

(GStreamer)

The GStreamer team is pleased to announce the second development release in the unstable 1.15 release series.

The unstable 1.15 release series adds new features on top of the current stable 1.14 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

The unstable 1.15 release series is for testing and development purposes in the lead-up to the stable 1.16 series, which is scheduled for release in a few weeks' time. Any newly-added API can still change until that point, although it is rare for that to happen.

Check out the draft release notes highlighting all the new features, bugfixes, performance optimizations and other important changes.

Packagers: please note that quite a few plugins and libraries have moved between modules since 1.14, so please take extra care and make sure inter-module version dependencies are such that users can only upgrade all modules in one go, instead of seeing a mix of 1.15 and 1.14 on their system.

Binaries for Android, iOS, Mac OS X and Windows will be provided shortly.

Release tarballs can be downloaded directly here:

February 27, 2019 10:00 AM

February 22, 2019

GStreamerGStreamer Rust bindings 0.13.0 release

(GStreamer)

A new version of the GStreamer Rust bindings, 0.13.0, was released.

This new release is the first to include direct support for implementing GStreamer elements and other types in Rust. Previously this was provided via a different crate.
In addition to this, the new release features many API improvements, cleanups, newly added bindings and bugfixes.

As usual this release follows the latest gtk-rs release, and a new version of GStreamer plugins written in Rust was also released.

Details can be found in the release notes for gstreamer-rs and gstreamer-rs-sys.

The code and documentation for the bindings is available on the freedesktop.org GitLab, as well as on crates.io.

If you find any bugs, missing features or other issues please report them in GitLab.

February 22, 2019 03:00 PM

February 13, 2019

Víctor JáquezGenerating a GStreamer-1.14 bundle for TravisCI with Ubuntu/Trusty

To have continuous integration for your multimedia project hosted on GitHub with TravisCI, you may want to compile and run tests against a recent version of GStreamer. However, TravisCI mainly offers Ubuntu Trusty as the distribution to deploy in its CI, and that distribution packages GStreamer 1.2, which might be a bit old for your project's requirements.

A solution to this problem is to provide TravisCI with your own GStreamer bundle containing the version you want to compile and test your project against, in this case 1.14. This post is the recipe I followed to generate that GStreamer bundle with GstGL support.

There are three main issues:

  1. The packaged glib version is too old; we will build a newer one and hope not to hit an ABI breakage while running the CI.
  2. The packaged ffmpeg version is too old.
  3. As we want to compile GStreamer using gst-build, we need a recent version of Meson, which requires Python 3.5, not available in Trusty.

schroot

Old habits die hard: I have been using schroot to handle chroot environments without complaints. It takes care of bind-mounting /proc, /sys and all that repetitive stuff that seals the isolation of the chrooted environment.

The debootstrap variant I use is buildd because it installs the build-essential package.

$ sudo mkdir /srv/chroot/gst-trusty64
$ sudo debootstrap --arch=amd64 --variant=buildd trusty ./gst-trusty64/ http://archive.ubuntu.com/ubuntu
$ sudo vim /etc/schroot/chroot.d/gst

This is the schroot configuration I will use. Please, adapt it to your need.

[gst]
description=Ubuntu Trusty 64-bit for GStreamer
directory=/srv/chroot/gst-trusty64
type=directory
users=vjaquez
root-users=vjaquez
profile=default
setup.fstab=default/vjaquez-home.fstab

I am overriding the default fstab file with a custom one where the home directory of the vjaquez user points to a clean directory.

$ mkdir -p ~/home-chroot/gst
$ sudo vim /etc/schroot/default/vjaquez-home.fstab
# fstab: static file system information for chroots.
# Note that the mount point will be prefixed by the chroot path
# (CHROOT_PATH)
#
# <file system>  <mount point>   <type>  <options>       <dump>  <pass>
/proc           /proc           none    rw,bind         0       0
/sys            /sys            none    rw,bind         0       0
/dev            /dev            none    rw,bind         0       0
/dev/pts        /dev/pts        none    rw,bind         0       0
/home           /home           none    rw,bind         0       0
/home/vjaquez/home-chroot/gst   /home/vjaquez   none    rw,bind 0       0
/tmp            /tmp            none    rw,bind         0       0

configure chroot environment

We will get into the chroot environment as the super user in order to add the required packages. For that purpose we add the universe repository to apt.

  • libglib requires: autotools-dev gnome-pkg-tools libtool libffi-dev libelf-dev libpcre3-dev desktop-file-utils libselinux1-dev libgamin-dev dbus dbus-x11 shared-mime-info libxml2-utils
  • Python requires: libssl-dev libreadline-dev libsqlite3-dev
  • GStreamer requires: bison flex yasm python3-pip libasound2-dev libbz2-dev libcap-dev libdrm-dev libegl1-mesa-dev libfaad-dev libgl1-mesa-dev libgles2-mesa-dev libgmp-dev libgsl0-dev libjpeg-dev libmms-dev libmpg123-dev libogg-dev libopus-dev liborc-0.4-dev libpango1.0-dev libpng-dev libpulse-dev librtmp-dev libtheora-dev libtwolame-dev libvorbis-dev libvpx-dev libwebp-dev pkg-config unzip zlib1g-dev
  • And for general setup: language-pack-en ccache git curl
$ schroot --user root --chroot gst
(gst)# sed -i "s/main$/main universe/g" /etc/apt/sources.list
(gst)# apt update
(gst)# apt upgrade
(gst)# apt --no-install-recommends --no-install-suggests install \
autotools-dev gnome-pkg-tools libtool libffi-dev libelf-dev \
libpcre3-dev desktop-file-utils libselinux1-dev libgamin-dev dbus \
dbus-x11 shared-mime-info libxml2-utils \
libssl-dev libreadline-dev libsqlite3-dev \
language-pack-en ccache git curl bison flex yasm python3-pip \
libasound2-dev libbz2-dev libcap-dev libdrm-dev libegl1-mesa-dev \
libfaad-dev libgl1-mesa-dev libgles2-mesa-dev libgmp-dev libgsl0-dev \
libjpeg-dev libmms-dev libmpg123-dev libogg-dev libopus-dev \
liborc-0.4-dev libpango1.0-dev libpng-dev libpulse-dev librtmp-dev \
libtheora-dev libtwolame-dev libvorbis-dev libvpx-dev libwebp-dev \
pkg-config unzip zlib1g-dev

Finally we create our installation prefix, in this case /opt/gst, to avoid contaminating /usr/local, and then log out as root.

(gst)# mkdir -p /opt/gst
(gst)# chown vjaquez /opt/gst
(gst)# exit

compile ffmpeg 3.2

Now, let's log in again, but as the unprivileged user, to build the bundle, starting with ffmpeg. Notice that we are using ccache and building out of source.

$ schroot --chroot gst
(gst)$ git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg
(gst)$ cd ffmpeg
(gst)$ git checkout -b work n3.2.12
(gst)$ mkdir build
(gst)$ cd build
(gst)$ ../configure --disable-static --enable-shared \
--disable-programs --enable-pic --disable-doc --prefix=/opt/gst 
(gst)$ PATH=/usr/lib/ccache/:${PATH} make -j8 install

compile glib 2.48

(gst)$ cd ~
(gst)$ git clone https://gitlab.gnome.org/GNOME/glib.git
(gst)$ cd glib
(gst)$ git checkout -b work origin/glib-2-48
(gst)$ mkdir mybuild
(gst)$ cd mybuild
(gst)$ ../autogen.sh --prefix=/opt/gst
(gst)$ PATH=/usr/lib/ccache/:${PATH} make -j8 install

install Python 3.5

Pyenv is a project that automates installing and running multiple versions of Python in the user's home directory.

(gst)$ curl -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-installer | bash
(gst)$ ~/.pyenv/bin/pyenv install 3.5.0

Install meson 0.50

We will install the latest available version of Meson in the user's home directory; that is why PATH is extended and exported.

(gst)$ cd ~
(gst)$ ~/.pyenv/versions/3.5.0/bin/pip3 install --user meson
(gst)$ export PATH=${HOME}/.local/bin:${PATH}

build GStreamer 1.14

PKG_CONFIG_PATH is exported to expose the compiled versions of ffmpeg and glib. Notice that --libdir=lib is passed so the libraries are installed in /opt/gst/lib, avoiding the dispersion of pkg-config files across architecture-specific subdirectories.

(gst)$ cd ~/
(gst)$ export PKG_CONFIG_PATH=/opt/gst/lib/pkgconfig/
(gst)$ git clone https://gitlab.freedesktop.org/gstreamer/gst-build.git
(gst)$ cd gst-build
(gst)$ git checkout -b work origin/1.14
(gst)$ meson -Denable_python=false \
-Ddisable_gst_libav=false -Ddisable_gst_plugins_ugly=true \
-Ddisable_gst_plugins_bad=false -Ddisable_gst_devtools=true \
-Ddisable_gst_editing_services=true -Ddisable_rtsp_server=true \
-Ddisable_gst_omx=true -Ddisable_gstreamer_vaapi=true \
-Ddisable_gstreamer_sharp=true -Ddisable_introspection=true \
--prefix=/opt/gst build --libdir=lib
(gst)$ ninja -C build install

test!

(gst)$ cd ~/
(gst)$ LD_LIBRARY_PATH=/opt/gst/lib \
GST_PLUGIN_SYSTEM_PATH=/opt/gst/lib/gstreamer-1.0/ \
/opt/gst/bin/gst-inspect-1.0

And the list of available elements should be shown.

archive the bundle

(gst)$ cd ~/
(gst)$ tar zpcvf gstreamer-1.14-x86_64-linux-gnu.tar.gz -C /opt ./gst

update your .travis.yml

These are the packages you will need to add to run this generated GStreamer bundle (one way to pull them in, via the TravisCI apt addon, is sketched after the list):

  • libasound2-plugins
  • libfaad2
  • libfftw3-single3
  • libjack-jackd2-0
  • libmms0
  • libmpg123-0
  • libopus0
  • liborc-0.4-0
  • libpulsedsp
  • libsamplerate0
  • libspeexdsp1
  • libtdb1
  • libtheora0
  • libtwolame0
  • libwayland-egl1-mesa
  • libwebp5
  • libwebrtc-audio-processing-0
  • liborc-0.4-dev
  • pulseaudio
  • pulseaudio-utils
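
A sketch of the corresponding .travis.yml fragment, assuming the standard TravisCI apt addon (only the first few packages are listed here; extend it with the full list above):

      addons:
        apt:
          packages:
            - libasound2-plugins
            - libfaad2
            - libfftw3-single3
            # … and so on, one entry per package listed above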

And these are the before_install and before_script targets:

      before_install:
        - curl -L http://server.example/gstreamer-1.14-x86_64-linux-gnu.tar.gz | tar xz
        - sed -i "s;prefix=/opt/gst;prefix=$PWD/gst;g" $PWD/gst/lib/pkgconfig/*.pc
        - export PKG_CONFIG_PATH=$PWD/gst/lib/pkgconfig
        - export GST_PLUGIN_SYSTEM_PATH=$PWD/gst/lib/gstreamer-1.0
        - export GST_PLUGIN_SCANNER=$PWD/gst/libexec/gstreamer-1.0/gst-plugin-scanner
        - export PATH=$PATH:$PWD/gst/bin
        - export LD_LIBRARY_PATH=$PWD/gst/lib:$LD_LIBRARY_PATH

      before_script:
        - pulseaudio --start
        - gst-inspect-1.0 | grep Total

by vjaquez at February 13, 2019 07:43 PM