October 19, 2018

Robert McQueen: GNOME Foundation Hackfest 2018

(Robert McQueen)

This week, the GNOME Foundation Board of Directors met at the Collabora office in Cambridge, UK, for the second annual Foundation Hackfest. We were also joined by the Executive Director, Neil McGovern, and Director of Operations, Rosanna Yuen. This event was started by last year’s board and is a great opportunity for the newly-elected board to set out goals for the coming year and get some uninterrupted hacking done on policies, documents, etc. While it’s fresh in our mind, we wanted to tell you about some of the things we have been working on this week and what the community can hope to see in the coming months.

Wednesday: Goals

On Wednesday we set out to define the overall goals of the Foundation, so we could focus our activities for the coming years, ensuring that we were working on the right priorities. Neil helped to facilitate the discussion using the Charting Impact process. With that input, we went back to the purpose of the Foundation and mapped that to ten and five year goals, making sure that our current strategies and activities would be consistent with reaching those end points. This is turning out to be a very detailed and time-consuming process. We have made a great start, and hope to have something we can share for comments and input soon. The high level 10-year goals we identified boiled down to:

  • Sustainable project and foundation
  • Wider awareness and mindshare – being a thought leader
  • Increased user base

As we looked at the charter and bylaws, we identified a long-standing issue which we need to solve — there is currently no formal process to cover the “scope” of the Foundation in terms of which software we support with our resources. There is the release team, but that is only a subset of the software we support. We have some examples such as GIMP which “have always been here”, but at present there is no clear process to apply or be included in the Foundation. We need a clear list of projects that use resources such as CI, or have the right to use the GNOME trademark for the project. We have a couple of similar proposals from Allan Day and Carlos Soriano for how we could define and approve projects, and we are planning to work with them over the next couple of weeks to make one proposal for the board to review.

Thursday: Budget forecast

We started the second day with a review of the proposed forecast from Neil and Rosanna, because the Foundation’s financial year starts in October. We have policies in place to allow staff and committees to spend money against their budget without further approval being needed, which means that with no approved budget, it’s very hard for the Foundation to spend any money. The proposed budget was based on the previous year’s actual figures, with changes to reflect the increased staff headcount, increased spend on CI, increased staff travel costs, etc., and ensures that after the year’s spending we follow the reserves policy to keep enough cash to pay the foundation staff for a further year. We’re planning to go back and adjust a few things (internships, marketing, travel, etc) to make sure that we have the right resources for the goals we identified.

We had some “hacking time” in smaller groups to re-visit and clarify various policies, such as the conference and hackfest proposal/approval process and the travel sponsorship process, and to look at ways to support internationalization (particularly for indigenous languages).

Friday: Foundation Planning

The Board started Friday with a board-only (no staff) meeting to make sure we were aligned on the goals that we were setting for the Executive Director during the coming year, informed by the Foundation goals we worked on earlier in the week. To avoid the “seven bosses” problem, there is one board member (myself) responsible for managing the ED’s priorities and performance. It’s important that I take advantage of the opportunity of the face to face meeting to check in with the Board about their feedback for the ED and things I should work together with Neil on over the coming months.

We also discussed a related topic, which is the length of the term that directors serve on the Foundation Board. With 7 staff members, the Foundation needs consistent goals and management from one year to the next, and the time demands on board members should be reduced from previous periods where the Foundation hasn’t had an Executive Director. We want to make sure that our “ten year goals” don’t change every year and undermine the strategies that we put in place and spend the Foundation resources on. We’re planning to change the Board election process so that each director has a two-year term, so that half of the board seats are up for election each year. This also prevents the situation where the majority of the Board is changed at the same election, losing continuity and institutional knowledge, and taking months for people to get back up to speed.

We finished the day with a formal board meeting to approve the budget, and more hack time on various policies (and this blog!). Thanks to Collabora for use of their office space, food, and snacks – and thanks to my fellow Board members and the Foundation’s wonderful and growing staff team.

by ramcq at October 19, 2018 03:38 PM

October 15, 2018

Robert McQueen: Flatpaks, sandboxes and security

(Robert McQueen)

Last week the Flatpak community woke to the “news” that we are making the world a less secure place and we need to rethink what we’re doing. Personally, I’m not sure this is a fair assessment of the situation. The “tl;dr” summary is: Flatpak confers many benefits besides the sandboxing, and even looking just at the sandboxing, improving app security is a huge problem space and so is a work in progress across multiple upstream projects. Much of what has been achieved so far already delivers incremental improvements in security, and we’re making solid progress on the wider app distribution and portability problem space.

Sandboxing, like security in general, isn’t a binary thing – you can’t just say because you have a sandbox, you have 100% security. Like having two locks on your front door, two front doors, or locks on your windows too, sensible security is about defense in depth. Each barrier that you implement precludes some invalid or possibly malicious behaviour. You hope that in total, all of these barriers would prevent anything bad, but you can never really guarantee this – it’s about multiplying together probabilities to get a smaller number. A computer which is switched off, in a locked faraday cage, with no connectivity, is perfectly secure – but it’s also perfectly useless because you cannot actually use it. Sandboxing is very much the same – whilst you could easily take systemd-nspawn, Docker or any other container technology of choice and 100% lock down a desktop app, you wouldn’t be able to interact with it at all.

Network services have incubated and driven most of the container usage on Linux up until now but they are fundamentally different to desktop applications. For services you can write a simple list of permissions like “listen on this network port” and “save files over here” whereas desktop applications have a much larger number of touchpoints to the outside world which the user expects and requires for normal functionality. Just thinking off the top of my head you need to consider access to the filesystem, display server, input devices, notifications, IPC, accessibility, fonts, themes, configuration, audio playback and capture, video playback, screen sharing, GPU hardware, printing, app launching, removable media, and joysticks. Without making holes in the sandbox to allow your app access to these, it either wouldn’t work at all, or it wouldn’t work in the way that people have come to expect.

What Flatpak brings to this is understanding of the specific desktop app problem space – most of what I listed above is to a greater or lesser extent understood by Flatpak, or support is planned. The Flatpak sandbox is very configurable, allowing the application author to specify which of these resources they need access to. The Flatpak CLI asks the user about these during installation, and we provide the flatpak override command to allow the user to add or remove these sandbox escapes. Flatpak has introduced portals into the Linux desktop ecosystem, which we’re really pleased to be sharing with snap since earlier this year, to provide runtime access to resources outside the sandbox based on policy and user consent. For instance, document access, app launching, input methods and recursive sandboxing (“sandbox me harder”) have portals.
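
To make the override mechanism mentioned above a little more concrete, here is a small command-line sketch; org.example.App is a placeholder application ID, and which permissions you would actually tweak depends on what the app declares:

# Inspect the permissions an installed app was built with:
flatpak info --show-permissions org.example.App

# Remove a sandbox escape, e.g. deny access to the home directory:
flatpak override --nofilesystem=home org.example.App

# Or grant a narrower one instead, e.g. read-only access to the music folder:
flatpak override --filesystem=~/Music:ro org.example.App

# Show the per-app overrides currently in effect:
flatpak override --show org.example.App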

The starting security position on the desktop was quite terrible – anything in your session had basically complete access to everything belonging to your user, and many places to hide.

  • Access to the X socket allows arbitrary input and output to any other app on your desktop, but without it, no app on an X desktop would work. Wayland fixes this, so Flatpak has a fallback setting to allow Wayland to be used if present, and the X socket to be shared if not.
  • Unrestricted access to the PulseAudio socket allows you to reconfigure audio routing, capture microphone input, etc. To ensure user consent we need a portal to control this, where by default you can play audio back but device access needs consent; work is under way to create this portal.
  • Access to the webcam device node means an app can capture video whenever it wants – solving this required a whole new project.
  • Sandboxing access to configuration in dconf is a priority for the project right now, after the 1.0 release.

Even with these caveats, Flatpak brings a bunch of default sandboxing – IPC filtering, a new filesystem, process and UID namespace, seccomp filtering, an immutable /usr and /app – and each of these is already a barrier to certain attacks.

Looking at the specific concerns raised:

  • Hopefully from the above it’s clear that sandboxing desktop apps isn’t just a switch we can flick overnight, but what we already have is far better than having nothing at all. It’s not the intention of Flatpak to mislead people into thinking that “sandboxed” means impervious to all known security issues and able to access nothing whatsoever, but we do want to encourage the use of the new technology so that we can work together on driving adoption and making improvements. The idea is that over time, as the portals are filled out to cover the majority of the interfaces described, and supported in the major widget sets / frameworks, the criteria for earning a nice “sandboxed” badge or submitting your app to Flathub will become stricter. Many of the apps that use --filesystem=home do so because they use old widget sets like Gtk2+ and frameworks like Electron that don’t support portals (yet!). Contributions to improve portal integration into other frameworks and desktops are very welcome and, as mentioned above, will also improve integration and security in other systems that use portals, such as snap.
  • As Alex has already blogged, the freedesktop.org 1.6 runtime was something we threw together because we needed something distro agnostic to actually be able to bootstrap the entire concept of Flatpak and runtimes. A confusing mishmash of Yocto with flatpak-builder, it’s thankfully nearing some form of retirement after a recent round of security fixes. The replacement freedesktop-sdk project has just released its first stable 18.08 release, and rather than “one or two people in their spare time because something like this needs to exist”, is backed by a team from Codethink and with support from the Flatpak, GNOME and KDE communities.
  • I’m not sure how fixing and disclosing a security problem in a relatively immature pre-1.0 program (in June 2017, Flathub had less than 50 apps) is considered an ongoing problem from a security perspective. The wording in the release notes?

Zooming out a little bit, I think it’s worth also highlighting some of the other reasons why Flatpak exists at all – these are far bigger problems with the Linux desktop ecosystem than app security alone, and Flatpak brings a huge array of benefits to the table:

  • Allowing apps to become agnostic of their underlying distribution. The reason that runtimes exist at all is so that apps can specify the ABI and dependencies that they need, and you can run them on whatever distro you want. Flatpak has had this from day one, and it’s been hugely reliable because the sandboxed /usr means the app can rely on getting whatever it needs. This is the foundation on which everything else is built.
  • Separating the release/update cadence of distributions from the apps. The flip side of this, which I think is huge for more conservative platforms like Debian or enterprise distributions which don’t want to break their ABIs, hardware support or other guarantees, is that you can still get new apps into users hands. Wider than this, I think it allows us huge new freedoms to move in a direction of reinventing the distro – once you start to pull the gnarly complexity of apps and their dependencies into sandboxes, your constraints are hugely reduced and you can slim down or radically rethink the host system underneath. At Endless OS, Flatpak literally changed the structure of our engineering team, and for the first time allowed us to develop and deliver our OS, SDK and apps in independent teams each with their own cadence.
  • Disintermediating app developers from their users. Flathub now offers over 400 apps, and (at a rough count by Nick Richards over the summer) over half of them are directly maintained by or maintained in conjunction with the upstream developers. This is fantastic – we get the releases when they come out, the developers can choose the dependencies and configuration they need – and they get to deliver this same experience to everyone.
  • Decentralised. Anyone can set up a Flatpak repo! We started our own at Flathub because there needs to be a center of gravity and a complete story to build out a user and developer base, but the idea is that anyone can use the same tools that we do, and publish whatever/wherever they want. GNOME uses GitLab CI to publish nightly Flatpak builds, KDE is setting up the same in their infrastructure, and Fedora is working on completely different infrastructure to build and deliver their packaged applications as Flatpaks.
  • Easy to build. I’ve worked on Debian packages, RPMs, Yocto, etc and I can honestly say that flatpak-builder has done a very good job of making it really easy to put your app manifest together. Because the builds are sandboxed and each runtime brings with it a consistent SDK environment, they are very reliably reproducible. It’s worth just calling this out because when you’re trying to attract developers to your platform or contributors to your app, hurdles like complex or fragile tools and build processes to learn and debug all add resistance and drag, and discourage contributions. GNOME Builder can take any flatpak’d app and build it for you automatically, ready to hack within minutes.
  • Different ways to distribute apps. Using OSTree under the hood, Flatpak supports single-file app .bundles, pulling from OSTree repos and OCI registries, and at Endless we’ve been working on peer-to-peer distribution like USB sticks and LAN sharing.

Nobody is trying to claim that Flatpak solves all of the problems at once, or that what we have is anywhere near perfect or completely secure, but I think what we have is pretty damn cool (I just wish we’d had it 10 years ago!). Even just in the security space, the overall effort we need is huge, but this is a journey that we are happy to be embarking together with the whole Linux desktop community. Thanks for reading, trying it out, and lending us a hand.

by ramcq at October 15, 2018 01:40 PM

October 11, 2018

Andy Wingo: heap object representation in spidermonkey

(Andy Wingo)

I was having a look through SpiderMonkey's source code today and found something interesting about how it represents heap objects and wanted to share.

I was first looking to see how to implement arbitrary-length integers ("bigints") by storing the digits inline in the allocated object. (I'll use the term "object" here, but from JS's perspective, bigints are rather values; they don't have identity. But I digress.) So you have a header indicating how many words it takes to store the digits, and the digits follow. This is how JavaScriptCore and V8 implementations of bigints work.

Incidentally, JSC's implementation was taken from V8. V8's was taken from Dart. Dart's was taken from Go. We might take SpiderMonkey's from Scheme48. Good times, right??

When seeing if SpiderMonkey could use this same strategy, I couldn't find how to make a variable-sized GC-managed allocation. It turns out that in SpiderMonkey you can't do that! SM's memory management system wants to work in terms of fixed-sized "cells". Even for objects that store properties inline in named slots, that's implemented in terms of standard cell sizes. So if an object has 6 slots, it might be implemented as instances of cells that hold 8 slots.

Truly variable-sized allocations seem to be managed off-heap, via malloc or other allocators. I am not quite sure how this works for GC-traced allocations like arrays, but let's assume that somehow it does.

Anyway, the point of this blog post. I was looking to see which part of SpiderMonkey reserves space for type information. For example, almost all objects in V8 start with a "map" word. This is the object's "hidden class". To know what kind of object you've got, you look at the map word. That word points to information corresponding to a class of objects; it's not available to store information that might vary between objects of that same class.

Interestingly, SpiderMonkey doesn't have a map word! Or at least, it doesn't have them on all allocations. Concretely, BigInt values don't need to reserve space for a map word. I can start storing data right from the beginning of the object.

But how can this work, you ask? How does the engine know what the type of some arbitrary object is?

The answer has a few interesting wrinkles. Firstly I should say that for objects that need hidden classes -- e.g. generic JavaScript objects -- there is indeed a map word. SpiderMonkey calls it a "Shape" instead of a "map" or a "hidden class" or a "structure" (as in JSC), but it's there, for that subset of objects.

But not all heap objects need to have these words. Strings, for example, are values rather than objects, and in SpiderMonkey they just have a small type code rather than a map word. But you know it's a string rather than something else in two ways: one, for "newborn" objects (those in the nursery), the GC reserves a bit to indicate whether the object is a string or not. (Really: it's specific to strings.)

For objects promoted out to the heap ("tenured" objects), objects of similar kinds are allocated in the same memory region (in kind-specific "arenas"). There are about a dozen trace kinds, corresponding to arena kinds. To get the kind of object, you find its arena by rounding the object's address down to the arena size, then look at the arena to see what kind of objects it has.

There's another cell bit reserved to indicate that an object has been moved, and that the rest of the bits have been overwritten with a forwarding pointer. These two reserved bits mostly don't conflict with any use a derived class might want to make from the first word of an object; if the derived class uses the first word for integer data, it's easy to just reserve the bits. If the first word is a pointer, then it's probably always aligned to a 4- or 8-byte boundary, so the low bits are zero anyway.
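
To make the address arithmetic concrete, here is a tiny sketch of the two tricks described above; the arena size and bit assignments are made-up illustrative values, not the real SpiderMonkey constants:

ARENA_SIZE = 4096  # assumption: an illustrative power-of-two arena size

def arena_base(addr):
    # Find the arena an object lives in by rounding its address down
    # to the nearest arena boundary; the arena then records what kind
    # of objects it holds.
    return addr & ~(ARENA_SIZE - 1)

IS_STRING_BIT = 0b01     # assumption: illustrative positions for the
IS_FORWARDED_BIT = 0b10  # two reserved cell bits described above

def reserved_flags(first_word):
    # A pointer stored in the first word is 4- or 8-byte aligned, so its
    # low bits are zero and can carry the reserved GC flags instead.
    return first_word & (IS_STRING_BIT | IS_FORWARDED_BIT)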

The upshot is that while we won't be able to allocate digits inline to BigInt objects in SpiderMonkey in the general case, we won't have a per-object map word overhead; and we can optimize the common case of digits requiring only a word or two of storage to have the digit pointer point to inline storage. GC is about compromise, and it seems this can be a good one.

Well, that's all I wanted to say. Looking forward to getting BigInt turned on upstream in Firefox!

by Andy Wingo at October 11, 2018 02:33 PM

October 09, 2018

GStreamer: GStreamer Conference 2018: Talk Abstracts and Speaker Biographies now available

(GStreamer)

The GStreamer Conference team is pleased to announce that talk abstracts and speaker biographies are now available for this year's lineup of talks and speakers, covering again an exciting range of topics!

The GStreamer Conference 2018 will take place on 25-26 October 2018 in Edinburgh (Scotland) just after the Embedded Linux Conference Europe (ELCE).

Details about the conference and how to register can be found on the conference website.

This year's topics and speakers:

Lightning Talks:

  • gst-mfx, gst-msdk and the Intel Media SDK: an update (provisional title)
    Haihao Xiang, Intel
  • Improved flexibility and stability in GStreamer V4L2 support
    Nicolas Dufresne, Collabora
  • GstQTOverlay
    Carlos Aguero, RidgeRun
  • Documenting GStreamer
    Mathieu Duponchelle, Centricular
  • GstCUDA
    Jose Jimenez-Chavarria, RidgeRun
  • GstWebRTCBin in the real world
    Mathieu Duponchelle, Centricular
  • Servo and GStreamer
    Víctor Jáquez, Igalia
  • Interoperability between GStreamer and DirectShow
    Stéphane Cerveau, Fluendo
  • Interoperability between GStreamer and FFMPEG
    Marek Olejnik, Fluendo
  • Encrypted Media Extensions with GStreamer in WebKit
    Xabier Rodríguez Calvar, Igalia
  • DataChannels in GstWebRTC
    Matthew Waters, Centricular
  • Me TV – a journey from C and Xine to Rust and GStreamer, via D
    Russel Winder
  • ...and many more
  • ...
  • Submit your lightning talk now!

Many thanks to our sponsors, Collabora, Pexip, Igalia, Fluendo, Facebook, Centricular and Zeiss, without whom the conference would not be possible in this form. And to Ubicast who will be recording the talks again.

Considering becoming a sponsor? Please check out our sponsor brief.

We hope to see you all in Edinburgh in October! Don't forget to register!

October 09, 2018 01:30 PM

October 07, 2018

Sebastian Pölsterl: scikit-survival 0.6.0 released

scikit-survival 0.6.0 released

Today, I released scikit-survival 0.6.0. This release is long overdue and adds support for NumPy 1.14 and pandas up to 0.23. In addition, the new class sksurv.util.Surv makes it easier to construct a structured array from NumPy arrays, lists, or a pandas data frame. The examples below showcase how to create a structured array for the dependent variable.

First, we can construct a structured array from a list of boolean event indicators and a list of integers for the observed time:

import pandas
from sksurv.util import Surv
 
y = Surv.from_arrays([True, False, False, True, True], [1, 19, 11, 6, 9])

which equals

y = numpy.array([( True,  1.), (False, 19.), (False, 11.), ( True,  6.),
                 ( True,  9.)], dtype=[('event', '?'), ('time', '<f8')])

Alternatively, we can use a 0/1 valued list for the event indicator:

y = Surv.from_arrays([1, 0, 0, 1, 1], [1, 19, 11, 6, 9])

Finally, if event indicator and observed time are stored in a pandas data frame,
we can just use Surv.from_dataframe and tell it what columns to use:

data = pandas.DataFrame({"some_event": [True, False, False, True, True],
                         "time_of_event": [1, 19, 11, 6, 9]})
y = Surv.from_dataframe("some_event", "time_of_event", data)

The some_event column can also be 0/1 valued, of course.
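
To show where such a structured array ends up, here is a minimal sketch that fits one of the estimators in scikit-survival on it; the feature matrix below is made up purely for illustration:

import numpy
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.util import Surv

# Made-up covariates, one row per sample (illustrative values only).
X = numpy.array([[1.0, 0.2],
                 [0.5, 1.1],
                 [0.3, 0.9],
                 [1.2, 0.4],
                 [0.8, 0.7]])
y = Surv.from_arrays([True, False, False, True, True], [1, 19, 11, 6, 9])

estimator = CoxPHSurvivalAnalysis()
estimator.fit(X, y)          # y is the structured array built above
print(estimator.predict(X))  # higher scores indicate higher predicted risk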

Download

You can install the latest version via Anaconda (Linux, OSX and Windows):

conda install -c sebp scikit-survival

or via pip:

pip install -U scikit-survival

by sebp at October 07, 2018 04:14 PM

October 05, 2018

Christian Schaller: GStreamer Conference 2018

(Christian Schaller)

For the 9th time this year there will be the GStreamer Conference. This year it will be in Edinburgh, UK right after the Embedded Linux Conference Europe, on the 25th and 26th of October. The GStreamer Conference is always a lot of fun with a wide variety of talks around Linux and multimedia, not all of them tied to GStreamer itself; for instance in the past we had a lot of talks about PulseAudio, V4L, OpenGL and Vulkan and new codecs. This year I am really looking forward to talks such as the DeepStream talk by NVidia, Bringing Deep Neural Networks to GStreamer by Pexip and D3Dx Video Game Streaming on Windows by Bebo, to mention a few.

For a variety of reasons I missed the last couple of conferences, but this year I will be back in attendance and I am really looking forward to it. In fact it will be the first GStreamer Conference I am attending that I am not the organizer for, so it will be nice to really be able to just enjoy the conference and the hallway track this time.

So if you haven’t booked yourself in already I strongly recommend going to the GStreamer Conference website and getting yourself signed up to attend.

See you all in Edinburgh!

Also looking forward to seeing everyone attending the PipeWire Hackfest happening right after the GStreamer Conference.

by uraeus at October 05, 2018 05:08 PM

October 02, 2018

GStreamer: GStreamer 1.14.4 stable bug fix release

(GStreamer)

The GStreamer team is pleased to announce another bug fix release in the stable 1.14 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.14.x.

See /releases/1.14/ for the details.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Download tarballs directly here: gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-sharp, gstreamer-vaapi, or gst-omx.

October 02, 2018 11:30 PM

September 24, 2018

Christian Schaller: Getting the team together to revolutionize Linux audio

(Christian Schaller)

So anyone reading my blog posts would probably have picked up on my excitement for the PipeWire project, the effort to unify the world of Linux audio, add an equivalent video bit and provide multimedia handling capabilities to containerized applications. The video part as I have mentioned before was the critical first step and that is starting to look really good with the screen sharing functionality in GNOME shell already using PipeWire and equivalent PipeWire support being added to KDE by Jan Grulich. We have internal patches for both Firefox and Chrome(ium) which we are polishing up to propose them upstream, but we will in the meantime offer them as downstream patches in Fedora as soon as they are ready for primetime. Once those patches are deployed you should have any browser based desktop sharing software, like Google Hangouts, working fully under Wayland (and X).

With the video part of PipeWire already in production we decided the time has come to try to accelerate the development of the audio bits. So PipeWire creator Wim Taymans, PulseAudio developer Arun Raghavan and myself decided to try to host a PipeWire hackfest this fall to bring together many of the core Linux audio developers to try to hash out a plan and a roadmap. So I am very happy to say that at the end of October we will have a gathering in Edinburgh to work on this and the critical people we were hoping to have there are coming. Filipe Coelho, who is the current lead developer on Jack, will be there alongside Arun Raghavan, Colin Guthrie and Tanu Kaskinen from PulseAudio; Bastien Nocera from the GNOME project and Jan Grulich from KDE will be there representing desktop integration; and finally Nirbheek Chauhan, Nicolas Dufresne and George Kiagiadakis from the GStreamer project. I think we have about the right number of people for this to be productive and at the same time have representation from everyone who needs to be there, so I am feeling very optimistic that we can come out of this event with both a plan for what we want to do and the right people involved to make it happen. The idea that we can have a shared infrastructure for consumer level audio and pro-audio under Linux really excites me and I do believe that if we do this right Linux will take a huge step forward as a natural home for pro-audio desktop users.

A big thank you to the GNOME Foundation for sponsoring this event and allowing us to bring all these people together!

by uraeus at September 24, 2018 07:31 PM

September 19, 2018

GStreamer: GStreamer Conference 2018: Schedule of Talks and Speakers available

(GStreamer)

The GStreamer Conference team is pleased to announce this year's lineup of talks and speakers covering again an exciting range of topics!

The GStreamer Conference 2018 will take place on 25-26 October 2018 in Edinburgh (Scotland) just after the Embedded Linux Conference Europe (ELCE).

Details about the conference and how to register can be found on the conference website.

This year's topics and speakers:

Lightning Talks:

  • gst-mfx, gst-msdk and the Intel Media SDK: an update (provisional title)
    Haihao Xiang, Intel
  • Improved flexibility and stability in GStreamer V4L2 support
    Nicolas Dufresne, Collabora
  • GstQTOverlay
    Carlos Aguero, RidgeRun
  • Documenting GStreamer
    Mathieu Duponchelle, Centricular
  • GstCUDA
    Jose Jimenez-Chavarria, RidgeRun
  • GstWebRTCBin in the real world
    Mathieu Duponchelle, Centricular
  • Servo and GStreamer
    Víctor Jáquez, Igalia
  • Interoperability between GStreamer and DirectShow
    Stéphane Cerveau, Fluendo
  • Interoperability between GStreamer and FFMPEG
    Marek Olejnik, Fluendo
  • Encrypted Media Extensions with GStreamer in WebKit
    Xabier Rodríguez Calvar, Igalia
  • DataChannels in GstWebRTC
    Matthew Waters, Centricular
  • ...and many more
  • ...
  • Submit your lightning talk now!

Full talk abstracts and speaker biographies will be published shortly.

Many thanks to our sponsors, Collabora, Igalia, Fluendo, Facebook, Centricular and Zeiss, without whom the conference would not be possible in this form. And to Ubicast who will be recording the talks again.

Considering becoming a sponsor? Please check out our sponsor brief.

We hope to see you all in Edinburgh in October! Don't forget to register!

September 19, 2018 05:00 PM

September 16, 2018

GStreamer: GStreamer 1.14.3 stable bug fix release

(GStreamer)

The GStreamer team is pleased to announce another bug fix release in the stable 1.14 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.14.x.

See /releases/1.14/ for the details.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Download tarballs directly here: gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-sharp, gstreamer-vaapi, or gst-omx.

September 16, 2018 08:00 PM

September 10, 2018

Sebastian Dröge: GStreamer Rust bindings 0.12 and GStreamer Plugin 0.3 release

(Sebastian Dröge)

After almost 6 months, a new release of the GStreamer Rust bindings and the GStreamer plugin writing infrastructure for Rust is out. As usual this was coinciding with the release of all the gtk-rs crates to make use of all the new features they contain.

Thanks to all the contributors of both gtk-rs and the GStreamer bindings for all the nice changes that happened over the last 6 months!

And as usual, if you find any bugs please report them and if you have any questions let me know.

GStreamer Bindings

For the full changelog check here.

Most changes this time were internal, especially because many user-facing changes (like Debug impls for various types) were already backported to the minor releases in the 0.11 release series.

WebRTC

The biggest change this time is probably the inclusion of bindings for the GStreamer WebRTC library.

This allows building all kinds of WebRTC applications outside the browser (or providing a WebRTC implementation for a browser), and while not as full-featured as Google’s own implementation, it interoperates well with the various browsers and generally works much better on embedded devices.

A small example application in Rust is available here.

Serde

Optionally, implementations of serde’s Serialize and Deserialize traits can be enabled for various fundamental GStreamer types, including caps, buffers, events, messages and tag lists. This allows serializing them into any format that can be handled by serde (and there are many!), and deserializing them back into normal Rust structs.

Generic Tag API

Previously only a strongly-typed tag API was exposed that made it impossible to use the wrong data type for a specific tag, e.g. code that tries to store a string for the track number or an integer for the title would simply not compile:

let mut tags = gst::TagList::new();
{
    let tags = tags.get_mut().unwrap();
    tags.add::<Title>(&"some title", gst::TagMergeMode::Append);
    tags.add::<TrackNumber>(&12, gst::TagMergeMode::Append);
}

While this is convenient, it made it rather complicated to work with tag lists if you only wanted to handle them in a generic way. For example by iterating over the tag list and simply checking what kind of tags are available. To solve that, a new generic API was added in addition. This works on glib::Values, which can store any kind of type, and using the wrong type for a specific tag would simply cause an error at runtime instead of compile-time.

let mut tags = gst::TagList::new();
{
    let tags = tags.get_mut().unwrap();
    tags.add_generic(&gst::tags::TAG_TITLE, &"some title", gst::TagMergeMode::Append)
        .expect("wrong type for title tag");
    tags.add_generic(&gst::tags::TAG_TRACK_NUMBER, &12, gst::TagMergeMode::Append)
        .expect("wrong type for track number tag");
}

This also greatly simplified the serde serialization/deserialization for tag lists.

GStreamer Plugins

For the full changelog check here.

gobject-subclass

The main change this time is that all the generic GObject subclassing infrastructure was moved out of the gst-plugin crate and moved to its own gobject-subclass crate as part of the gtk-rs organization.

As part of this, some major refactoring has happened that allows subclassing more different types but also makes it simpler to add new types. There are also experimental crates for adding some subclassing support to gio and gtk, and a PR for autogenerating part of the code via the gir code generator.

More classes!

The other big addition this time is that it’s now possible to subclass GStreamer Pads and GhostPads, to implement the ChildProxy interface and to subclass the Aggregator and AggregatorPad class.

This now makes it possible to write custom mixer/muxer-style elements (or generally elements that have multiple sink pads) in Rust via the Aggregator base class, and to have custom pad types for elements to allow for setting custom properties on the pads (e.g. to control the opacity of a single video mixer input).

There is currently no example for such an element, but I’ll add a very simple video mixer to the repository some time in the next weeks and will also write a blog post about it for explaining all the steps.

by slomo at September 10, 2018 11:41 AM

August 01, 2018

Christian Schaller: Supporting developers on Patreon (and similar)

(Christian Schaller)

For some time now I have been supporting two Linux developers on Patreon: Ryan Gordon, of Linux game porting and SDL development fame, and Tanu Kaskinen, who is a lead developer on PulseAudio these days.

One of the things I often think about is how we can enable more people to make a living from working on the Linux desktop and related technologies. If you’re reading my blog there is a good chance that you are enabling people to make a living working on the Linux desktop by paying for RHEL Workstation subscriptions through your work. So a big thank you for that. The fact that Red Hat has paying customers for our desktop products is critical in terms of our ability to do so much of the maintenance and development work we do around the Linux Desktop and Linux graphics stack.

That said, I do feel we need more venues than just employment by companies such as Red Hat, and this is where I would love to see more people supporting their favourite projects and developers through, for instance, Patreon. Because unlike one-off funding campaigns, recurring crowdfunding like Patreon can give developers a predictable income, which means they don’t have to worry about how to pay their rent or how to feed their kids.

So in terms of the two Patreons I support, Ryan is probably the closest to being able to rely on it for his livelihood, but of course more Patreon supporters will enable Ryan to be even less reliant on payments from game makers. And Tanu’s Patreon income at the moment is helping him to spend quite a bit of time on PulseAudio, but it is definitely not providing him with a living income. So if you are reading this I strongly recommend that you support Ryan Gordon and Tanu Kaskinen on Patreon. You don’t need to pledge a lot, I think in general it is in fact better to have many people pledging 10 dollars a month than a few pledging hundreds, because the impact of one person coming or going is thus a lot less. And of course this is not just limited to Ryan and Tanu, search around and see if any projects or developers you personally care deeply about are using crowdfunding and support them, because if more of us did so then more people would be able to make a living from developing our favourite open source software.

Update: Seems I wasn’t the only one thinking about this, Flatpak announced today that application devs can put their crowdfunding information into their flatpaks and it will be advertised in GNOME Software.

by uraeus at August 01, 2018 02:32 PM

July 31, 2018

Thibault Saunier: WebKitGTK and WPE gain WebRTC support back!

(Thibault Saunier)

WebRTC is a w3c draft protocol that “enables rich, high-quality RTP applications to be developed for the browser, mobile platforms, and IoT devices, and allow them all to communicate via a common set of protocols”. The protocol is mainly used to provide video conferencing systems from within web browsers.

https://appr.tc running in WebKitGTK

A brief history

At the very beginning of the WebRTC protocol, before 2013, Google was still using WebKit in Chrome and they started to implement support using LibWebRTC, but when they started the Blink fork the implementation stopped in WebKit.

Around 2015/2016 Ericsson and Igalia (later sponsored by Metrological) implemented WebRTC support into WebKit, but instead of using LibWebRTC from Google, OpenWebRTC was used. This had the advantage of being implemented on top of the GStreamer framework, which happens to be used for the multimedia processing inside WebKitGTK and WebKitWPE. At that point in time, the standardization of the WebRTC protocol was still moving fast, mostly pushed by Google itself, and it was hard to be interoperable with the rest of the world. Despite that, the WebKit/GTK/WPE WebRTC implementation started to be usable with websites like appr.tc at the end of 2016.

Meanwhile, in late 2016, Apple decided to implement WebRTC support on top of google LibWebRTC in their ports of WebKit which led to WebRTC support in WebKit announcement in June 2017.

Later in 2017 the OpenWebRTC project lost momentum and as it was considered unmaintained, we, at Igalia, decided to use LibWebRTC for WebKitGTK and WebKitWPE too. At that point, the OpenWebRTC backend was completely removed.

GStreamer/LibWebRTC implementation

Given that Apple had implemented a LibWebRTC based backend in WebKit, and because this library is being used by the two main web browsers (Chrome and Firefox), we decided to reuse Apple’s work to implement support in our ports based on LibWebRTC at the end of 2017. At that point, the two main APIs required to allow video conferencing with WebRTC needed to be implemented:

  • MediaDevices.GetUserMedia and MediaStream: Allows retrieving audio and video streams from the user’s cameras and microphones (potentially more than that, but those are the main use cases we cared about).
  • RTCPeerConnection: Represents a WebRTC connection between the local computer and a remote peer.

As WebKit/GTK/WPE heavily relies on GStreamer for the multimedia processing, and given its flexibility, we made sure that our implementation of those APIs leverages the power of the framework and the existing integration of GStreamer in our WebKit ports.

Note that the whole implementation is reusing (after refactoring) big parts of the infrastructure introduced during the previously described history of WebRTC support in WebKit.

GetUserMedia/MediaStream

To implement that part of the API the following main components were developed:

  • RealtimeMediaSourceCenterLibWebRTC: Main entry point for our GStreamer based LibWebRTC backend.
  • GStreamerCaptureDeviceManager: A class to list and manage local Video/Audio devices using the GstDeviceMonitor API.
  • GStreamerCaptureDevice: Implementation of WebKit abstraction for capture devices, basically wrapping GstDevices.
  • GStreamerMediaStreamSource: A GStreamer Source element which wraps WebKit abstraction of MediaStreams to be used directly in a playbin3 pipeline (through a custom mediastream:// protocol). This implementation leverages latest GstStream APIs so it is already one foot into the future.

The main commit can be found here

RTCPeerConnection

Enabling the PeerConnection API meant bridging previously implemented APIs and the LibWebRTC backend developed by Apple:

  • RealtimeOutgoing/Video/Audio/SourceLibWebRTC: Passing local stream (basically from microphone or camera) to LibWebRTC to be sent to the peer.
  • RealtimeIncoming/Video/Audio/SourceLibWebRTC: Passing remote stream (from a remote peer) to the MediaStream object and in turn to the GStreamerMediaStreamSource element.

On top of that, and to leverage GStreamer memory management and negotiation capabilities, we implemented encoders and decoders for LibWebRTC (namely GStreamerVideoEncoder and GStreamerVideoDecoders). This brings us a huge number of hardware-accelerated encoder and decoder implementations, especially on embedded devices, which is a big advantage in particular for WPE which is tuned for those platforms.

The main commit can be found here

WebKitWebRTC dataflow diagram

Conclusion

While we were able to make GStreamer and LibWebRTC work well together in that implementation, using the new GstWebRTC component (that is now in upstream GStreamer) as a WebRTC backend would be cleaner. Many pieces of the current implementation could be reused and it would allow us to have a simpler infrastructure and avoid having several RTP stacks in the WebKitGTK and WebKitWPE ports.

Most of the required APIs and features have been implemented, but a few are still under development (namely MediaDevices.enumerateDevices, canvas captureStream, and WebAudio and MediaStream bridging), meaning that many Web applications using WebRTC already work, but some don’t yet; we are working on those!

A big thanks to my employer Igalia and Metrological for sponsoring our work on that!

by thiblahute at July 31, 2018 06:06 PM

July 26, 2018

Christian Schaller: Some thoughts on smart home technology

(Christian Schaller)

A couple of Months ago we visited IKEA and saw the IKEA Trådfri smart lighting system. Since it was relatively cheap we decided to buy their starter pack and enough bulbs to make the recessed lights in our living rooms smartlight. I got it up and running and had some fun switching the light hue and turning the lights on and off from my phone.
A bit later I got a Google Assistant speaker at Google I/O and the system suddenly became somewhat more useful as I could control the lights by calling out to the google assistant. I was also able to connect our AC system thermostats to the google assistant so we could change the room temperature using voice commands.
As a result of this I ended up reading up on smart home technologies and found that my IKEA hub and bulbs was conforming to the ZigBee standard and that I should be able to buy further Zigbee compatible devices from other vendors to extend it. So I ordered a Zigbee compatible in-wall light switch from GE and I also ordered a Zigbee compatible in-ceiling switch from a company called Nue through Amazon. Once I got these home and tried to get them up and running I found that my understanding was flawed as there are two Zigbee standards, the older Zigbee HA and the newer Zigbee LightLink. The IKEA stuff is Zigbee LightLink while at least the GE switch was Zigbee HA and thus the IKEA hub could not control it. So I ended up ordering a Samsung SmartThings hub which supports Zigbee HA, Zigbee LL and the competing system called Z-Wave. At which point I got both my IKEA lights and my two devices working with it.
In the meantime I had also gotten myself a Google Assistant compatible portable aircon from FrigidAire for my home office (no part of main house) and a Google Assistant compatible fire alarm from Nest for the same office space.

So having lived in my new smarter home for a while now what are my conclusions? Well first of all that we still have some way to go before this is truly seamless and obvious. I consider myself a fairly technical person, but it still took quite a bit of googling for me to be able to get everything working. Secondly a lot of the smart home stuff feels a bit gimmicky in the end. For instance the Frigidaire portable air conditioner integration with the Google Assistant is more annoying than useful. It basically requires me to start by asking to talk to Frigidaire and then listen to a ton of crap before I can even start trying to do anything. As for the lights we do actually turn them on and off quite a bit using the voice commands (at least I try to until I realize my wife has disconnected the google assistant in order to use its cable to charge her phone :). I also realized that while installing and buying the in-wall switches is a bit more costly and complicated than just getting some smart bulbs it does work a lot better as I can then control the lights using both voice and switch. Because the smart bulbs can not be turned on using voice if you have a normal switch turned off (obviously, but not something I thought about before buying). So while getting something like the IKEA bulbs is a nice and cheap way to try this stuff out, I don’t see it as our long term solution here. The thermostat we haven’t controlled or queried once since I did the initial testing of its connection to Google assistant.

All in all I have to say that the smart home tech is cute, but it is far from being essential. I might end up putting in more wall switches for the light going forward, but apart from that, having smart home support in a device is not going to drive my purchasing decisions. Maybe as the tech matures and becomes more mainstream it will also become more useful, but as it stands it mostly solves first world problems (although of course there are real gains here for various accessibility situations).

by uraeus at July 26, 2018 07:40 PM

July 20, 2018

GStreamerGStreamer 1.14.2 stable bug fix release

(GStreamer)

The GStreamer team is pleased to announce the second bug fix release in the stable 1.14 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.14.x.

See /releases/1.14/ for the details.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Download tarballs directly here: gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-sharp, gstreamer-vaapi, or gst-omx.

July 20, 2018 03:00 PM

July 18, 2018

Christian Schaller: An update from Fedora Workstation land

(Christian Schaller)

Battery life
I was very happy to see that Fedora Workstation 28 in the Phoronix benchmark got the best power consumption on a Dell XPS 13. Improving battery life has been a priority for us and Hans de Goede has been doing some incredible work so far. And best of all; more is to come :). So if you want great battery life with Linux on your laptop then be sure to be running Fedora on your laptop! On that note and to state the obvious, be aware that Fedora Workstation adoption rates are actually a major metric for us to decide where to put our efforts, so if we see good growth in Fedora due to people enjoying the improved battery life it enables us to keep investing in improving battery life; if we don’t see the growth we will need to conclude people don’t care that much and move our investment elsewhere.

Desktop remoting under Wayland
The team is also making great strides with desktop remoting under Wayland. In Fedora Workstation 29 we will have the VNC based GNOME Shell integrated desktop sharing working under Wayland thanks to the work done by Jonas Ådahl. It relies on PipeWire to share your Wayland session over VNC.
On a similar note Jan Grulich, Tomas Popela and Eike Rathke have been working on enabling Wayland desktop sharing through Firefox and Chromium. They are reporting good progress and actually did a video call between Firefox and Chromium last week, sharing their desktops with each other. This is enabled by providing a PipeWire backend for both Firefox and Chromium. They are now working on cleaning up their patches and preparing them for submission upstream. We are also looking at providing a patched Firefox in Fedora Workstation 28 supporting this.

PipeWire
Wim Taymans talked about and demonstrated the latest improvements to PipeWire during GUADEC last week. He now has a libpulse.so drop-in replacement that allows applications like Totem and Rhythmbox to play audio through PipeWire using the PulseAudio GStreamer plugin. Wim also keeps improving the Jack support in PipeWire by testing Jack applications one by one and fixing corner cases as he discovers them or they are reported by the Linux pro-audio community. We also ended up ordering Wim a Sony HT-Z9F soundbar for testing as we want to ensure PipeWire has great support for passthrough, be that SPDIF, HDMI or Bluetooth. The HT-Z9F also supports LDAC audio which is a new high quality audio format for Bluetooth and we want PipeWire to have full support for it.
To accelerate PipeWire development and adoption for audio we also decided to try to organize a PipeWire and Linux Audio hackfest this fall, with the goal of mapping our remaining issues and trying to bring the wider Linux audio community together. So I am very happy that Arun Raghavan of PulseAudio fame agreed to be one of the co-organizers of this hackfest. Anyone interested in attending the PipeWire 2018 hackfest can either add themselves to the attendee list or contact me (contact information can be found through the hackfest page) and I will be happy to add you. The primary goal is to have developers from the PulseAudio and JACK communities attend alongside Wim Taymans and Bastien Nocera so we can make sure we got everything we need on the development roadmap and try to ensure we all pull in the same direction.

GNOME Builder
Christian Hergert did an update during GUADEC this year on GNOME Builder. As usual a ton of interesting stuff happening including new support for developing towards embedded devices like the upcoming Purism phone. Christian in his talk mentioned how Builder is probably the worlds first ‘Container Native IDE’ where it both is being developed with being packaged as a Flatpak in mind, but also developed with the aim of creating Flatpaks as its primary output. So a lot of effort is being put into both making sure it works well being inside a container itself, but also got all the bells and whistles for creating containers from your code. Another worthwhile point to mention is that Builder is also one of the best IDEs for doing Rust development in general!

Game mode in Fedora
Feral Interactive, one of the leading Linux game companies, released a tool they call gamemode for Linux not long ago. Since we want gamers to be first class citizens in Fedora Workstation we ended up going back and forth internally a bit about what to do about it, basically discussing if there was another way to resolve the problem even more seamlessly than gamemode. In the end we concluded that while the ideal solution would be to have the default CPU governor be able to deal with games better, we also realized that the technical challenge games posed to the CPU governor, by having a very uneven workload, is hard to resolve automatically and not something we have the resources currently to take a deep dive into. So in the end we decided that just packaging gamemode was the most reasonable way forward. So the package is lined up for the next batch update in Fedora 28 so you should soon be able to install it and for Fedora Workstation 29 we are looking at including it as part of the default install.

by uraeus at July 18, 2018 03:23 PM

June 26, 2018

Gustavo Orrillo: Experiments with Flutter and Dart

Some time ago I heard about the Flutter SDK for cross-platform mobile development. Having written some apps as part of my research that needed to be available for both iOS and Android, and feeling sometimes frustrated by the fact that I couldn’t run Processing sketches on iPhones, I’ve been interested in this kind of cross-platform SDK. In fact, I used Kivy in the past to create an app for prognosis of Ebola patients. I liked that Kivy is based on Python and had a simple UI toolkit. However, it was not clear to me if Kivy would allow me to create native UIs on iOS and Android, and implementing the Processing API as a Python library that could be used in Kivy seemed difficult at the time (although, with the new P5 library from Abhik Pal, this may be easier now). More recently, I revisited the prognosis app and created a new version which incorporates a more sophisticated visualization of prognosis predictions, as well as patient care and management recommendations. The Android version is here, and the iOS version here. In this case, I ended up creating separate projects for each version, one with Android Studio and the Android SDK from Google, and the other with XCode and the iOS SDK from Apple (and consequently, the development time doubled).

(To be fair, you can use the Processing iCompiler from Frogg to write and run Processing sketches directly on your phone, but that’s a different use scenario)

In any case, I was also aware of a number of other existing SDKs/frameworks for cross-platform mobile development: Xamarin, React Native, Cordova, PhoneGap… but never got enough time to explore at least a few of them in some detail. React Native seems very popular nowadays and it is JavaScript-based (there is even a p5-wrapper that one can use to embed p5.js sketches into mobile apps). Coming more from a Java background, and being quite involved in the Processing for Android project, I was looking for something closer to these programming languages/environments.

Flutter is based on a relatively new language called Dart which, according to what I found online, is easy to learn for people with experience in Java. So, I decided to give it a try and see how hard it would be to write Processing-like sketches with the Dart language together with the Flutter SDK.

I had to get used to Flutter’s widget and layout system first, but eventually got to understand the basics of it (the online documentation is pretty good) and to access lower-level rendering functions in Flutter. The CustomPainter class was key to achieve this, as it allows you to paint pretty much anything you want on the canvas of a widget:

class PPainter extends ChangeNotifier implements CustomPainter {
  @override
  void paint(Canvas canvas, Size size) {
    draw(); 
  }

  void draw() {
    // draw anything you want...
  }
}

Next, I had to figure out how to animate the drawing, and handle touch events. Luckily, Flutter includes several built-in classes to generate UI animations, and I could use them to trigger the drawing in the custom painter on a continuous loop. Finally, some online code gave me enough hints to implement basic input handling:

class PWidget extends StatelessWidget {
  PPainter painter;

  PWidget(this.painter);

  @override
  Widget build(BuildContext context) {
    return new Container(
      child: new ClipRect(
          child: new CustomPaint(
            painter: painter,
            child: new GestureDetector(
              onTapDown: (details) {
                painter.onTapDown(context, details);
              },
              onTapUp: (details) {
                painter.onTapUp(context, details);
              },
            ),
          )
      ),
    );
  }
}
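
Besides gestures, the drawing also needs to be redrawn continuously. The following is a minimal sketch of one way to do that, not the actual p5 package code: an AnimationController set to repeat drives repaints of the painter. Because PPainter extends ChangeNotifier and CustomPaint listens to its painter, notifying the painter's listeners is enough to trigger a repaint; the PAnimator widget and the direct call to notifyListeners() are my own assumptions here.

// A sketch only; assumes the usual package:flutter/material.dart import,
// like the snippets above, plus the PPainter and PWidget classes.
class PAnimator extends StatefulWidget {
  final PPainter painter;
  PAnimator(this.painter);

  @override
  _PAnimatorState createState() => new _PAnimatorState();
}

class _PAnimatorState extends State<PAnimator>
    with SingleTickerProviderStateMixin {
  AnimationController controller;

  @override
  void initState() {
    super.initState();
    // Repeat forever; every tick asks the painter to repaint.  A real
    // implementation would wrap notifyListeners() in a public redraw()
    // helper, since notifyListeners() is meant to be protected.
    controller = new AnimationController(
        vsync: this, duration: const Duration(seconds: 1))
      ..addListener(() => widget.painter.notifyListeners())
      ..repeat();
  }

  @override
  void dispose() {
    controller.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) => new PWidget(widget.painter);
}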

With these basic elements in place, I was able to start implementing a few functions from the Processing API in order to create a simple proof-of-concept Dart package that encapsulates these functions and can be imported into a Flutter app. In the end, a Dart sketch in Flutter looks surprisingly similar to a Java Processing sketch:

class MySketch extends PPainter {
  var strokes = new List<List<PVector>>();

  void setup() {
    fullScreen();
  }

  void draw() {
    background(color(255, 255, 255));

    noFill();
    strokeWeight(10);
    stroke(color(10, 40, 200, 60));
    for (var stroke in strokes) {
      beginShape();
      for (var p in stroke) {
        vertex(p.x, p.y);
      }
      endShape();
    }
  }

  void mousePressed() {
    strokes.add([new PVector(mouseX, mouseY)]);
  }

  void mouseDragged() {
    var stroke = strokes.last;
    stroke.add(new PVector(mouseX, mouseY));
  }
}

This sketch can be embedded into a minimal UI layout in Flutter, if the purpose is just to show the sketch’s output with no other UI components. This is how the sketch above looks when running on an iPhone and an Android device, side by side:

Running a simple Processing sketch on iOS and Android

So far, the experiment has proved to be quite successful! I published the p5 package on the Dart Pub repository, so it can be easily imported into any Flutter app. Currently, it only includes a handful of the Processing API functions, so it is not that useful yet, but a promising starting point nonetheless!

And in the end, I was able to write a Processing sketch and run it on an iPhone and an Android device from Android Studio, which was pretty satisfying:

Using Android Studio to debug on an iPhone

by Andres Colubri at June 26, 2018 11:00 PM

June 22, 2018

Bastien NoceraThomson 8-bit computers, a history

(Bastien Nocera) In March 1986, my dad was in the market for a Thomson TO7/70. I have the circled classified ads in “Téo” issue 1 to prove that :)



TO7/70 with its chiclet keyboard and optical pen, courtesy of MO5.com

The “Plan Informatique pour Tous” was in full swing, and Thomson were supplying schools with micro-computers. My dad, as a primary school teacher, needed to know how to operate those computers, and eventually teach them to kids.

The first thing he showed us when he got the computer, on the living room TV, was a game called “Panic” or “Panique” where you controlled a missile, protecting a town from flying saucers that flew across the screen from either side, faster and faster as the game went on. I still haven't been able to locate this game again.

A couple of years later, the TO7/70 was replaced by a TO9, with a floppy disk, and my dad used that computer to write educational software about top-down additions, as part of a training program run by the teachers' schools (“Écoles Normales”, renamed to “IUFM” in 1990).

After months of nagging, and some spring cleaning, he found the listings of his educational software, which I've liberated, with his permission. I'm currently still working out how to generate floppy disks that are usable directly in emulators. But here's an early screenshot.


Later on, my dad got an IBM PC compatible, an Olivetti PC/1, on which I'd play a clone of Asteroids for hours, but that's another story. The TO9 got passed down to me, and after spending a full summer doing planning for my hot-dog and chips van business (I was 10 or 11, and I had weird hobbies already), and entering every game from the “102 Programmes pour...” series of books, the TO9 got put to the side at Christmas, replaced by a Sega Master System, using that same handy SCART connector on the Thomson monitor.

But how does this concern you? Well, I've worked with RetroManCave on a Minitel episode not too long ago, and he agreed to do a history of the Thomson micro-computers. I did a fair bit of the research and fact-checking, as well as some needed repairs to the (prototype!) hardware I managed to find for the occasion. The result is this first look at the history of Thomson.



Finally, if you fancy diving into the Thomson computers, there will be an episode coming shortly about the MO5E hardware, and some games worth running on it, on the same YouTube channel.

I'm currently working on bringing the TeoTO8D emulator to Flathub, for Linux users. When that's ready, grab some games from the DCMOTO archival site, and have some fun!

I'll also be posting some nitty-gritty details about Thomson repairs on my Micro Repairs Twitter feed for the more technically inclined among you.

by Bastien Nocera (noreply@blogger.com) at June 22, 2018 04:06 PM

June 12, 2018

Bastien NoceraFingerprint reader support, the second coming

(Bastien Nocera) Fingerprint readers are more and more common on Windows laptops, and hardware makers would really like to not have to make a separate SKU without the fingerprint reader just for Linux, if that fingerprint reader is unsupported there.

The original makers of those fingerprint readers just need to send patches to the libfprint Bugzilla, I hear you say, and the problem's solved!

But it turns out it's pretty difficult to write those new drivers, and those patches, without an insight on how the internals of libfprint work, and what all those internal, undocumented APIs mean.

Most of the drivers already present in libfprint are the results of reverse engineering, which means that none of them is a best-of-breed example of a driver, with all the unknown values and magic numbers.

Let's try to fix all this!

Step 1: fail faster

When you're writing a driver, the last thing you want is to have to wait for your compilation to fail. We ported libfprint to meson and shaved off a significant amount of time from a successful compilation. We also reduced the number of places where new drivers need to be declared to be added to the compilation.

Step 2: make it clearer

While doxygen is nice because it requires very little scaffolding to generate API documentation, the output is also not up to the level we expect. We ported the documentation to gtk-doc, which has a more readable page layout, easy support for cross-references, and gives us more control over how introductory paragraphs are laid out. See the before and after for yourselves.

Step 3: fail elsewhere

You created your patch locally, tested it out, and it's ready to go! But you don't know about git-bz, and you ended up attaching a patch file which you uploaded. Except you uploaded the wrong patch. Or the patch with the right name but from the wrong directory. Or you know git-bz but used the wrong commit id and uploaded another unrelated patch. This is all a bit too much.

We migrated our bugs and repository for both libfprint and fprintd to Freedesktop.org's GitLab. Merge Requests are automatically built, discussions are easier to follow!

Step 4: show it to me

Now that we have spiffy documentation, and unified bugs, patches and sources under one roof, we need to modernise our website. We used GitLab's CI/CD integration to generate our website from sources, including creating API documentation and listing supported devices from git master, to reduce the need to search the sources for that information.

Step 5: simplify

This process has started, but isn't finished yet. We're slowly splitting up the internal API between "internal internal" (what the library uses to work internally) and "internal for drivers" which we eventually hope to document to make writing drivers easier. This is partially done, but will need a lot more work in the coming months.

TL;DR: We migrated libfprint to meson, gtk-doc, GitLab, added a CI, and are writing docs for driver authors, everything's on the website!

by Bastien Nocera (noreply@blogger.com) at June 12, 2018 07:00 PM

GStreamerGStreamer Conference 2018 Announced

(GStreamer)

The GStreamer project is happy to announce that this year's GStreamer Conference will take place on Thursday-Friday 25-26 October 2018 in Edinburgh, Scotland.

You can find more details about the conference on the GStreamer Conference 2018 web site.

A call for papers will be sent out shortly. Registration will open at a later time. We will announce those and any further updates on the gstreamer-announce mailing list, the website, and on Twitter.

Talk slots will be available in varying durations from 20 minutes up to 45 minutes. Whatever you're doing or planning to do with GStreamer, we'd like to hear from you!

We also plan to have sessions with short lightning talks / demos / showcase talks for those who just want to show what they've been working on or do a mini-talk instead of a full-length talk. Lightning talk slots will be allocated on a first-come-first-serve basis, so make sure to reserve your slot if you plan on giving a lightning talk.

There will also be a social event again on Thursday evening.

There are also plans to have a hackfest the weekend right after the conference.

We hope to see you in Edinburgh!

June 12, 2018 12:00 PM

June 07, 2018

Christian Schaller3rd Party Software in Fedora Workstation

(Christian Schaller)

So you have probably noticed by now that we started offering some 3rd party software in the latest Fedora Workstation, namely Google Chrome, Steam, the NVidia driver and PyCharm. This has come about due to a long discussion in the Fedora community on how we position Fedora Workstation and how we can improve our user experience. The principles we base this policy on can be read in this policy document. To sum it up though, the idea is that while the Fedora operating system you install will continue, as it has for the last decade, to be based only on free software (with an exception for firmware), you will be able to more easily find and install the plethora of applications out there through our software store application, GNOME Software. We also expect that as the world of Linux software moves towards containers in general and Flatpaks specifically, we will have an increasing number of these 3rd party applications available in Fedora.

So the question I know some of you will have is, what does one actually have to do in order to get a 3rd party application listed in Fedora Workstation? Well, wonder no longer, as we have now put up a few documents outlining the steps you will need to take. Compared to traditional Linux packaging, the major difference is in the requirements around improved metadata for your application, so we are covering that aspect in special detail. These documents cover both RPMs and Flatpaks.

First of all you can get a general overview from our 3rd Party guidelines document. This document also explains how you submit a request to the Fedora Workstation Working group for your application to be added.

Then, if you want to dig into the details of what metadata you need to create for your application, there is the in-depth metadata tutorial here, and finally, once you are ready to set up your repository, there is a tutorial explaining how you ensure your repository is able to provide the metadata you created above.

We expect to add more and more applications to Fedora Workstation over time here, and I would especially recommend that you look into Flatpaking your 3rd party application as it will decouple your application from the host operating system and thus decrease the workload on you maintaining your application for use in Fedora Workstation (and elsewhere).

by uraeus at June 07, 2018 01:22 PM

May 29, 2018

Christian SchallerAdding support for the Dell Canvas and Totem

(Christian Schaller)

I am very happy to see that Benjamin Tissoires' work to enable the Dell Canvas and Totem has started to land in the upstream kernel. This work is the result of a collaboration between ourselves at Red Hat and Dell to bring this exciting device to Linux users.

Dell Canvas 27

The Dell Canvas and Totem are essentially a graphics tablet with a stylus, plus a turnable knob that can be placed onto the tablet. Dell feature some videos on their site showcasing the Dell Canvas being used in areas such as drawing, video editing and CAD.

So for Linux applications that already support graphics drawing tablets, the Canvas should work once this lands, but where we hope to see application developers step up is in adding support for the totem in their applications. I have been pondering how we could help make that happen, as we would be happy to donate a Dell Canvas to help kickstart application support; I am just unsure about the best way to go ahead.
I was considering offering one as a prize for the first application to add support for the totem, but that seems to be a chicken and egg problem by definition. If anyone has suggestions for how to get one of these into the hands of the developer most interested and able to take advantage of it, please let me know.

by uraeus at May 29, 2018 01:28 PM

May 21, 2018

Andy Wingocorrect or inotify: pick one

(Andy Wingo)

Let's say you decide that you'd like to see what some other processes on your system are doing to a subtree of the file system. You don't want to have to change how those processes work -- you just want to see what files those processes create and delete.

One approach would be to just scan the file-system tree periodically, enumerating its contents. But when the file system tree is large and the change rate is low, that's not an optimal thing to do.

Fortunately, Linux provides an API to allow a process to receive notifications on file-system change events, called inotify. So you open up the inotify(7) manual page, and are greeted with this:

With careful programming, an application can use inotify to efficiently monitor and cache the state of a set of filesystem objects. However, robust applications should allow for the fact that bugs in the monitoring logic or races of the kind described below may leave the cache inconsistent with the filesystem state. It is probably wise to do some consistency checking, and rebuild the cache when inconsistencies are detected.

It's not exactly reassuring, is it? I mean, "you had one job" and all.

Reading down a bit farther, I thought that with some "careful programming", I could get by. After a day of trying, I am now certain that it is impossible to build a correct recursive directory monitor with inotify, and I am not even sure that "good enough" solutions exist.

pitfall the first: buffer overflow

Fundamentally, inotify races the monitoring process with all other processes on the system. Events are delivered to the monitoring process via a fixed-size buffer that can overflow, and the monitoring process provides no back-pressure on the system's rate of filesystem modifications. With inotify, you have to be ready to lose events.

This I think is probably the easiest limitation to work around. The kernel can let you know when the buffer overflows, and you can tweak the buffer size. Still, it's a first indication that perfect is not possible.
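
For illustration, here is a minimal sketch (not from the article) of an inotify read loop that detects the overflow condition; the kernel reports it as an IN_Q_OVERFLOW event, and the queue length can be tuned via /proc/sys/fs/inotify/max_queued_events. On overflow, the only safe recovery is to rescan the watched tree.

#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <directory>\n", argv[0]);
        return 1;
    }

    int fd = inotify_init1(IN_CLOEXEC);
    if (fd < 0 || inotify_add_watch(fd, argv[1], IN_CREATE | IN_DELETE) < 0) {
        perror("inotify");
        return 1;
    }

    /* Buffer aligned for struct inotify_event; events are variable-length. */
    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);
        if (len <= 0)
            break;
        for (char *p = buf; p < buf + len;) {
            struct inotify_event *ev = (struct inotify_event *) p;
            if (ev->mask & IN_Q_OVERFLOW)
                printf("queue overflowed: events were lost, rescan the tree\n");
            else if (ev->len > 0)
                printf("event 0x%x on %s\n", ev->mask, ev->name);
            p += sizeof(struct inotify_event) + ev->len;
        }
    }
    close(fd);
    return 0;
}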

pitfall the second: now you see it, now you don't

This one is the real kicker. Say you get an event that says that a file "frenemies.txt" has been created in the directory "/contacts/". You go to open the file -- but is it still there? By the time you get around to looking for it, it could have been deleted, or renamed, or maybe even created again or replaced! This is a TOCTTOU race, built-in to the inotify API. It is literally impossible to use inotify without this class of error.

The canonical solution to this kind of issue in the kernel is to use file descriptors instead. Instead of or possibly in addition to getting a name with the file change event, you get a descriptor to a (possibly-unlinked) open file, which you would then be responsible for closing. But that's not what inotify does. Oh well!

pitfall the third: race conditions between inotify instances

When you inotify a directory, you get change notifications for just that directory. If you want to get change notifications for subdirectories, you need to open more inotify instances and poll on them all. However now you have N² problems: as poll and the like return an unordered set of readable file descriptors, each with their own ordering, you no longer have access to a linear order in which changes occurred.

It is impossible to build a recursive directory watcher that definitively says "ok, first /contacts/frenemies.txt was created, then /contacts was renamed to /peeps, ..." because you have no ordering between the different watches. You don't know that there was ever even a time that /contacts/frenemies.txt was an accessible file name; it could have been only ever openable as /peeps/frenemies.txt.

Of course, this is the most basic ordering problem. If you are building a monitoring tool that actually wants to open files -- good luck bubster! It literally cannot be correct. (It might work well enough, of course.)

reflections

As far as I am aware, inotify came out to address the needs of desktop search tools like the belated Beagle (11/10 good pupper just trying to get his pup on). Especially in the days of spinning metal, grovelling over the whole hard-drive was a real non-starter, especially if the search database should be up-to-date.

But after looking into inotify, I start to see why someone at Google said that desktop search was in some ways harder than web search -- I mean we all struggle to find files on our own machines, even now, 15 years after the whole dnotify/inotify thing started. Part of it is that, given the choice between supporting reliable, fool-proof file system indexes on the one hand, and overclocking the IOPS benchmarks on the other, the kernel gave us inotify. I understand it, but inotify still sucks.

I dunno about you all but whenever I've had to document such an egregious uncorrectable failure mode as any of the ones in the inotify manual, I have rewritten the software instead. In that spirit, I hope that some day we shall send inotify to the pet cemetery, to rest in peace beside Beagle.

by Andy Wingo at May 21, 2018 02:29 PM

May 17, 2018

GStreamerGStreamer 1.14.1 stable bug fix release

(GStreamer)

The GStreamer team is pleased to announce the first bug fix release in the stable 1.14 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.14.0.

See /releases/1.14/ for the details.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Download tarballs directly here: gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

May 17, 2018 05:00 PM

May 16, 2018

Andy Wingolightweight concurrency in lua

(Andy Wingo)

Hello, all! Today I'd like to share some work I have done recently as part of the Snabb user-space networking toolkit. Snabb is mainly about high-performance packet processing, but it also needs to communicate with management-oriented parts of network infrastructure. These communication needs are handled by a dedicated manager process, but that process has many things to do, and can't afford to perform blocking operations.

Snabb is written in Lua, which doesn't have built-in facilities for concurrency. What we'd like is to have fibers. Fortunately, Lua's coroutines are powerful enough to implement fibers. Let's do that!

fibers in lua

First we need a scheduling facility. Here's the smallest possible scheduler: simply a queue of tasks and a function to run those tasks.

local task_queue = {}

function schedule_task(thunk)
   table.insert(task_queue, thunk)
end

function run_tasks()
   local queue = task_queue
   task_queue = {}
   for _,thunk in ipairs(queue) do thunk() end
end

For our purposes, a task is just a function that will be called with no arguments.

Now let's build fibers. This is easier than you might think!

local current_fiber = false

function spawn_fiber(fn)
   local fiber = coroutine.create(fn)
   schedule_task(function () resume_fiber(fiber) end)
end

function resume_fiber(fiber, ...)
   current_fiber = fiber
   local ok, err = coroutine.resume(fiber, ...)
   current_fiber = nil
   if not ok then
      print('Error while running fiber: '..tostring(err))
   end
end

function suspend_current_fiber(block, ...)
   -- The block function should arrange to reschedule
   -- the fiber when it becomes runnable.
   block(current_fiber, ...)
   return coroutine.yield()
end

Here, a fiber is simply a coroutine underneath. Suspending a fiber suspends the coroutine. Resuming a fiber runs the coroutine. If you're unfamiliar with coroutines, or coroutines in Lua, maybe have a look at the lua-users wiki page on the topic.

The difference between a fibers facility and just coroutines is that with fibers, you have a scheduler as well. Very much like Scheme's call-with-prompt, coroutines are one of those powerful language building blocks that should rarely be used directly; concurrent programming needs more structure than what Lua offers.

If you're following along, it's probably worth it here to think how you would implement yield based on these functions. A yield implementation should yield control to the scheduler, and resume the fiber on the next scheduler turn. The answer is here.
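
For reference, here is one possible sketch of it in terms of the functions above (my own reading, not necessarily the linked answer): the block function simply schedules a task that resumes the fiber again, so control returns to the scheduler and the fiber continues on the next turn.

-- A sketch of yield built from the primitives above: block by scheduling
-- this fiber's resumption, then suspend until the scheduler runs that task.
function yield_fiber()
   suspend_current_fiber(function (fiber)
      schedule_task(function () resume_fiber(fiber) end)
   end)
end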

communication

Once you have fibers and a scheduler, you have concurrency, which means that if you're not careful, you have a mess. Here I think the Go language got the essence of the idea exactly right: Do not communicate by sharing memory; instead, share memory by communicating.

Even though Lua doesn't support multiple machine threads running concurrently, concurrency between fibers can still be fraught with bugs. Tony Hoare's Communicating Sequential Processes showed that we can avoid a class of these bugs by treating communication as a first-class concept.

Happily, the Concurrent ML project showed that it's possible to build these first-class communication facilities as a library, provided the language you are working in has threads of some kind, and fibers are enough. Last year I built a Concurrent ML library for Guile Scheme, and when in Snabb we had a similar need, I ported that code over to Lua. As it's a new take on the problem in a different language, I think I've been able to simplify things even more.

So let's take a crack at implementing Concurrent ML in Lua. In CML, the fundamental primitive for communication is the operation. An operation represents the potential for communication. For example, if you have a channel, it would have methods to return "get operations" and "put operations" on that channel. Actually receiving or sending a message on a channel occurs by performing those operations. One operation can be performed many times, or not at all.

Compared to a system like Go, for example, there are two main advantages of CML. The first is that CML allows non-deterministic choice between a number of potential operations in a generic way. For example, you can construct an operation that, when performed, will either get on one channel or wait for a condition variable to be signalled, whichever comes first. In Go, you can only select between operations on channels.

The other interesting part of CML is that operations are built from a uniform protocol, and so users can implement new kinds of operations. Compare again to Go where all you have are channels, and nothing else.

The CML operation protocol consists of three related functions: try, which attempts to directly complete an operation in a non-blocking way; block, which is called after a fiber has suspended, and which arranges to resume the fiber when the operation completes; and wrap, which is called on the result of a successfully performed operation.

In Lua, we can call this an implementation of an operation, and create it like this:

function new_op_impl(try, block, wrap)
   return { try=try, block=block, wrap=wrap }
end

Now let's go ahead and write the guts of CML: the operation implementation. We'll represent an operation as a Lua object with two methods. The perform method will attempt to perform the operation, and return the resulting value. If the operation can complete immediately, the call to perform will return directly. Otherwise, perform will suspend the current fiber and arrange to continue only when the operation completes.

The wrap method "decorates" an operation, returning a new operation that, if and when it completes, will "wrap" the result of the completed operation with a function, by applying the function to the result. It's useful to distinguish the sub-operations of a non-deterministic choice from each other.

Here our new_op function will take an array of operation implementations and return an operation that, when performed, will synchronize on the first available operation. As you can see, it already has the equivalent of Go's select built in.

function new_op(impls)
   local op = { impls=impls }
   
   function op.perform()
      for _,impl in ipairs(impls) do
         local success, val = impl.try()
         if success then return impl.wrap(val) end
      end
      local function block(fiber)
         local suspension = new_suspension(fiber)
         for _,impl in ipairs(impls) do
            impl.block(suspension, impl.wrap)
         end
      end
      local wrap, val = suspend_current_fiber(block)
      return wrap(val)
   end

   function op.wrap(f)
      local wrapped = {}
      for _, impl in ipairs(impls) do
         local function wrap(val)
            return f(impl.wrap(val))
         end
         local impl = new_op_impl(impl.try, impl.block, wrap)
         table.insert(wrapped, impl)
      end
      return new_op(wrapped)
   end

   return op
end

There's only one thing missing there, which is new_suspension. When you go to suspend a fiber because none of the operations that it's trying to do can complete directly (i.e. all of the try functions of its impls returned false), at that point the corresponding block functions will publish the fact that the fiber is waiting. However the fiber only waits until the first operation is ready; subsequent operations becoming ready should be ignored. The suspension is the object that manages this state.

function new_suspension(fiber)
   local waiting = true
   local suspension = {}
   function suspension.waiting() return waiting end
   function suspension.complete(wrap, val)
      assert(waiting)
      waiting = false
      local function resume()
         resume_fiber(fiber, wrap, val)
      end
      schedule_task(resume)
   end
   return suspension
end

As you can see, the suspension's complete method is also the bit that actually arranges to resume a suspended fiber.

Finally, just to round out the implementation, here's a function implementing non-deterministic choice from among a number of sub-operations:

function choice(...)
   local impls = {}
   for _, op in ipairs({...}) do
      for _, impl in ipairs(op.impls) do
         table.insert(impls, impl)
      end
   end
   return new_op(impls)
end

on cml

OK, I'm sure this seems a bit abstract at this point. Let's implement something concrete in terms of these primitives: channels.

Channels expose two similar but different kinds of operations: put operations, which try to send a value, and get operations, which try to receive a value. If there's a sender already waiting to send when we go to perform a get_op, the operation continues directly, and we resume the sender; otherwise the receiver publishes its suspension to a queue. The put_op case is similar.

Finally we add some synchronous put and get convenience methods, in terms of their corresponding CML operations.

function new_channel()
   local ch = {}
   -- Queues of suspended fibers waiting to get or put values
   -- via this channel.
   local getq, putq = {}, {}

   local function default_wrap(val) return val end
   local function is_empty(q) return #q == 0 end
   local function peek_front(q) return q[1] end
   local function pop_front(q) return table.remove(q, 1) end
   local function push_back(q, x) q[#q+1] = x end

   -- Since a suspension could complete in multiple ways
   -- because of non-deterministic choice, it could be that
   -- suspensions on a channel's putq or getq are already
   -- completed.  This helper removes already-completed
   -- suspensions.
   local function remove_stale_entries(q)
      local i = 1
      while i <= #q do
         if q[i].suspension.waiting() then
            i = i + 1
         else
            table.remove(q, i)
         end
      end
   end

   -- Make an operation that if and when it completes will
   -- rendezvous with a receiver fiber to send VAL over the
   -- channel.  Result of performing operation is nil.
   function ch.put_op(val)
      local function try()
         remove_stale_entries(getq)
         if is_empty(getq) then
            return false, nil
         else
            local remote = pop_front(getq)
            remote.suspension.complete(remote.wrap, val)
            return true, nil
         end
      end
      local function block(suspension, wrap)
         remove_stale_entries(putq)
         push_back(putq, {suspension=suspension, wrap=wrap, val=val})
      end
      return new_op({new_op_impl(try, block, default_wrap)})
   end

   -- Make an operation that if and when it completes will
   -- rendezvous with a sender fiber to receive one value from
   -- the channel.  Result is the value received.
   function ch.get_op()
      local function try()
         remove_stale_entries(putq)
         if is_empty(putq) then
            return false, nil
         else
            local remote = pop_front(putq)
            remote.suspension.complete(remote.wrap)
            return true, remote.val
         end
      end
      local function block(suspension, wrap)
         remove_stale_entries(getq)
         push_back(getq, {suspension=suspension, wrap=wrap})
      end
      return new_op({new_op_impl(try, block, default_wrap)})
   end

   function ch.put(val) return ch.put_op(val).perform() end
   function ch.get()    return ch.get_op().perform()    end

   return ch
end

a wee example

You might be wondering what it's like to program with channels in Lua, so here's a little example that shows a prime sieve based on channels. It's not a great example of concurrency in that it's not an inherently concurrent problem, but it's cute to show computations in terms of infinite streams.

function prime_sieve(count)
   local function sieve(p, rx)
      local tx = new_channel()
      spawn_fiber(function ()
         while true do
            local n = rx.get()
            if n % p ~= 0 then tx.put(n) end
         end
      end)
      return tx
   end

   local function integers_from(n)
      local tx = new_channel()
      spawn_fiber(function ()
         while true do
            tx.put(n)
            n = n + 1
         end
      end)
      return tx
   end

   local function primes()
      local tx = new_channel()
      spawn_fiber(function ()
         local rx = integers_from(2)
         while true do
            local p = rx.get()
            tx.put(p)
            rx = sieve(p, rx)
         end
      end)
      return tx
   end

   local done = false
   spawn_fiber(function()
      local rx = primes()
      for i=1,count do print(rx.get()) end
      done = true
   end)

   while not done do run_tasks() end
end

Here you also see an example of running the scheduler in the last line.

where next?

Let's put this into perspective: in a couple hundred lines of code, we've gone from minimal Lua to a language with lightweight multitasking, extensible CML-based operations, and CSP-style channels; truly a delight.

There are a number of possible ways to extend this code. One of them is to implement true multithreading, if the language you are working in supports that. In that case there are some small protocol modifications to take into account; see the notes on the Guile CML implementation and especially the Manticore Parallel CML project.

The implementation above is pleasantly small, but it could be faster with the choice of more specialized data structures. I think interested readers probably see a number of opportunities there.

In a library, you might want to avoid the global task_queue and implement nested or multiple independent schedulers, and of course in a parallel situation you'll want core-local schedulers as well.

The implementation above has no notion of time. What we did in the Snabb implementation of fibers was to implement a timer wheel, inspired by Juho Snellman's Ratas, and then add that timer wheel as a task source to Snabb's scheduler. In Snabb, every time the equivalent of run_tasks() is called, a scheduler asks its sources to schedule additional tasks. The timer wheel implementation schedules expired timers. It's straightforward to build CML timeout operations in terms of timers.
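
As an illustration, a timeout operation might look something like the sketch below in terms of the primitives from earlier; now() and schedule_task_at() are hypothetical stand-ins for a clock and a timer-wheel task source, not the actual Snabb API.

-- Sketch of a CML sleep/timeout operation, assuming hypothetical now()
-- and schedule_task_at(time, thunk) helpers backed by a timer wheel.
function sleep_op(seconds)
   local deadline = now() + seconds
   local function default_wrap(val) return val end
   local function try()
      return now() >= deadline, nil
   end
   local function block(suspension, wrap)
      schedule_task_at(deadline, function ()
         if suspension.waiting() then
            suspension.complete(wrap, nil)
         end
      end)
   end
   return new_op({new_op_impl(try, block, default_wrap)})
end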

Additionally, your system probably has other external sources of communication, such as sockets. The trick to integrating sockets into fibers is to suspend the current fiber whenever an operation on a file descriptor would block, and arrange to resume it when the operation can proceed. Here's the implementation in Snabb.

The only difficult bit with getting nice nonblocking socket support is that you need to be able to suspend the calling thread when you see the EWOULDBLOCK condition, and for coroutines that is often only possible if you implemented the buffered I/O yourself. In Snabb that's what we did: we implemented a compatible replacement for Lua's built-in streams, in Lua. That lets us handle EWOULDBLOCK conditions in a flexible manner. Integrating epoll as a task source also lets us sleep when there are no runnable tasks.

Likewise in the Snabb context, we are also working on a TCP implementation. In that case you want to structure TCP endpoints as fibers, and arrange to suspend and resume them as appropriate, while also allowing timeouts. I think the scheduler and CML patterns are going to allow us to do that without much trouble. (Of course, the TCP implementation will give us lots of trouble!)

Additionally your system might want to communicate with fibers from other threads. It's entirely possible to implement CML on top of pthreads, and it's entirely possible as well to support communication between pthreads and fibers. If this is interesting to you, see Guile's implementation.

When I talked about fibers in an earlier article, I built them in terms of delimited continuations. Delimited continuations are fun and more expressive than coroutines, but it turns out that for fibers, all you need is the expressive power of coroutines -- multi-shot continuations aren't useful. Also I think the presentation might be more straightforward. So if all your language has is coroutines, that's still good enough.

There are many more kinds of standard CML operations; implementing those is also another next step. In particular, I have found semaphores and condition variables to be quite useful. Also, standard CML supports "guards", invoked when an operation is performed, and "nacks", invoked when an operation is definitively not performed because a choice selected some other operation. These can be layered on top; see the Parallel CML paper for notes on "primitive CML".

Also, the choice operator above is left-biased: it will prefer earlier impls over later ones. You might want to not always start with the first impl in the list.

The scheduler shown above is the simplest thing I could come up with. You may want to experiment with other scheduling algorithms, e.g. capability-based scheduling, or kill-safe abstractions. Do it!

Or, it could be you already have a scheduler, like some kind of main loop that's already there. Cool, you can use it directly -- all that fibers needs is some way to schedule functions to run.

godspeed

In summary, I think Concurrent ML should be better-known. Its simplicity and expressivity make it a valuable part of any concurrent system. Already in Snabb it helped us solve some longstanding gnarly issues by making the right solutions expressible.

As Adam Solove says, Concurrent ML is great, but it has a branding problem. Its ideas haven't penetrated the industrial concurrent programming world to the extent that they should. This article is another attempt to try to get the word out. Thanks to Adam for the observation that CML is really a protocol; I'm sure the concepts could be made even more clear, but at least this is a step forward.

All the code in this article is up on a gitlab snippet along with instructions for running the example program from the command line. Give it a go, and happy hacking with CML!

by Andy Wingo at May 16, 2018 03:17 PM

May 07, 2018

Gustavo OrrilloLassa fever in Nigeria: lessons learnt

Back in March of this year we reached an important milestone in the collaboration between the Sabeti Lab and the Irrua Specialist Teaching Hospital (ISTH) in Edo State, Nigeria: we published a joint paper in the journal The Lancet Infectious Diseases describing the largest and most detailed retrospective cohort study of Lassa fever patients in Nigeria, and identifying the clinical and laboratory predictors of outcome observed at ISTH for this deadly disease. This is the culmination of several years of work, starting in 2013.

At the entrance of the Lassa ward at ISTH with Christopher Iruolagbe, one of the clinicians.

First of all, I will give a brief introduction to Lassa fever. This is a member of a family of diseases called Viral Hemorrhagic Fevers (VHFs), which includes Ebola, dengue, and yellow fever among many others. These illnesses are caused by different viruses, but all have fever and hemorrhage as clinical manifestations. The 2014-2016 Ebola outbreak caused widespread concern due to its high mortality and fear of a worldwide pandemic, although its spread was largely contained to the African nations of Liberia, Sierra Leone, and Guinea. The outbreak had a large impact on the region, with almost 30,000 total cases and over 15,000 deaths. In contrast, Lassa fever is an endemic disease, with cases occurring throughout the year in West Africa, and presenting a wide range of clinical severity: most people don’t get sick enough to go to the hospital, but a small percentage become acutely ill and need medical attention. It is estimated that 300,000 people get infected with the Lassa virus every year, but less than 5% of those end up going to the hospital. The overall mortality among hospitalized cases is around 20%, lower than Ebola, but it can be much higher than that for elderly patients and pregnant women. Another difference from Ebola is the natural host of the virus: in the case of Lassa fever, it is the Mastomys rat, which enters people’s homes looking for food and spreads the virus through feces and urine, while for Ebola it is likely to be fruit bats instead. Even though the pathogens causing these diseases were discovered only in the last 50 years (Lassa fever in 1969, Ebola in 1976), genetic studies from our lab indicate that Lassa fever is an old disease, with the virus spreading out of Nigeria at least 400 years ago. A similar picture appears for Ebola, leading to the question of whether we are in the presence of emerging diseases or emerging diagnoses.

TEM micrograph of Lassa virus virions.

Back in 2013, I had just begun developing the visualization tool Mirador at Fathom Information Design, and was looking for “real world” datasets to apply Mirador to. Around that time, Dr Peter Okokhere, the head of the Lassa fever ward at ISTH, was visiting the Sabeti Lab, and had compiled the records of all the patients treated in the ward since 2011 into an anonymized Excel spreadsheet. Mirador was designed precisely to handle that kind of tabular data, so it seemed to us that it should be straightforward to use Mirador to carry out exploratory analysis of Dr Okokhere’s dataset. In particular, we were interested in finding correlations between the different demographic, clinical, and laboratory variables collected for all patients and their outcome (death or survival). A subsequent step was to apply Machine Learning in order to train prognostic models that could eventually be helpful for clinicians, for example to triage patients upon admission to the ward depending on their death risk (so that time and material resources would be prioritized for high-risk patients). The models’ predictions could also identify the clinical features most strongly associated with the risk, and hence inform medical judgment.

Mirador displaying ISTH data.

However, the work on the ISTH clinical data had to be put on hold as the lab shifted its focus and resources to the Ebola outbreak during the next two years. Some of the initial modeling approaches we were developing for the ISTH dataset were applied to similar datasets from Ebola patients, and this work led to some publications (here and here), where we investigated the possibility of deploying these prognostic models to the clinic in the form of medical apps for patient triage, care, and management. By late 2015, the Ebola outbreak was beginning to subside, and we were able to come back to our research on Lassa. In the meantime, Dr Okokhere had incorporated patients treated in 2014 and 2015, which enlarged the dataset to nearly 300 patients. At that time, I started to realize that I was originally too enthusiastic about the applicability of prognostic models to inform clinical decisions. As I learned after delving deeper into the topic, such models need to be trained on much larger cohorts and then validated on independently-obtained datasets, so that they can be generalized to new patients. Widely used prognostic scores such as APACHE II or the Framingham Risk Score took thousands of patients and many years to be developed. As unique and detailed as the ISTH dataset is, it cannot support the creation of a Lassa fever prognostic score yet. Thanks to Dr Okokhere’s efforts to digitize the paper medical records of the patients treated in the ward, we were able to have some data, but unfortunately there are no similar datasets available for analysis and validation. Only one other study, published in 1987, included more than 300 confirmed cases of Lassa fever and detailed demographic, clinical, and laboratory data from patients in Sierra Leone between 1977 and 1979.

Regardless of the current limitations, the ISTH dataset allowed us to provide a significant update on the clinical knowledge of Lassa fever in Nigeria, and to find the most important manifestations of the disease among the patients treated at ISTH.

Map of cases treated at ISTH between 2011 and 2015, clustered by geographical location and shaded by case fatality rate.

The predictive models also proved to be very useful, not as prognostic scores, but to test the independence between various biomarkers that characterize the pathophysiology of the disease. This, in addition to other complementary results and previous experience from the clinicians at ISTH, led us to formulate hypotheses of medical relevance (e.g.: Lassa virus may damage the kidney cells in some of the patients), which we need to explore further.

Predictive performance of the logistic regression model trained on the ISTH data. The horizontal axis shows the mortality risk threshold used to predict death vs survival, and the right vertical axis the corresponding sensitivity and specificity of the model. The bars indicate the actual number of patients and mortality within each risk bin.

However, one of the requirements to move forward with this research, and also to improve patient care, is to build better on-site capacity for data collection and management. This not only impacts the handling of the patients’ medical records, but also the laboratory samples that are used for diagnosis, clinical decisions, and research. For this reason, we have been designing mobile apps for collecting clinical and laboratory data, using Dimagi’s CommCare platform. This platform has enabled us to translate paper-based protocols for data collection into digital forms that can work with limited internet connectivity, store the records in a HIPAA-compliant database, and generate real-time reports. These apps are currently undergoing the final stages of debugging and testing, and will soon be deployed at ISTH’s Lassa ward and research lab for daily use.

CommCare clinical app for patient management.

As part of the testing process, I had the chance to visit ISTH recently, together with my lab colleague, Dr Kayla Barnes. We worked with the clinicians and lab staff, going through all the data entry modules in the apps and making sure that they function as expected and fit the workflow of the patient and sample management protocols. Being my first time in Nigeria, and in Africa in general, this was an exceptional experience for me, and I felt incredibly welcomed by all the people we met during the visit. We even got tailor-made traditional Nigerian dresses, which we wore on the last day of our stay at ISTH, as you can see in some of the pictures below!

Photo captions: ISTH's main entrance; the Lassa ward building; staff preparing a Ribavirin shot for a patient; working at the Lassa research lab; discussing the data collection protocols; testing the CommCare lab app; wearing traditional Nigerian dresses (right to left: Eghosa Uyige, Kayla Barnes, Ikponmwosa Odia, John Aiyepada, and myself).

Some pictures from our trip to ISTH in March 2018.

As much as we enjoyed a very successful visit and the hospitality of our Nigerian hosts, Nigeria, and ISTH in particular, were still recovering from the largest recorded Lassa fever outbreak in history: more than 200 confirmed cases were received and tested at ISTH between January and February of this year, which is more than all the cases from 2016 and 2017 combined. This outbreak was covered by several news sources, and it put a lot of strain on the Lassa ward personnel as they ran out of beds and had to resort to temporary spaces to accommodate all patients. Fortunately, by the time we arrived the situation was gradually coming back to normal, as reported by the Nigerian Center for Disease Control. However, this recent event is a reminder of the threat posed by Lassa fever and other emerging infectious diseases, and of how we need efficient detection and containment mechanisms to stop outbreaks and avoid loss of human life.

Research continues at ISTH, and we hope to find insights about the recent outbreak from the viral sequences generated in Nigeria, thanks to the capacity building efforts of the African Center of Excellence for Genomics of Infectious Diseases, which our lab is part of together with several African and international partners. Among further next steps, we plan to carry out analyses of the combined genomic and clinical data that could shed light on the genetic origin of the variability in the clinical manifestation of Lassa fever, and to develop new privacy-preserving Machine Learning algorithms that would allow us to respond to an outbreak faster by sharing data and models in real-time while protecting patients’ privacy.

by Andres Colubri at May 07, 2018 05:00 PM

April 24, 2018

Christian SchallerWarming up for Fedora Workstation 28

(Christian Schaller)

Been some time now since my last update on what is happening in Fedora Workstation, and with current plans to release Fedora Workstation 28 in early May, I thought this could be a good time to write something. As usual this is just a small subset of what the team has been doing, and I always end up feeling a bit bad for not talking about the avalanche of general fixes and improvements the team adds to each release.

Thunderbolt
Christian Kellner has done a tremendous job keeping everyone informed of his work making sure we have proper Thunderbolt support in Fedora Workstation 28. One important aspect of this improved Thunderbolt support for us is that a lot of the docking stations coming out will require it, and thus without this work you would not be able to use a wide range of docking stations. For a lot of screenshots and more details about how the Thunderbolt support is done, I recommend reading this article on Christian's blog.

3rd party applications
It has taken us quite some time to get here, as getting this feature right involved both a lot of internal discussion about the policies around it and a lot of implementation detail. But starting from Fedora Workstation 28 you will be able to find more 3rd party software listed in GNOME Software if you enable it. The way it will work is that, as part of the initial setup, you will be asked if you want to have 3rd party software show up in GNOME Software. If you are upgrading, you will be asked inside GNOME Software if you want to enable 3rd party software. You can also disable 3rd party software after enabling it from the GNOME Software settings, as seen below:

GNOME Software settings

In Fedora Workstation 27 we did have PyCharm available, but we have now added the NVidia driver and Steam to the list for Fedora Workstation 28.

We have also been working with Google to try to get Chrome included here, and we are almost there: they merged, for instance, the needed AppStream metadata some time ago, but the last steps require some tweaking of how Google generates their package repository (basically adding the AppStream metadata to their yum repository), and we don’t have a clear timeline for when that will happen. As soon as it does, Chrome will also appear in GNOME Software if you have 3rd party software enabled.

As we speak all 3rd party packages are RPMs, but we expect that going forward we will be adding applications packaged as Flatpaks too.

Finally if you want to propose 3rd party applications for inclusion you can find some instructions for how to do it here.

Virtualbox guest
Another major feature that got some attention and that we worked on for this release was Hans de Goede's work to ensure Fedora Workstation can run as a VirtualBox guest out of the box. We know there are many people whose first experience with Linux is running it under VirtualBox on Windows or Mac OS X, and we wanted to make that first experience as good as possible. Hans worked with the VirtualBox team to clean up their kernel drivers and agree on a stable ABI so that they could be merged into the kernel and maintained there from now on.

Firmware updates
The Spectre/Meltdown situation did hammer home to a lot of people the need to have firmware updates easily available and easy to apply. We created the Linux Vendor Firmware Service for Fedora Workstation users with that in mind, and it was great to see the service paying off for many Linux users, not only on Fedora, but also on other distributions who started using the service we provided. I would like to call out Dell, who was a critical partner for the Linux Vendor Firmware effort from day 1, and thus their users got the most benefit from it when Spectre and Meltdown hit. Spectre and Meltdown also helped get a lot of other vendors off the fence or accelerated their efforts to support LVFS, and Richard Hughes and Peter Jones have been working closely with a lot of new vendors during this cycle to get support for their hardware and devices into LVFS. In fact, Peter even flew down to the offices of one of the biggest laptop vendors recently to help them resolve the last issues before their hardware starts showing up in the firmware service. Thanks to the work of Richard Hughes and Peter Jones you will see not only a wider range of brands supported in the Linux Vendor Firmware Service in Fedora Workstation 28, but also a wider range of device classes.

Server side GL Vendor Neutral Dispatch
This is a bit of a technical detail, but Adam Jackson and Lyude Paul have been working hard this cycle on getting what we call server-side GLVND ready for Fedora Workstation 28. Currently we are looking at enabling it either as a zero-day update or shortly afterwards. So what is server-side GLVND, you say? Well, it is basically the last missing piece we need to enable the use of the NVidia binary driver through XWayland. Currently the NVidia driver works with Wayland-native OpenGL applications, but if you are trying to run an OpenGL application requiring X we need this to support it. And to be clear, once we ship this in Fedora Workstation 28 it will also require a driver update from NVidia to use it, so us shipping it is just step 1 here. We also expect there to be some need for further tuning once we get all the pieces released, to reach top notch performance. Of course over time we hope and expect all applications to become Wayland-native, but this is a crucial transition technology for many of our users. And if you are using Intel or AMD graphics with the Mesa drivers, things already work great and this change will not affect you in any way.

Flatpak
Flatpaks basically already work, but we have kept our focus this time around on fleshing out the story in terms of the so-called Portals. Portals are essentially how applications are meant to be able to interact with things outside of the container on your desktop. Jan Grulich has put in a lot of great effort making sure we get portal support for Qt and KDE applications, most recently by adding support for the screen capture portal on top of PipeWire. You can read more about that on Jan Grulich's blog. He is now focusing on getting the printing portal working with Qt.

Wim Taymans has also kept going full steam ahead on PipeWire, which is critical for enabling applications dealing with cameras and similar devices on your system to be containerized. More details on that in my previous blog entry talking specifically about PipeWire.

It is also worth noting that we are working with Canonical engineers to ensure Portals also works with Snappy as we want to ensure that developers have a single set of APIs to target in order to allow their applications to be sandboxed on Linux. Alexander Larsson has already reviewed quite a bit of code from the Snappy developers to that effect.

Performance work
Our engineers have spent significant time looking at various performance and memory improvements since the last release. The main credit for the much talked-about ‘memory leak’ fix goes to Georges Basile Stavracas Neto from Endless, but many from our engineering team helped with diagnosing it and also fixed many other smaller issues along the way. More details about the ‘memory leak’ fix are in Georges' blog.

We are not done here though and Alberto Ruiz is organizing a big performance focused hackfest in Cambridge, England in May. We hope to bring together many of our core engineers to work with other members of the community to look at possible improvements. The Raspberry Pi will be the main target, but of course most improvements we do to make GNOME Shell run better on a Raspberry Pi also means improvements for normal x86 systems too.

Laptop Battery life
In our efforts to make Linux even better on laptops, Hans de Goede spent a lot of time figuring out things we could do to give Fedora Workstation 28 better battery life. How valuable these changes are will of course depend on your exact hardware, but I expect more or less everyone to get a bit better battery life on Fedora Workstation 28, and for some it could be a lot better. You can read a bit more about these changes on Hans de Goede's blog.

by uraeus at April 24, 2018 05:15 PM

April 23, 2018

Sebastian DrögeGLib/GIO async operations and Rust futures + async/await

(Sebastian Dröge)

Unfortunately I was not able to attend the Rust+GNOME hackfest in Madrid last week, but I could at least spend some of my work time at Centricular on implementing one of the things I wanted to work on during the hackfest. The other one, more closely related to the gnome-class work, will be the topic of a future blog post once I actually have something to show.

So back to the topic. With the latest Git version of the Rust bindings for GLib, GTK, etc. it is now possible to make use of the Rust futures infrastructure for GIO async operations and various other functions. This should make writing GNOME, and in general GLib-using, applications in Rust quite a bit more convenient.

For the impatient, the summary is that you can use Rust futures with GLib and GIO now, that it works both on the stable and nightly version of the compiler, and with the nightly version of the compiler it is also possible to use async/await. An example with the latter can be found here, and an example just using futures without async/await here.

Table of Contents

  1. Futures
    1. Futures in Rust
    2. Async/Await
    3. Tokio
  2. Futures & GLib/GIO
    1. Callbacks
    2. GLib Futures
    3. GIO Asynchronous Operations
    4. Async/Await
  3. The Future

Futures

First of all, what are futures and how do they work in Rust? In a few words, a future (also called a promise elsewhere) is a value that represents the result of an asynchronous operation, e.g. establishing a TCP connection. The operation itself (usually) runs in the background, and only once the operation finishes (or fails) does the future resolve to the result of that operation. There are all kinds of ways to combine futures, e.g. to execute some other (potentially async) code with the result once the first operation has finished.

It’s a concept that is also widely used in various other programming languages (e.g. C#, JavaScript, Python, …) for asynchronous programming and can probably be considered a proven concept at this point.

Futures in Rust

In Rust, a future is basically an implementation of a relatively simple trait called Future. The following is the definition as of now, but there are currently discussions to change/simplify/generalize it and to also move it to the Rust standard library:

pub trait Future {
    type Item;
    type Error;

    fn poll(&mut self, cx: &mut task::Context) -> Poll<Self::Item, Self::Error>;
}

Anything that implements this trait can be considered an asynchronous operation that resolves to either an Item or an Error. Consumers of the future would call the poll method to check if the future has resolved already (to a result or error), or if the future is not ready yet. In the latter case, the future itself would, at a later point once it is ready to proceed, notify the consumer about that. It gets the means for such notifications from the Context that is passed in, and proceeding does not necessarily mean that the future will resolve afterwards; it could just advance its internal state closer to the final resolution.

Calling poll manually is kind of inconvenient, so generally this is handled by an Executor on which the futures are scheduled and which runs them until their resolution. Equally, it’s inconvenient to have to implement that trait directly, so for most common operations there are combinators that can be used on futures to build new futures, usually via closures in one way or another. For example, the following runs the passed closure with the successful result of the future and has it return another future (Ok(()) is converted via IntoFuture to the future that always resolves successfully with ()), and also maps any errors to ()

fn our_future() -> impl Future<Item = (), Error = ()> {
    some_future
        .and_then(|res| {
            do_something(res);
            Ok(())
        })
        .map_err(|_| ())
}

A future represents only a single value, but there is also a trait for something that produces multiple values: a Stream. For more details, it's best to check the documentation.
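
For orientation, here is a rough sketch of what the Stream trait looks like in the corresponding version of the futures crate; it mirrors the Future definition above but resolves to Some(item) for each value and to None once the stream has finished (the documentation has the authoritative definition):

pub trait Stream {
    type Item;
    type Error;

    // Like Future::poll, but yields Some(item) for every value produced
    // and None once the stream has ended
    fn poll_next(&mut self, cx: &mut task::Context) -> Poll<Option<Self::Item>, Self::Error>;
}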

Async/Await

The above way of combining futures via combinators and closures is still not too great, and is still close to callback hell. In other languages (e.g. C#, JavaScript, Python, …) this was solved by introducing new features to the language: async for declaring futures with normal code flow, and await for suspending execution transparently and resuming at that point in the code with the result of a future.

Of course this was also implemented in Rust. It is currently based on procedural macros, but there are discussions about moving this directly into the language and standard library as well.

The above example would look something like the following with the current version of the macros

#[async]
fn our_future() -> Result<(), ()> {
    let res = await!(some_future)
        .map_err(|_| ())?;

    do_something(res);
    Ok(())
}

This looks almost like normal, synchronous code, but it is internally converted into a future and is completely asynchronous.

Unfortunately this is currently only available on the nightly version of Rust until various bits and pieces get stabilized.

Tokio

Most of the time when people talk about futures in Rust, they implicitly also mean Tokio. Tokio is a pure Rust, cross-platform asynchronous IO library based on the futures abstraction above. It provides a futures executor and various types for asynchronous IO, e.g. sockets and socket streams.

But while Tokio is a great library, we’re not going to use it here; instead we implement a futures executor around GLib. On top of that we implement various futures, also around GLib’s sister library GIO, which provides lots of API for synchronous and asynchronous IO.

Just like all IO operations in Tokio, all GLib/GIO asynchronous operations are dependent on running with their respective event loop (i.e. the futures executor) and while it’s possible to use both in the same process, each operation has to be scheduled on the correct one.

Futures & GLib/GIO

In GLib, asynchronous operations and generally everything event-related (timeouts, …) are based on callbacks that you have to register, and they run via a GMainLoop that executes events from a GMainContext. The latter is just something that stores everything that is scheduled and provides API for polling whether something is ready to be executed now, while the former does exactly that: executing.

Callbacks

The callback based API is also available via the Rust bindings, and would for example look as follows

glib::timeout_add(20, || {
    do_something_after_20ms();
    glib::Continue(false) // don't call again
});

glib::idle_add(|| {
    do_something_from_the_main_loop();
    glib::Continue(false) // don't call again
});

some_async_operation(|res| {
    match res {
        Err(err) => report_error_somehow(),
        Ok(res) => {
            do_something_with_result(res);
            some_other_async_operation(|res| {
                do_something_with_other_result(res);
            });
        }
    }
});

As can be seen here already, the callback-based approach leads to quite non-linear code and deep indentation due to all the closures. Error handling also becomes quite tricky, because errors somehow have to be handled from a completely different call stack.

Compared to C this is still far more convenient due to actually having closures that can capture their environment, but we can definitely do better in Rust.

The above code also assumes that somewhere a main loop is running on the default main context, which could be achieved with the following e.g. inside main()

let ctx = glib::MainContext::default();
let l = glib::MainLoop::new(Some(&ctx), false);
ctx.push_thread_default();

// All operations here would be scheduled on this main context
do_things(&l);

// Run everything until someone calls l.quit()
l.run();
ctx.pop_thread_default();

It is also possible to explicitly select for various operations on which main context they should run, but that’s just a minor detail.

GLib Futures

To make this situation a bit nicer, I’ve implemented support for futures in the Rust bindings. This means that the GLib MainContext is now a futures executor (and arbitrary futures can be scheduled on it), all the GSource-related operations in GLib (timeouts, UNIX signals, …) have futures- or stream-based variants, and all the GIO asynchronous operations also come with futures variants now. The latter are autogenerated with the gir bindings code generator.

To enable usage of this, the futures feature of the glib and gio crates has to be enabled, but that’s about it. It is currently still hidden behind a feature gate because the futures infrastructure is still going to go through some API-incompatible changes in the near future.

So let’s take a look at how to use it. First of all, setting up the main context and executing a trivial future on it

let c = glib::MainContext::default();
let l = glib::MainLoop::new(Some(&c), false);

c.push_thread_default();

// Spawn a future that is called from the main context
// and after printing something just quits the main loop
let l_clone = l.clone();
c.spawn(futures::lazy(move |_| {
    println!("we're called from the main context");
    l_clone.quit();
    Ok(())
}));

l.run();

c.pop_thread_default();

Apart from spawn(), there is also a spawn_local(). The former can be called from any thread but requires the future to implement the Send trait (that is, it must be safe to send it to other threads), while the latter can only be called from the thread that owns the main context but allows any kind of future to be spawned. In addition there is also a block_on() function on the main context, which allows running non-static futures up to their completion and returns their result. The spawn functions only work with static futures (i.e. futures that have no references to any stack frame) and require the futures to be infallible and to resolve to ().

The above code already showed one of the advantages of using futures: it is now possible to use all generic futures (ones that don’t require a specific executor), like futures::lazy or the mpsc/oneshot channels, with GLib. The same goes for any of the combinators that are available on futures, for example

let c = MainContext::new();

let res = c.block_on(timeout_future(20)
    .and_then(move |_| {
        // Called after 20ms
        Ok(1)
    })
);

assert_eq!(res, Ok(1));

This example also shows how the block_on functionality returns an actual value from the future (1 in this case).
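
To illustrate the channel part too, here is a minimal, untested sketch using a oneshot channel together with the GLib executor (assuming the futures 0.2 module layout, i.e. futures::channel::oneshot): a value is sent from a future spawned on a main context and received via block_on:

let c = glib::MainContext::new();

// Assumption: futures 0.2 module layout (futures::channel::oneshot)
let (sender, receiver) = futures::channel::oneshot::channel::<u32>();

// Complete the channel from a future running on the GLib main context
c.spawn(futures::lazy(move |_| {
    let _ = sender.send(42);
    Ok(())
}));

// block_on iterates the main context until the receiver future resolves
let res = c.block_on(receiver);
assert_eq!(res, Ok(42));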

GIO Asynchronous Operations

Similarly, all asynchronous GIO operations are now available as futures. For example, to open a file asynchronously and get a gio::InputStream to read from, the following could be done

let file = gio::File::new_for_path("Cargo.toml");

let l_clone = l.clone();
c.spawn_local(
    // Try to open the file
    file.read_async_future(glib::PRIORITY_DEFAULT)
        .map_err(|(_file, err)| {
            format!("Failed to open file: {}", err)
        })
        .and_then(move |(_file, strm)| {
            // Here we could now read from the stream, but
            // instead we just quit the main loop
            l_clone.quit();

            Ok(())
        })
);

A bigger example can be found in the gtk-rs examples repository here. This example is basically reading a file asynchronously in 64 byte chunks and printing it to stdout, then closing the file.

In the same way, network operations or any other asynchronous operation can be handled via futures now.

Async/Await

Compared to a callback-based approach, that bigger example is already a lot nicer but still quite heavy to read. With the async/await extension that I already mentioned above, the code looks much nicer in comparison and reads almost like synchronous code, except that it is not synchronous.

#[async]
fn read_file(file: gio::File) -> Result<(), String> {
    // Try to open the file
    let (_file, strm) = await!(file.read_async_future(glib::PRIORITY_DEFAULT))
        .map_err(|(_file, err)| format!("Failed to open file: {}", err))?;

    Ok(())
}

fn main() {
    [...]
    let future = async_block! {
        match await!(read_file(file)) {
            Ok(()) => (),
            Err(err) => eprintln!("Got error: {}", err),
        }
        l_clone.quit();
        Ok(())
    };

    c.spawn_local(future);
    [...]
}

For compiling this code, the futures-nightly feature has to be enabled for the glib crate, and a nightly compiler must be used.

The bigger example from before with async/await can be found here.

With this we’re already very close in Rust to having the same convenience for asynchronous programming as in other languages. It is also very similar to what is possible in Vala with GIO asynchronous operations.

The Future

For now this is all finished and available from the GIT repositories of the glib and gio crates. This will have to be updated in the future whenever the futures API changes, but the plan is to stabilize all of this in Rust by the end of this year.

In the future it might also make sense to add futures variants for all the GObject signal handlers, so that e.g. handling a click on a GTK+ button could be done similarly from a future (or rather from a Stream, as a signal can be emitted multiple times). Whether this is in the end more convenient than the callback-based approach that is currently used remains to be seen; some experimentation would be necessary here. How to handle the return values of signal handlers would also have to be figured out.

by slomo at April 23, 2018 08:46 AM

April 17, 2018

Víctor JáquezHow to setup a gst-build environment with Intel’s VA-API stack

gst-build is far superior to the gst-uninstalled scripts for developing GStreamer, mainly because of its use of Meson and Ninja. Nonetheless, integrating external dependencies is not as easy as with gst-uninstalled.

This guide aims to show how to integrate GStreamer-VAAPI dependencies, in this case with the Intel VA-API driver.

Dependencies

For now we will need Meson from master, since this patch is required. The pull request is already merged but has not been released yet.

Installation

clone gst-build repository

$ git clone git://anongit.freedesktop.org/gstreamer/gst-build
$ cd gst-build

apply custom patch for gst-build

The patch will add the repositories for libva and intel-vaapi-driver.

$ wget https://people.igalia.com/vjaquez/gst-build-vaapi/0001-add-libva-and-intel-vaapi-driver-as-subprojects.patch
$ git am 0001-add-libva-and-intel-vaapi-driver-as-subprojects.patch

0001-add-libva-and-intel-vaapi-driver-as-subprojects.patch

configure

Running this command will clone all the dependency repositories, create the symbolic links, and configure the build directory.

$ meson build

apply custom patches for libva, intel-vaapi-driver and gstreamer-vaapi

libva

This patch is required since the uninstalled paths of the header files don’t match the ones in the “include” directives.

$ cd libva
$ wget https://people.igalia.com/vjaquez/gst-build-vaapi/0001-build-add-headers-for-uninstalled-setup.patch
$ git am 0001-build-add-headers-for-uninstalled-setup.patch
$ cd -

0001-build-add-headers-for-uninstalled-setup.patch

intel-vaapi-driver

The patch handles the libva dependency as a subproject.

$ cd intel-vaapi-driver
$ wget https://people.igalia.com/vjaquez/gst-build-vaapi/0001-meson-support-libva-as-subproject.patch
$ git am 0001-meson-support-libva-as-subproject.patch
$ cd -

0001-meson-support-libva-as-subproject.patch

gstreamer-vaapi

Note to self: this patch must be split up and merged upstream.

$ cd gstreamer-vaapi
$ wget https://people.igalia.com/vjaquez/gst-build-vaapi/0001-build-meson-libva-gst-uninstall-friendly.patch
$ git am 0001-build-meson-libva-gst-uninstall-friendly.patch
$ cd -

0001-build-meson-libva-gst-uninstall-friendly.patch updated: 2018/04/24

build

$ ninja -C build

And wait a couple of minutes.

run uninstalled environment for testing

$ ninja -C build uninstalled
[gst-master] $ gst-inspect-1.0 vaapi
Plugin Details:
  Name                     vaapi
  Description              VA-API based elements
  Filename                 /opt/gst/gst-build/build/subprojects/gstreamer-vaapi/gst/vaapi/libgstvaapi.so
  Version                  1.15.0.1
  License                  LGPL
  Source module            gstreamer-vaapi
  Binary package           gstreamer-vaapi
  Origin URL               http://bugzilla.gnome.org/enter_bug.cgi?product=GStreamer

  vaapih264enc: VA-API H264 encoder
  vaapimpeg2enc: VA-API MPEG-2 encoder
  vaapisink: VA-API sink
  vaapidecodebin: VA-API Decode Bin
  vaapipostproc: VA-API video postprocessing
  vaapivc1dec: VA-API VC1 decoder
  vaapih264dec: VA-API H264 decoder
  vaapimpeg2dec: VA-API MPEG2 decoder
  vaapijpegdec: VA-API JPEG decoder

  9 features:
  +-- 9 elements

by vjaquez at April 17, 2018 02:43 PM

April 11, 2018

Nirbheek ChauhanA simple method of measuring audio latency

In my previous blog post, I talked about how I improved the latency of GStreamer's default audio capture and render elements on Windows.

An important part of any such work is a way to accurately measure the latencies in your audio path.

Ideally, one would use a mechanism that can track your buffers and give you a detailed breakdown of how much latency each component of your system adds. For instance, with an audio pipeline like this:

audio-capture → filter1 → filter2 → filter3 → audio-output

If you use GStreamer, you can use the latency tracer to measure how much latency filter1 adds, filter2 adds, and so on.

However, sometimes you need to measure latencies added by components outside of your control, for instance the audio APIs provided by the operating system, the audio drivers, or even the hardware itself. In that case it's really difficult, bordering on impossible, to do an automated breakdown.

But we do need some way of measuring those latencies, and I needed that for the aforementioned work. Maybe we can get an aggregated (total) number?

There's a simple way to do that if we can create a loopback connection in the audio setup. What's a loopback you ask?

Ouroboros snake biting its tail

Essentially, if we can redirect the audio output back to the audio input, that's called a loopback. The simplest way to do this is to connect the speaker-out/line-out to the microphone-in/line-in with a two-sided 3.5mm jack.

photo of male-to-male 3.5mm jack connecting speaker-out to mic-in

Now, when we send an audio wave down to the audio output, it'll show up on the audio input.

Hmm, what if we store the current time when we send the wave out, and compare it with the current time when we get it back? Well, that's the total end-to-end latency!

If we send out a wave periodically, we can measure the latency continuously, even as things are switched around or the pipeline is dynamically reconfigured.

Some of you may notice that this is somewhat similar to how the `ping` command measures latencies across the Internet.

screenshot of ping to 192.168.1.1


Just like a network connection, the loopback connection can be lossy or noisy, f.ex. if you use loudspeakers and a microphone instead of a wire, or if you have (ugh) noise in your line. But unlike network packets, we lose all context once the waves leave our pipeline and we have no way of uniquely identifying each wave.

So the simplest reliable implementation is to have only one wave traveling down the pipeline at a time. If we send a wave out, say, once a second, we can wait about one second for it to show up, and otherwise presume that it was lost.
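
To make that concrete, here is a rough sketch of such a measurement loop, written in Rust purely for illustration; the actual audiolatency plugin is a GStreamer element and its internals differ, and send_wave/wait_for_wave are hypothetical stand-ins for the real audio output and detection logic:

use std::thread::sleep;
use std::time::{Duration, Instant};

// Hypothetical stand-ins for the real audio I/O:
// send_wave() would push a short tone to the audio output,
// wait_for_wave() would block until the tone shows up on the input
// (or the timeout expires) and report whether it was seen.
fn send_wave() { /* push a short tone to the output */ }
fn wait_for_wave(_timeout: Duration) -> bool { /* detect it on the input */ true }

fn main() {
    loop {
        let sent_at = Instant::now();
        send_wave();

        if wait_for_wave(Duration::from_secs(1)) {
            // Total end-to-end latency: output path + loopback + input path
            println!("end-to-end latency: {:?}", sent_at.elapsed());
        } else {
            println!("wave lost, trying again");
        }

        // Only one wave in flight at a time, roughly once a second
        sleep(Duration::from_secs(1));
    }
}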

That's exactly how the audiolatency GStreamer plugin that I wrote works! Here you can see its output while measuring the combined latency of the WASAPI source and sink elements:


The first measurement will always be wrong because of various implementation details in the audio stack, but the next measurements should all be correct.

This mechanism does place an upper bound on the latency that we can measure, and on how often we can measure it, but it should be possible to take more frequent measurements by sending a new wave as soon as the previous one was received (with a 1 second timeout). So this is an enhancement that can be done if people need this feature.

Hope you find the element useful; go forth and measure!

by Nirbheek (noreply@blogger.com) at April 11, 2018 02:13 PM