August 21, 2015

Jean-François Fortin Tam: Help us get the GUADEC 2014 videos published

For those who could not attend GUADEC 2015, video recordings have been processed and published here. You might wonder, then, what happened to the GUADEC 2014 videos. The talks in Strasbourg were indeed recorded, but the audio came from the camera’s built-in microphones (so no truly directional mic and no line-in feed). This is problematic for a number of reasons:

  • We were in the city center of Strasbourg with no air conditioning, which meant that the windows were open and we heard all sorts of noises (cars passing on the stone pavement, construction work, etc.) on top of the usual background noise.
  • One of the rooms did not have a speaker microphone/amplified sound system.
  • The camera microphones being far from the speaker means that you also hear noises from the audience (such as chairs moving).

So the videos required a significant amount of processing to be adequate for publishing. So far, Bastian Ilsø has been doing the majority of the work and has managed to process about 25% of the talks (Alexandre Franke has also been working on sound processing).

This is where you come in. If you have a little bit of patience and a good pair of headphones, you can help! Take a look at our current dashboard and poke afranke on #guadec (or reach him by email) to let us know you will be taking care of talk XYZ, and he will send you a link to the corresponding audio file (and the video if you need it). You can then do the processing in Audacity: remove the background noise and the occasional loud noises (e.g. chairs), then amplify the whole sound track. You can find detailed instructions on the recommended way to do that here.
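
If you are more comfortable on the command line, SoX can do a rough first pass of the same noise-reduction-then-amplify workflow; this is only a sketch, assuming a WAV export of the talk audio and that the first two seconds contain nothing but room noise:

# Build a noise profile from a noise-only stretch, then reduce and normalize.
sox talk.wav -n trim 0 2 noiseprof noise.prof
sox talk.wav talk-cleaned.wav noisered noise.prof 0.21 norm -3

The reduction amount (0.21 here) is only a starting point to tweak by ear; set it too high and the speech starts to sound watery.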

Processing one video’s soundtrack should normally take you a maximum of two hours, since you have to listen, pause, and add silences to remove the occasional loud sounds. We’d like to get this accomplished as quickly as possible, so please get involved only if you can commit to spending a few hours on this soon.

Once you’re done, let us know and send us the processed sound file — we will then include it in the video for final editing and publishing.

If a dozen of us processed two of those talks each, we might be done within a week or two! So roll up your sleeves and help us get those important recordings completed for posterity.

Also, GUADEC 2015 speakers: if you haven’t done so already, please email your slides to Alexandre Franke so we can include them with the video files this year.

by nekohayo at August 21, 2015 10:00 AM

Arun Raghavan: GUADEC 2015

This one’s a bit late, for reasons that’ll be clear enough later in this post. I had the happy opportunity to go to GUADEC in Gothenburg this year (after missing the last two, unfortunately). It was a great, well-organised event, and I felt super-charged again, meeting all the people making GNOME better every day.

GUADEC picnic @ Gothenburg

I presented a status update of what we’ve been up to in the PulseAudio world in the past few years. Amazingly, all the videos are up already, so you can catch up with anything that you might have missed here.

We also had a meeting of PulseAudio developers, and a number of interesting topics of discussion came up (I’ll try to summarise my notes in a separate post).

A bunch of other interesting discussions happened in the hallways, and I’ll write about that if my investigations take me some place interesting.

Now the downside: I ended up missing the BoF part of GUADEC, and all of the GStreamer hackfest in Montpellier afterwards. As it happens, I contracted dengue and I’m still recovering from it. Fortunately it was the lesser (non-haemorrhagic) form without any complications, so now it’s just a matter of resting until I’ve recuperated completely.

Nevertheless, the first part of the trip was great, and I’d like to thank the GNOME Foundation for sponsoring my travel and stay, without which I would have missed out on all the GUADEC fun this year.

Sponsored by GNOME!

by Arun at August 21, 2015 06:21 AM

August 19, 2015

GStreamer: GStreamer Core, Plugins, RTSP Server 1.6.0 release candidate (1.5.90)


The GStreamer team is pleased to announce the first release candidate for the stable 1.6 release series. The 1.6 release series adds new features on top of the 1.0, 1.2 and 1.4 series and is part of the API- and ABI-stable 1.x release series of the GStreamer multimedia framework. The final 1.6.0 release is planned for the next few days, unless any major bugs are found.

Binaries for Android, iOS, Mac OS X and Windows will be provided separately by the GStreamer project.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server.

Check the release announcement mail for details and the release notes above for a list of changes.

August 19, 2015 02:29 PM

August 15, 2015

Jean-François Fortin Tam: GUADEC 2015, surviving a fire and plane accident, etc.

So I went to GUADEC, as a community member and one of the directors of the GNOME Foundation.


Thanks to the Foundation for covering my travel and accommodation—it makes all the difference!

gnome sponsored badge shadow

It was a really good GUADEC. The local team put a huge amount of effort into making it possible. There is a public page for sending feedback, and so far it’s pretty positive.


It was also one of the most relaxing and enjoyable conferences I’ve attended since 2011:

  • This year I did not have talks to prepare, sessions to chair, contracts or work obligations to fulfill, hackfests to hold or students to mentor for Pitivi. This meant that I was able to focus solely on attending the talks and talking with as many contributors as possible—and that is of incredible value. I also had good times with friends I hadn’t seen in a long while.
  • The hostel’s showers had hot water.
  • Credit cards were accepted (and encouraged) everywhere—no leftover money in a niche currency!
  • Göteborg is very pedestrian-friendly and everyone on the road is courteous and patient.
  • Besides tap water, there’s sparkling water on tap! What more could you wish for?

A Mexican standoff between three GStreamer developers. This is what a real deadlock looks like in GStreamer.

I prepared two lightning talks on the spot. The first of them was a big satire and, apparently, most people did not quite realize I was trolling, except Elad, whose laugh I could hear all across the auditorium. The French conspiracy looked at me with a vengeful stare over dinner but luckily, since I did not go to the football match, I did not get my arse kicked by Bastien.


This is the door I came from before giving the first lightning talk

After the core days, I organized a BoF (“Birds of a Feather”) session on “GNOME Shell extensions as 1st-class citizens” to make life easier for contributors, users and enterprise distributors:


Besides the work to finish the port to Wayland, I see extensions as being the “next big step”, the remaining high-profile hurdle to solve in GNOME Shell. This is a fairly big topic though, so I’ll keep this for a separate blog post where I will share the conclusions of the discussion from the BoF.

I took a handful of photos—nothing much compared to my previous trigger-happy years. I also created a wiki page where you can list your albums so we can avoid searching all across the Internet.

There were also some presentations at the Pelagicore offices on Tuesday night:


Now, if you would excuse me for a moment, I would like to tell you about my return trip. I find it to be a rather fascinating story. Read on, you won’t regret it.

As I woke up on the day of my departure from Göteborg, I received a notification that the second flight (of my three-segment trip) would be delayed by one hour. When I arrived at Göteborg’s Landvetter airport, I therefore asked the check-in desk if they could rebook the third flight, as the one-hour connection in Toronto would no longer be sufficient. They said that it could not be done from Göteborg; it had to be done by the Lufthansa crew in Frankfurt. Oh well, no matter! I gleefully embarked on the flight to Frankfurt, as I would be sitting in adjacent seats with good friends from the GNOME community.

Then I landed in Frankfurt.

How to survive a fire in the airport, a plane accident, and Lufthansa sandwiches

I basically had to speedhack my way through the whole airport. I’ll spare you some details and, to save some space, abbreviate “Service Desk” to “SD” as they kept referring me from one to another:

  1. SD 1 → SD 2 → SD 3 (service center)
  2. Get a number, sit for some time in a waiting area in front of six agents who seem to be playing Starcraft rather than actually processing customers, while noticing an odd barbecue smell and haze in the surrounding area.
  3. Notice a team of firefighters walking past you and walling off an entire section of the airport with a sliding metal door as an alarm goes off:
  4. Pinch your nose and time your breathing as you notice the smoke that keeps permeating the hall and food court; observe the travelers apparently not noticing that they are breathing carbon monoxide.
  5. As the Service Desk employees finish their Starcraft round, get told that we need to evacuate and that “You must go ask another service desk in that general direction”.
  6. Walk across the airport a bit, with travelers in the hall still not aware of what’s going on:
    fog zombies
  7. SD 4 → SD 6 (as directed), walking past SD 5.
  8. SD 6 tells me I should go to SD 7, “up that escalator and in the hall on the right”… At which point I decide to glitch my way into things and go to SD 5 instead, thinking, “I’ll be damned if the First Class service counter can’t sort this out when asked nicely”.
  9. Get rebooking from SD 5, thanks to my irresistible charm.
  10. Walk across the airport again in a whole new direction. Get in the disordered “line” for border control (Frankfurt is quite bad at that “space management and order” thing).
  11. Go through Frankfurt’s notoriously horrendous security checkpoint bottleneck
  12. Finally get to the gate and board the one-hour-late plane.

“That went fairly well”, thought I, as I ate my leftover Lufthansa sandwich, “I won’t have to worry about the rest of the trip”.

The plane taxies to the runway. Ready for take off! Main engines turn on, take off every zig!! Full thrott—WHAM. The plane stops dead in its tracks after ten meters, as if we had hit a cow on the runway.

Turns out one of the engines went kaputt during take-off.

Back to the gate for investigation and evacuation… except that the airport gate was broken. I shit you not.

and then we told them we're going back to the gate

So, after we sat for roughly an hour in a broken plane in front of a broken boarding gate, technicians confirmed the engine dead and the plane unsafe to fly. We got the “evacuation by bus and tour across the whole airport” treatment, ending up at a different gate.


The tarmac party bus—can you spot the GNOME in that photo?

The passengers were surprisingly calm while the crew explained that we could not rebook here, that it was only possible at that flight’s destination (Toronto), where the ground team (hereafter known as “welcoming party”) would be waiting for us to deal with any issues. Therefore, my whole Frankfurt multi-desk rebooking hackventure was for naught.

While waiting at that odd terminal gate, we got some (sparkling!) apple juice and light snacks (I actually didn’t get any of those; the horde beat us to it). Eventually we hopped back onto the buses, back onto a plane, and took off into the glorious sunset.

When I arrived in Toronto, there was of course no welcoming party, but I’d gotten pretty good at speedrunning airports anyway. The first service desk in Toronto printed my new (rebooked) boarding pass incorrectly and I had to go to another service desk (who rudely told me, “Why did you wait here? You should have done that at the gate you’re supposed to board! You’ll miss your plane!” — never mind the fact that I had an hour ahead of me and that I’m used to transferring with as little as 20 minutes).

On the upside, if you’re seeing this post, it means I made it out alive 😃 Who said the job of being on the Board was without peril?

Like Freud once said, I can heartily recommend the Luft Hansa to anyone.

true story

When I summarized my story to an off-duty attendant on my final flight, she said, “Wow. Now I’m a little scared that you’ll be jinxing this flight too”, but I did arrive home safely in the end, albeit 7 hours later than expected.

Top 10 reasons to fly Frankfurt → Toronto

During the AGM session, I was amused by Rosanna’s “Top ten reasons to organise GUADEC in your city”. Turns out she was on the same flight as me, so I felt inspired to give you the top ten reasons to fly LH470 from Frankfurt to Toronto:

  1. You get free sparkling apple juice! (although Rosanna nearly choked and spilled hers when I said that)
  2. TWO free bus tours of the tarmac!
  3. Stay in shape by loading and unloading your hand luggage twice!
  4. Improve your over-the-counter charm and negotiation skills
  5. Discover unknown areas of the Frankfurt airport
  6. Mock the Germans and their legendary organization/efficiency
  7. Strengthen relationships with passengers when you carry a power strip
  8. Take pictures of people taking pictures in front of a plane’s turboreactor
  9. Justify higher-than-usual intake of alcohol on the plane
  10. Increased appreciation of your home, friends and family


by nekohayo at August 15, 2015 07:17 PM

August 04, 2015

Andy Wingo: developing v8 with guix

(Andy Wingo)

a guided descent into hell

It all started off so simply. My primary development machine is a desktop computer that I never turn off. I suspend it when I leave work, and then resume it when I come back. It's always where I left it, as it should be.

I rarely update this machine because it works well enough for me, and anyway my focus isn't the machine, it's the things I do on it. Mostly I work on V8. The setup is so boring that I certainly didn't imagine myself writing an article about it today, but circumstances have forced my hand.

This machine runs Debian. It used to run the testing distribution, but somehow in the past I needed something that wasn't in testing so it runs unstable. I've been using Debian for some 16 years now, though not continuously, so although running unstable can be risky, usually it isn't, and I've unborked it enough times that I felt pretty comfortable.

Perhaps you see where this is going!

I went to install something, I can't even remember what it was now, and the downloads failed because I hadn't updated in a while. So I update, install the thing, and all is well. Except my instant messaging isn't working any more because there are a few moving parts (empathy / telepathy / mission control / gabble / dbus / whatwhat), and the install must have pulled in something that broke one of them. No biggie, this happens. Might as well go ahead and update the rest of the system while I'm at it and get a reboot to make sure I'm not running old software.

Most Debian users know that you probably shouldn't do a dist-upgrade from an old system -- you upgrade and then you dist-upgrade. Or perhaps this isn't even true, it's tribal lore to avoid getting eaten by the wild beasts of bork that roam around the village walls at night. Anyway that's what I did -- an upgrade, let it chunk for a while, then a dist-upgrade, check the list to make sure it didn't decide to remove one of my kidneys to satisfy the priorities of the bearded demon that lives inside apt-get, OK, let it go, all is well, reboot. Swell.

Or not! The computer restarts to a blank screen. Ha ha ha you have been bitten by a bork-beast! Switch to a terminal and try to see what's going on with GDM. It's gone! Ha ha ha! Your organs are being masticated as we speak! How does that feel! Try to figure out which package is causing it, happily with another computer that actually works. Surely this will be fixed in some update coming soon. Oh it's something that's going to take a few weeks!!!! Ninth level, end of the line, all passengers off!

my gods

I know how we got here, I love Debian, but it is just unacceptable and revolting that software development in 2015 is exposed to an upgrade process which (1) can break your system (2) by default and (3) can't be rolled back. The last one is the killer: who would design software this way? If you make a system like this in 2015 I'd say you're committing malpractice.

Well yesterday I resolved that this would be the last time this happens to me. Of course I could just develop in a virtual machine, and save and restore around upgrades, but that's kinda trash. Or I could use btrfs and be able to rewind changes to the file system, but then it would rewind everything, not just the system state.

Fortunately there is a better option in the form of functional package managers, like Nix and Guix. Instead of upgrading your system by mutating /usr, Nix and Guix store all files in a content-addressed store (/nix/store and /gnu/store, respectively). A user accesses the store via a "profile", which is a forest of symlinks into the store.

For example, on my machine with a NixOS system installation, I have:

$ which ls
/run/current-system/sw/bin/ls

$ ls -l /run/current-system/sw/bin/ls
lrwxrwxrwx 1 root nixbld 65 Jan  1  1970
  /run/current-system/sw/bin/ls ->
  /nix/store/wc472nw0kyw0iwgl6352ii5czxd97js2-coreutils-8.23/bin/ls

$ ldd /nix/store/wc472nw0kyw0iwgl6352ii5czxd97js2-coreutils-8.23/bin/ls
  linux-vdso.so.1 (0x00007fff5d3c4000)
  libacl.so.1 => /nix/store/c2p56z920h4mxw12pjw053sqfhhh0l0y-acl-2.2.52/lib/libacl.so.1 (0x00007fce99d5d000)
  libc.so.6 => /nix/store/la5imi1602jxhpds9675n2n2d0683lbq-glibc-2.20/lib/libc.so.6 (0x00007fce999c0000)
  libattr.so.1 => /nix/store/jd3gggw5bs3a6sbjnwhjapcqr8g78f5c-attr-2.4.47/lib/libattr.so.1 (0x00007fce997bc000)
  /nix/store/la5imi1602jxhpds9675n2n2d0683lbq-glibc-2.20/lib/ld-linux-x86-64.so.2 (0x00007fce99f65000)

Content-addressed linkage means that files in the store are never mutated: they will never be overwritten by a software upgrade. Never. Never will I again gaze in horror at the frozen beardcicles of a Debian system in the throes of "oops I just deleted all your programs, like that time a few months ago, wasn't that cool, it's really cold down here, how do you like my frozen facial tresses and also the horns".

At the same time, I don't have to give up upgrades. Paradoxically, immutable software facilitates change and gives me the freedom to upgrade my system without anxiety and lost work.

nix and guix

So, there's Nix and there's Guix. Both are great. I'll get to comparing them, but first a digression on the ways they can be installed.

Both Nix and Guix can be installed either as the operating system of your computer, or just as a user-space package manager. I would actually recommend that people start with the latter way of working, and move on to the OS if they feel comfortable. The fundamental observation here is that because /nix/store doesn't depend on or conflict with /usr, you can run Nix or Guix as a user on a (e.g.) Debian system with no problems. You can have a forest of symlinks in ~/.guix-profile/bin that links to nifty things you've installed in the store and that's cool, you don't have to tell Debian.

and now look at me

In my case I wanted to also have the system managed by Nix or Guix. GuixSD, the name of the Guix OS install, isn't appropriate for me yet because it doesn't do GNOME. I am used to GNOME and don't care to change, so I installed NixOS instead. It works fine. There have been some irritations -- for example it just took me 30 minutes to figure out how to install dict, with a local wordnet dictionary server -- but mostly it has the packages I need. Again, I don't recommend starting with the OS install though.

GuixSD, the OS installation of Guix, is a bit harder even than NixOS. It has fewer packages, though what it does have tends to be more up-to-date than Nix. There are two big things about GuixSD though. One is that it aims to be fully free, including avoiding non-free firmware. Because they build deterministic build products from source, Nix and Guix can offer completely reproducible builds, which is swell for software reliability. Many reliability people also care a lot about software freedom and although Nix does support software freedom very well, it also includes options to turn on the Flash plugin, for example, and of course includes the Linux kernel with all of the firmware. Well GuixSD eschews non-free firmware, and uses the Linux-Libre kernel. For myself I have a local build on another machine that uses the stock Linux kernel with firmware for my Intel wireless device, and I was really discouraged from even sharing the existence of this hack. I guess it makes sense, it takes a world to make software freedom, but that particular part is not my fight.

The other thing about Guix is that it's really GNU-focused. This is great but also affects the product in some negative ways. They use "dmd" as an init system, for example, which is kinda like systemd but not. One consequence of this is that GuixSD doesn't have an implementation of the org.freedesktop.login1 seat management interface, which these days is implemented by part of systemd, which in turn precludes a bunch of other things GNOME-related. At one point I started working on a fork of systemd that pulled logind out to a separate project, which makes sense to me for distros that want seat management but not systemd, but TBH I have no horse in the systemd race and in fact systemd works well for me. But, a system with elogind would also work well for me. Anyway, the upshot is that unless you care a lot about the distro itself or are willing to adapt to e.g. Xfce or Xmonad or something, NixOS is a more pragmatic choice.

i'm on a horse

I actually like Guix's tools better than Nix's, and not just because they are written in Guile. Guix also has all the tools I need for software development, so I prefer it and ended up installing it as a user-space package manager on this NixOS system. Sounds bizarre but it actually works pretty well.

So, the point of this article is to be a little guide of how to build V8 with Guix. Here we go!

up and running with guix

First, check the manual. It's great and well-written and answers many questions and in fact includes all of this.

Now, I assume you're on an x86-64 Linux system, so we're going to use the awesome binary installation mechanism. Check it out: because everything in /gnu/store is linked directly to each other, all you have to do is to copy a reified /gnu/store onto a working system, then copy a sqlite thing into /var, and you've installed Guix. Sweet, eh? And actually you can take a running system and clone it onto other systems in that way, and Guix even provides a tool to generate such a tarball for you. Neat stuff.

cd /tmp
tar xf guix-binary-0.8.3.x86_64-linux.tar.xz
mv var/guix /var/ && mv gnu /

This Guix installation has a built-in profile for the root user, so let's go ahead and add a link from ~root to the store.

ln -sf /var/guix/profiles/per-user/root/guix-profile \
       ~root/.guix-profile

Since we're root, we can add the bin/ part of the Guix profile to our environment.

export PATH="$HOME/.guix-profile/bin:$HOME/.guix-profile/sbin:$PATH"

Perhaps we add that line to our ~root/.bash_profile. Anyway, now we have Guix. Or rather, we almost have Guix -- we need to start the daemon that actually manages the store. Create some users:

groupadd --system guixbuild

for i in `seq -w 1 10`; do
  useradd -g guixbuild -G guixbuild           \
          -d /var/empty -s `which nologin`    \
          -c "Guix build user $i" --system    \

And now run the daemon:

guix-daemon --build-users-group=guixbuild

If your host distro uses systemd, there's a unit that you can drop into the systemd folder. See the manual.
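
For reference, a minimal unit might look like the sketch below (the exact unit shipped with Guix may differ; adjust the ExecStart path to wherever guix-daemon lives on your system), after which systemctl start guix-daemon does the job:

# /etc/systemd/system/guix-daemon.service -- a minimal sketch
[Unit]
Description=Build daemon for GNU Guix

[Service]
ExecStart=/var/guix/profiles/per-user/root/guix-profile/bin/guix-daemon --build-users-group=guixbuild
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target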

A few more things. One, usually when you go to install something, you'll want to fetch a pre-built copy of that software if it's available. Although Guix is fundamentally a build-from-source distro, Guix also runs a continuous builder service to make sure that binaries are available, if you trust the machine building the binaries of course. To do that, we tell the daemon to trust the public key of the project's build farm:

guix archive --authorize < ~root/.guix-profile/share/guix/hydra.gnu.org.pub

as a user

OK now we have Guix installed. Running Guix commands will install things into the store as needed, and populate the forest of symlinks in the current user's $HOME/.guix-profile. So probably what you want to do is to run, as your user:

/var/guix/profiles/per-user/root/guix-profile/bin/guix \
  package --install guix

This will make Guix available in your own user's profile. From here you can begin to install software; for example, if you run

guix package --install emacs

you'll then have an emacs in ~/.guix-profile/bin/emacs which you can run. Pretty cool stuff.
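
And because each install or upgrade just builds a new symlink forest and flips a pointer, package operations are transactional and reversible, which is exactly the property whose absence I was lamenting above:

guix package --list-generations   # each transaction creates a new generation
guix package --roll-back          # return the profile to the previous generation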

back on the horse

So what does it mean for software development? Well, when I develop software, I usually want to know exactly what the inputs are, and to not have inputs to the build process that I don't control, and not have my build depend on unrelated software upgrades on my system. That's what Guix provides for me. For example, when I develop V8, I just need a few things. In fact I need these things:

;; Save as ~/src/profiles/v8.scm
(use-package-modules gcc llvm base python version-control less ccache)

(packages->manifest
 (list clang
       (list gcc-4.9 "lib")
       python-2 git less ccache))

This set of Guix packages is what it took for me to set up a V8 development environment. I can make a development environment containing only these packages and no others by saving the above file as v8.scm and then sourcing this script:

~/.guix-profile/bin/guix package -p ~/src/profiles/v8 -m ~/src/profiles/v8.scm
eval `~/.guix-profile/bin/guix package -p ~/src/profiles/v8 --search-paths`
export GYP_DEFINES='linux_use_bundled_gold=0 linux_use_gold_flags=0 linux_use_bundled_binutils=0'
export CXX='ccache clang++'
export CC='ccache clang'
export LD_LIBRARY_PATH=$HOME/src/profiles/v8/lib

Let's take this one line at a time. The first line takes my manifest -- the set of packages that collectively form my build environment -- and arranges to populate a symlink forest at ~/src/profiles/v8.

$ ls -l ~/src/profiles/v8/
total 44
dr-xr-xr-x  2 root guixbuild  4096 Jan  1  1970 bin
dr-xr-xr-x  2 root guixbuild  4096 Jan  1  1970 etc
dr-xr-xr-x  4 root guixbuild  4096 Jan  1  1970 include
dr-xr-xr-x  2 root guixbuild 12288 Jan  1  1970 lib
dr-xr-xr-x  2 root guixbuild  4096 Jan  1  1970 libexec
-r--r--r--  2 root guixbuild  4138 Jan  1  1970 manifest
lrwxrwxrwx 12 root guixbuild    59 Jan  1  1970 sbin -> /gnu/store/1g78hxc8vn7q7x9wq3iswxqd8lbpfnwj-glibc-2.21/sbin
dr-xr-xr-x  6 root guixbuild  4096 Jan  1  1970 share
lrwxrwxrwx 12 root guixbuild    58 Jan  1  1970 var -> /gnu/store/1g78hxc8vn7q7x9wq3iswxqd8lbpfnwj-glibc-2.21/var
lrwxrwxrwx 12 root guixbuild    82 Jan  1  1970 x86_64-unknown-linux-gnu -> /gnu/store/wq6q6ahqs9rr0chp97h461yj8w9ympvm-binutils-2.25/x86_64-unknown-linux-gnu

So that's totally scrolling off the right for you, that's the thing about Nix and Guix names. What it means is that I have a tree of software, and most directories contain a union of links from various packages. It so happens that sbin though just has links from glibc, so it links directly into the store. Anyway. The next line in my script arranges to point my shell into that environment.

$ guix package -p ~/src/profiles/v8 --search-paths
export PATH="/home/wingo/src/profiles/v8/bin:/home/wingo/src/profiles/v8/sbin"
export CPATH="/home/wingo/src/profiles/v8/include"
export LIBRARY_PATH="/home/wingo/src/profiles/v8/lib"
export LOCPATH="/home/wingo/src/profiles/v8/lib/locale"
export PYTHONPATH="/home/wingo/src/profiles/v8/lib/python2.7/site-packages"

Having sourced this into my environment, my shell's ls for example now points into my new profile:

$ which ls
/home/wingo/src/profiles/v8/bin/ls
Neat. Next we have some V8 defines. On x86_64 on Linux, v8 wants to use some binutils things that it bundles itself, but oddly enough for months under Debian I was seeing spurious intermittent segfaults while linking with their bundled gold linker binary. I don't want to use their idea of what a linker is anyway, so I set some defines to make v8's build tool use Guix's linker. (Incidentally, figuring out what those defines were took spelunking through makefiles, to gyp files, to the source of gyp itself, to the source of the standard shlex Python module to figure out what delimiters shlex.split actually splits on... yaaarrggh!)

Then some defines to use ccache, then a strange thing: what's up with that LD_LIBRARY_PATH?

Well. I'm not sure. However the normal thing for dynamic linking under Linux is that you end up with binaries that are just linked against e.g. libc.so.6, wherever the system happens to find it. That's not what we want in Guix -- we want to link against a specific version of every dependency, not just any old version. Guix's builders normally do this when building software for Guix, but somehow in this case I haven't managed to make that happen, so the binaries that are built as part of the build process can end up not specifying the path of the libraries they are linked to. I don't know whether this is an issue with v8's build system, that it doesn't want to work well with Nix / Guix, or if it's something else. Anyway I hack around it by assuming that whatever's in my artisanally assembled symlink forest ("profile") is the right thing, so I set it as the search path for the dynamic linker. Suggestions welcome here.
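
One way to check what a built binary actually records, for the curious; the out/x64.release/d8 path is just V8's usual build output and may differ on your setup:

# Which sonames does it need, and does it record an rpath/runpath?
readelf -d out/x64.release/d8 | grep -E 'NEEDED|RPATH|RUNPATH'
# Which dependencies resolve outside the store?
ldd out/x64.release/d8 | grep -v /gnu/store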

And from here... well it just works! I've gained the ability to precisely specify a reproducible build environment for the software I am working on, which is entirely separated from the set of software that I have installed on my system, which I can reproduce precisely with a script, and yet which is still part of my system -- I'm not isolated from it by container or VM boundaries (though I can be; see NixOps for more in that direction).

OK I lied a little bit. I had to apply this patch to V8:

$ git diff
diff --git a/build/standalone.gypi b/build/standalone.gypi
index 2bdd39d..941b9d7 100644
--- a/build/standalone.gypi
+++ b/build/standalone.gypi
@@ -98,7 +98,7 @@
         ['OS=="win"', {
           'gomadir': 'c:\\goma\\goma-win',
         }, {
-          'gomadir': '<!(/bin/echo -n ${HOME}/goma)',
+          'gomadir': '<!(/usr/bin/env echo -n ${HOME}/goma)',
         ['host_arch!="ppc" and host_arch!="ppc64" and host_arch!="ppc64le"', {
           'host_clang%': '1',

See? Because my system is NixOS, there is no /bin/echo. It does helpfully install a /usr/bin/env though, which other shell invocations in this build script use, so I use that instead. I mention this as an example of what works and what workarounds there are.

dpkg --purgatory

So now I have NixOS as my OS, and I mostly use Guix for software development. This is a new setup and we'll see how it works in practice.

Installing NixOS on top of Debian was a bit irritating. I ended up making a bootable USB installation image, then installing over to my Debian partition, happy in the idea that it wouldn't conflict with my system. But in that I forgot about /etc and /var and all that. So I copied /etc to /etc-debian, just as a backup, and NixOS appeared to install fine. However it wouldn't boot, and that's because some systemd state from my old /etc which was still in place conflicted with... something? In the end I redid the install, moving my old /usr, /etc and such directories to backup names and letting NixOS have control. That worked fine.

I have GuixSD on a laptop but I really don't recommend it right now -- not unless you have time and are willing to hack on it. But that's OK, install NixOS and you'll be happy on the system side, and if you want Guix you can install it as a user.

Comments and corrections welcome, and happy hacking!

by Andy Wingo at August 04, 2015 04:23 PM

July 28, 2015

Andy Wingo: loop optimizations in guile

(Andy Wingo)

Sup peeps. So, after the slog to update Guile's intermediate language, I wanted to land some new optimizations before moving on to the next thing. For years I've been meaning to do some loop optimizations, and I was finally able to land a few of them.

loop peeling

For a long time I have wanted to do "loop peeling". Loop peeling means peeling off the first iteration of a loop. If you have a source program that looks like this:

while foo:
  bar()
  baz()

Loop peeling turns it into this:

if foo:
  bar()
  baz()
  while foo:
    bar()
    baz()

You wouldn't think that this is actually an optimization, would you? Well on its own, it's not. But if you combine it with common subexpression elimination, then it means that the loop body is now dominated by all effects and all loop-invariant expressions that must be evaluated for the expression to loop.

In dynamic languages, this is most useful when one source expression expands to a number of low-level steps. So for example if your language runtime implements top-level variable references in three parts, one where it gets a reference to a mutable box, a second where it checks if the box has a value, and a third where it unboxes it, then we would have:

if foo:
  bar_location = lookup("bar")
  bar_value = dereference(bar_location)
  if bar_value is null: throw NotFound("bar")

  baz_location = lookup("baz")
  baz_value = dereference(baz_location)
  if baz_value is null: throw NotFound("baz")

  while foo:
    bar_value = dereference(bar_location)

    baz_value = dereference(baz_location)

The result is that we have hoisted the lookups and null checks out of the loop (if a box can never transition from full back to empty). It's a really powerful transformation that can even hoist things that traditional loop-invariant code motion can't, but more on that later.

Now, the problem with loop peeling is that usually values will escape your loop. For example:

while foo:
  x = qux()
  if x then return x

In this little example, there is a value x, and the return x statement is actually not in the loop. It's syntactically in the loop, but the underlying representation that the compiler uses looks more like this:

function qux(k):
  label loop_header():
    fetch(foo) -> loop_test
  label loop_test(foo_value):
    if foo_value then -> exit else -> body
  label body():
    fetch(x) -> have_x
  label have_x(x_value):
    if x_value then -> return_x else -> loop_header
  label return_x():
    values(x) -> k
  label exit():

This is the "CPS soup" I described in my last post. Only the loop_header, loop_test, body, and have_x labels are in the loop; notably, the return is outside the loop. Point being, if we peel off the first iteration, then there are two possible values for x that we would return:

if foo:
  x1 = qux()
  if x1 then return x1
  while foo:
    x2 = qux()
    if x2 then return x2

I have them marked as x1 and x2. But I've also duplicated the return x terms, which is not what we want. We want to peel off the first iteration, which will cause code growth equal to the size of the loop body, but we don't want to have to duplicate everything that's after the loop. What we have to do is re-introduce a join point that defines x:

if foo:
  x1 = qux()
  if x1 then join(x1)
  while foo:
    x2 = qux()
    if x2 then join(x2)
label join(x)
  return x

Here I'm playing fast and loose with notation because the real terms are too gnarly. What I'm trying to get across is that for each value that flows out of a loop, you need a join point. That's fine, it's a bit more involved, but what if your loop exits to two different points, but one value is live in both of them? A value can only be defined in one place, in CPS or SSA. You could replace a whole tree of phi variables, in SSA parlance, with join blocks and such, but it's just too hard.

However we can still get the benefits of peeling in most cases if we restrict ourselves to loops that exit to only one continuation. In that case the live variable set is the intersection of all variables defined in the loop that are live at the exit points. Easy enough, and that's what we have in Guile now. Peeling causes some code growth but the loops are smaller so it should still be a win. Check out the source, if that's your thing.

loop-invariant code motion

Usually when people are interested in moving code out of loops they talk about loop-invariant code motion, or LICM. Contrary to what you might think, LICM is complementary to peeling: some things that peeling+CSE can hoist are not hoistable by LICM, and vice versa.

Unlike peeling, LICM does not cause code growth. Instead, for each expression in a loop, LICM tries to hoist it out of the loop if it can. An expression can be hoisted if all of these conditions are true:

  1. It doesn't cause the creation of an observably new object. In Scheme, the definition of "observable" is quite subtle, so in practice in Guile we don't hoist expressions that can cause any allocation. We could use alias analysis to improve this.

  2. The expression cannot throw an exception, or the expression is always evaluated for every loop iteration.

  3. The expression makes no writes to memory, or if it writes to memory, other expressions in the loop cannot possibly read from that memory. We use effects analysis for this.

  4. The expression makes no reads from memory, or if it reads from memory, no other expression in the loop can clobber those reads. Again, effects analysis.

  5. The expression uses only loop-invariant variables.

This definition is inductive, so once an expression is hoisted, the values it defines are then considered loop-invariant, so you might be able to hoist a whole chain of values.
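
As a tiny illustration of that inductive step, in the same pseudocode as above (consume stands in for some effect-free use of b):

while foo:
  a = x + 1
  b = a * 2
  consume(b)

Once a = x + 1 is hoisted (x is never written in the loop), a is loop-invariant, so b = a * 2 can be hoisted too:

a = x + 1
b = a * 2
while foo:
  consume(b)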

Compared to loop peeling, this has the gnarly aspect of having to explicitly reason about loop invariance and manually move code, which is a pain. (Really LICM would be better named "artisanal code motion".) However it causes no code growth, which is a plus, though like peeling it can increase register pressure. But the big difference is that LICM can hoist effect-free expressions that aren't always executed. Consider:

while foo:
  x = qux() ? "hi" : "ho"

Here for some reason it could be faster to cache "hi" or "ho" in registers, which is what LICM allows:

hi, ho = "hi", "ho"
while foo:
  x = qux() ? hi : ho

On the other hand, LICM alone can't hoist the if baz is null checks in this example from above:

while foo:
  bar()
  baz()

The issue is that the call to bar() might not return, so the error that might be thrown if baz is null shouldn't be observed until bar is called. In general we can't hoist anything that might throw an exception past some non-hoisted code that might throw an exception. This specific situation happens in Guile but there are similar ones in any language, I think.

More formally, LICM will hoist effectful but loop-invariant expressions that postdominate the loop header, whereas peeling hoists those expressions that dominate all back-edges. I think? We'll go with that. Again, the source.

loop inversion

Loop inversion is a little hack to improve code generation, and again it's a little counterintuitive. If you have this loop:

while n < x:
  n = n + 1

Loop inversion turns it into:

if n < x:
  do:
    n = n + 1
  while n < x

The goal is that instead of generating code that looks like this:

header:
  test n, x;
  branch-if-greater-than-or-equal done;
  n = n + 1
  goto header

You make something that looks like this:

  test n, x;
  branch-if-greater-than-or-equal done;
header:
  n = n + 1
  test n, x;
  branch-if-less-than header;

The upshot is that the loop body now contains one branch instead of two. It's mostly helpful for tight loops.

It turns out that you can express this transformation on CPS (or SSA, or whatever), but that like loop peeling the extra branch introduces an extra join point in your program. If your loop exits to more than one label, then we have the same problems as loop peeling. For this reason Guile restricts loop inversion (which it calls "loop rotation" at the moment; I should probably fix that) to loops with only one exit continuation.

Loop inversion has some other caveats, but probably the biggest one is that in Guile it doesn't actually guarantee that each back-edge is a conditional branch. The reason is that usually a loop has some associated loop variables, and it could be that you need to reshuffle those variables when you jump back to the top. Mostly Guile's compiler manages to avoid shuffling, allowing inversion to have the right effect, but it's not guaranteed. Fixing this is not straightforward, since the shuffling of values is associated with the predecessor of the loop header and not the loop header itself. If instead we reshuffled before the header, that might work, but each back-edge might have a different shuffling to make... anyway. In practice inversion seems to work out fine; I haven't yet seen a case where it doesn't work. Source code here.

loop identification

One final note: what is a loop anyway? Turns out this is a somewhat hard problem, especially once you start trying to identify nested loops. Guile currently does the simple thing and just computes strongly-connected components in a function's flow-graph, and says that a loop is a non-trivial SCC with a single predecessor. That won't tease apart loop nests but oh wells! I spent a lot of time last year or maybe two years ago with that "Loop identification via D-J graphs" paper but in the end simple is best, at least for making incremental steps.

Okeysmokes, until next time, loop on!

by Andy Wingo at July 28, 2015 08:10 AM

July 27, 2015

Andy Wingo: cps soup

(Andy Wingo)

Hello internets! This blog goes out to my long-time readers who have followed my saga hacking on Guile's compiler. For the rest of you, a little history, then the new thing.

In the olden days, Guile had no compiler, just an interpreter written in C. Around 8 years ago now, we ported Guile to compile to bytecode. That bytecode is what is currently deployed as Guile 2.0. For many reasons we wanted to upgrade our compiler and virtual machine for Guile 2.2, and the result of that was a new continuation-passing-style compiler for Guile. Check that link for all the backstory.

So, I was going to finish documenting this intermediate language about 5 months ago, in preparation for making the first Guile 2.2 prereleases. But something about it made me really unhappy. You can catch some foreshadowing of this in my article from last August on common subexpression elimination; I'll just quote a paragraph here:

In essence, the scope tree doesn't necessarily reflect the dominator tree, so not all transformations you might like to make are syntactically valid. In Guile 2.2's CSE pass, we work around the issue by concurrently rewriting the scope tree to reflect the dominator tree. It's something I am seeing more and more and it gives me some pause as to the suitability of CPS as an intermediate language.

This is exactly the same concern that Matthew Fluet and Stephen Weeks had back in 2003:

Thinking of it another way, both CPS and SSA require that variable definitions dominate uses. The difference is that using CPS as an IL requires that all transformations provide a proof of dominance in the form of the nesting, while SSA doesn't. Now, if a CPS transformation doesn't do too much rewriting, then the partial dominance information that it had from the input tree is sufficient for the output tree. Hence tree splicing works fine. However, sometimes it is not sufficient.

As a concrete example, consider common-subexpression elimination. Suppose we have a common subexpression x = e that dominates an expression y = e in a function. In CPS, if y = e happens to be within the scope of x = e, then we are fine and can rewrite it to y = x. If however, y = e is not within the scope of x, then either we have to do massive tree rewriting (essentially making the syntax tree closer to the dominator tree) or skip the optimization. Another way out is to simply use the syntax tree as an approximation to the dominator tree for common-subexpression elimination, but then you miss some optimization opportunities. On the other hand, with SSA, you simply compute the dominator tree, and can always replace y = e with y = x, without having to worry about providing a proof in the output that x dominates y (i.e. without putting y in the scope of x)

[MLton-devel] CPS vs SSA

To be honest I think all this talk about dominators is distracting. Dominators are but a lightweight flow analysis, and I usually find myself using full-on flow analysis to compute the set of optimizations that I can do on a piece of code. In fact the only use I had for dominators in the nested CPS language was to rewrite scope trees! The salient part of Weeks' observation is that nested scope trees are the problem, not that dominators are the solution.

So, after literally years of hemming and hawing about this, I finally decided to remove nested scope trees from Guile's CPS intermediate language. Instead, a function is now a collection of labelled continuations, with one distinguished entry continuation. There is no more $letk term to nest continuations in each other. A program is now represented as a "soup" -- basically a map from labels to continuation bodies, again with a distinguished entry. As an example, consider this expression:

  return add(x, 1)

If we rewrote it in continuation-passing style, we'd give the function a name for its "tail continuation", ktail, and annotate each expression with its continuation:

function(ktail, x):
  add(x, 1) -> ktail

Here the -> ktail means that the add expression passes its values to the continuation ktail.

With nested CPS, it could look like:

function(ktail, x):
  letk have_one(one): add(x, one) -> ktail
    load_constant(1) -> have_one

Here the label have_one is in a scope, as is the value one. With "CPS soup", though, it looks more like this:

function(ktail, x):
  label have_one(one): add(x, one) -> ktail
  label main(x): load_constant(1) -> have_one

It's a subtle change, but it took a few months to make so it's worth pointing out what's going on. The difference is that there is no scope tree for labels or variables any more. A variable can be used at a label if it flows to the label, in a flow analysis sense. Indeed, determining the set of variables that can be used at a label requires flow analysis; that's what Weeks was getting at in his 2003 mail about the advantages of SSA, which are really the advantages of an intermediate language without nested scope trees.

The question arises, though, now that we've decided on CPS soup, how should we represent a program as a value? We've gone from a nested term to a graph term, and we need to find a way to represent it somehow that facilitates looking up labels by name, and facilitates tree rewrites.

In Guile's IR, labels and variables are both integers, so happily enough, we have such a data structure: Clojure-style maps specialized for integer keys.

Friends, if there has been one realization or revolution for me in the last year, it has been Clojure-style data structures. Here's why. In compilers, I often have to build up some kind of analysis, then use that analysis to transform data. Often I need to keep the old term around while I build a new one, but it would be nice to share state between old and new terms. With a nested tree, if a leaf changed you'd have to rebuild all surrounding terms, which is gnarly. But with Clojure-style data structures, more and more I find myself computing in terms of values: build up this value, transform this map to that set, fold over this map -- and yes, you can fold over Guile's intmaps -- and so on. By providing an expressive data structure for which I can control performance characteristics by using transients if needed, these data structures make my programs more about data and less about gnarly machinery.
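
To make that concrete, here is a small sketch of computing with Guile's intmaps; the module and procedure names are those used in Guile 2.2's compiler ((language cps intmap)), but treat the details as illustrative:

(use-modules (language cps intmap))

;; Persistent: intmap-add returns a new map, leaving the old one intact.
(define m0 empty-intmap)
(define m1 (intmap-add m0 0 'entry))
(define m2 (intmap-add m1 1 'have-one))

(intmap-ref m2 1)   ;; => have-one

;; And yes, you can fold over an intmap, as the compiler passes do:
(intmap-fold (lambda (label cont acc) (cons label acc)) m2 '())
;; => (1 0)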

As a concrete example, take the old contification pass in Guile: I didn't have the mental capacity to understand all the moving parts in such a way that I could compute an optimal contification from the beginning; instead I had to iterate to a fixed point, as Kennedy did in his "Compiling with Continuations, Continued" paper. With the new CPS soup language and with Clojure-style data structures, I could actually fit more of the algorithm into my head, with the result that Guile now contifies optimally while avoiding the fixed-point transformation. Also, the old pass used hash tables to represent the analysis, which I found incredibly confusing to reason about -- I totally buy Rich Hickey's argument that place-oriented programming is the source of many evils in programs, and hash tables are nothing if not a place party. Using functional maps let me solve harder problems because they are easier for me to reason about.

Contification isn't an isolated case, either. For example, we are able to do the complete set of optimizations from the "Optimizing closures in O(0) time" paper, including closure sharing, which I think makes Guile unique besides Chez Scheme. I wasn't capable of doing it on the old representation because it was just too hard for me to think about, because my data structures weren't right.

This new "CPS soup" language is still a first-order CPS language in that each term specifies its continuation, and that variable names appear in the continuation of a definition, not the definition itself. This effectively makes every variable a phi variable, in the sense of SSA, and you have to do some work to get to a variable's definition. It could be that this still isn't the right number of names; consider this function:

function foo(k, x):
  label have_y(y) bar(y) -> k
  label y_is_two() load_constant(2) -> have_y
  label y_is_one() load_constant(1) -> have_y
  label main(x) if x -> y_is_one else -> y_is_two

Here there is no distinguished name for the value load_constant(1) versus load_constant(2): both are possible values for y. If we ended up giving them names, we'd have to reintroduce actual phi variables for the joins, which would basically complete the transformation to SSA. Until now though I haven't wanted those names, so perhaps I can put this off. On the other hand, every term has a label, which simplifies many things compared to having to contain terms in basic blocks, as is usually done in SSA. Yet another chapter in CPS is SSA is CPS is SSA, it seems.

Welp, that's all the nerdery for right now. Talk at yall later!

by Andy Wingo at July 27, 2015 02:43 PM

July 23, 2015

Thiago Santos: Caps negotiation analysis with GstTracer


GstTracer is a yet-to-be-merged (post-1.6) addition to GStreamer core that exposes events happening in the pipeline to tracer plugins. Examples of such events are:

  • Events/buffers/queries being pushed on pads
  • Elements/pads being created
  • Messages posted on the bus

The notification for those events happens live during pipeline execution, and the plugins can react to it. The GstTracer discussion is happening in Bugzilla, and the latest version, while not yet merged, can be found here.

stats, for example, is a tracer plugin that logs all those notifications to GST_DEBUG using a structured output. This output can be very useful for post-analysis of a pipeline execution.

Analyzing caps negotiation

By using the stats output, it was possible to analyze the caps-related queries performed in the pipeline and organize this information for two purposes:

  • Create a caps query call tree: the sequence of caps and accept-caps queries can be put together in a tree of calls, making it possible to check how a caps query travels and transforms itself from element to element in the pipeline.
  • Count the number of repeated caps queries: queries have a filter caps and a result caps, so it is possible to identify how many of those are repeated on the pads.

The results:

The caps query trees are printed to stdout and, while still not beautiful, already depict the sequence of queries made:

To make the output easier to follow, the caps strings have been replaced in the following excerpt. Notice that each line contains the timestamp when the query happened, the type of query, the pad that received the query, and then the query parameters and result. Indented lines mean the inner query was made as a result of the outer query.

0:00:00.136669408 : accept-caps : avdec_aac0(15):sink(26) - caps: A : res: True  
    0:00:00.136730979 : query-caps : avdec_aac0(15):sink(26) - filter: A : res: A
        0:00:00.136789273 : query-caps : --4294967295--(4294967295):proxypad3(21) - filter: B
            0:00:00.136834046 : query-caps : pulsesink0(13):sink(23) - filter: B

If you want to see how it really looks, here it is:

0:00:00.136669408 : accept-caps : avdec_aac0(15):sink(26) - caps: audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, level=(string)4, base-profile=(string)lc, profile=(string)lc, codec_data=(buffer)11b0, rate=(int)48000, channels=(int)6, channel-mask=(bitmask)0x0000000000000000 : res: True  
    0:00:00.136730979 : query-caps : avdec_aac0(15):sink(26) - filter: audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, level=(string)4, base-profile=(string)lc, profile=(string)lc, codec_data=(buffer)11b0, rate=(int)48000, channels=(int)6, channel-mask=(bitmask)0x0000000000000000 : res: audio/mpeg, rate=(int)48000, channels=(int)6, mpegversion=(int)4, stream-format=(string)raw, framed=(boolean)true, level=(string)4, base-profile=(string)lc, profile=(string)lc, codec_data=(buffer)11b0, channel-mask=(bitmask)0x0000000000000000
        0:00:00.136789273 : query-caps : --4294967295--(4294967295):proxypad3(21) - filter: audio/x-raw, rate=(int)48000, channels=(int)6 : res: audio/x-raw, rate=(int)48000, channels=(int)6, format=(string){ S16LE, S16BE, F32LE, F32BE, S32LE, S32BE, S24LE, S24BE, S24_32LE, S24_32BE, U8 }, layout=(string)interleaved
            0:00:00.136834046 : query-caps : pulsesink0(13):sink(23) - filter: audio/x-raw, rate=(int)48000, channels=(int)6 : res: audio/x-raw, rate=(int)48000, channels=(int)6, format=(string){ S16LE, S16BE, F32LE, F32BE, S32LE, S32BE, S24LE, S24BE, S24_32LE, S24_32BE, U8 }, layout=(string)interleaved

All the caps query call trees are output as a result of the analysis tool. The second part of the result is the count of the repeated caps queries per pad and looks like this:

    filter: NULL
    caps: audio/x-raw, format=(string){ S16LE, S16BE, F32LE, F32BE, S32LE, S32BE, S24LE, S24BE, S24_32LE, S24_32BE, U8 }, layout=(string)interleaved, rate=(int)[ 1, 2147483647 ], channels=(int)[ 1, 32 ]; audio/x
-alaw, rate=(int)[ 1, 2147483647 ], channels=(int)[ 1, 32 ]; audio/x-mulaw, rate=(int)[ 1, 2147483647 ], channels=(int)[ 1, 32 ]
    res: True
    Repeated: 11 (total time: 21341790ns)

So pulsesink received the same caps query, with filter=NULL, 11 times, and returned the same caps as a result every time. Even though the total time is only around 21ms, it is a bit concerning that the same operation was repeated 11 times. Those tests were run on a Samsung Series 5 Ultra; the impact might be greater on slower devices.

How to use it

Step 1: getting a trace: First of all, let's get a tracer stats log out of a pipeline run.

GST_TRACE="stats" GST_DEBUG=GST_TRACER:9 gst-launch-1.0 playbin uri=<someuri> > gsttracer_stats.log 2>&1

This will run playbin with the stats tracer enabled and dump the output to GST_DEBUG (which is enabled only for the tracer logs). The result will be in the gsttracer_stats.log file.

Step 2: parsing the trace: For that, a Python script was written and can be found here. The code is still under development but is already useful for generating the results above.

More work

One thing still not solved is how to track the caps event. The caps and accept-caps queries are not serialized with the data flow, so they always happen on the same thread, which makes them easy to track. The caps event, however, is serialized and will change threads with the data flow. How do we track it? Another good addition would be to analyze cases of not-negotiated failures and pinpoint exactly which caps query tree failed.

Feel free to stop by GitHub or talk to me on IRC (thiagoss in #gstreamer @ freenode) if you have any comments or suggestions.

by Thiago Santos at July 23, 2015 06:12 PM

July 17, 2015

GStreamer: GStreamer Conference 2015 - Call for Papers


This is a formal call for papers (talks) for the GStreamer Conference 2015, which will take place on 8-9 October 2015 in Dublin (Ireland), and will be co-hosted with the Embedded Linux Conference Europe (ELCE) and LinuxCon Europe.

The GStreamer Conference is a conference for developers, community members, decision-makers, industry partners, and anyone else interested in the GStreamer multimedia framework and open source multimedia.

The call for papers is now open and talk proposals can be submitted.

You can find more details about the conference on the GStreamer Conference 2015 web page.

Talk slots will be available in varying durations from 20 minutes up to 45 minutes. Whatever you're doing or planning to do with GStreamer, we'd like to hear from you!

We also plan on having another session with short lightning talks / demos / showcase talks for those who just want to show what they've been working on or do a mini-talk instead of a full-length talk.

The deadline for talk submissions is Sunday 9 August 2015.

We hope to see you in Dublin!

July 17, 2015 12:00 PM

July 03, 2015

Andy WingoPfmatch, a packet filtering language embedded in Lua

(Andy Wingo)

Greets, hackers! I just finished implementing a little embedded language in Lua and wanted to share it with you. First, a bit about the language, then some notes on how it works with Lua to reach the high performance targets of Snabb Switch.

the pfmatch language

Pfmatch is a language designed for filtering, classifying, and dispatching network packets in Lua. Pfmatch is built on the well-known pflang packet filtering language, using the fast pflua compiler for LuaJIT.

Here's an example of a simple pfmatch program that just divides up packets depending on whether they are TCP, UDP, or something else:

match {
   tcp => handle_tcp
   udp => handle_udp
   otherwise => handle_other
}

Unlike pflang filters written for such tools as tcpdump, a pfmatch program can dispatch packets to multiple handlers, potentially destructuring them along the way. In contrast, a pflang filter can only say "yes" or "no" on a packet.

Here's a more complicated example that passes all non-IP traffic, drops all IP traffic that is not going to or coming from certain IP addresses, and calls a handler on the rest of the traffic.

match {
   not ip => forward
   ip src => incoming_ip
   ip dst => outgoing_ip
   otherwise => drop
}

In the example above, the handlers after the arrows (forward, incoming_ip, outgoing_ip, and drop) are Lua functions. The part before the arrow (not ip and so on) is a pflang expression. If the pflang expression matches, its handler will be called with two arguments: the packet data and the length. For example, if the not ip pflang expression is true on the packet, the forward handler will be called.

It's also possible for the handler of an expression to be a sub-match:

match {
   not ip => forward
   ip src => {
      tcp => incoming_tcp(&ip[0], &tcp[0])
      udp => incoming_udp(&ip[0], &udp[0])
      otherwise => incoming_ip(&ip[0])
   }
   ip dst => {
      tcp => outgoing_tcp(&ip[0], &tcp[0])
      udp => outgoing_udp(&ip[0], &udp[0])
      otherwise => outgoing_ip(&ip[0])
   }
   otherwise => drop
}

As you can see, the handlers can also have additional arguments, beyond the implicit packet data and length. In the above example, if not ip doesn't match, ip src matches, and then tcp matches, the incoming_tcp function will be called with four arguments: the packet data as a uint8_t* pointer, its length in bytes, the offset of byte 0 of the IP header, and the offset of byte 0 of the TCP header. An argument to a handler can be any arithmetic expression of pflang; in this case &ip[0] is actually an extension. More on that later. For language lawyers, check the syntax and semantics over in our source repo.

Thanks especially to my colleague Katerina Barone-Adesi for long backs and forths about the language design; they really made it better. Fistbump!

pfmatch and lua

The challenge of designing pfmatch is to gain expressiveness, compared to writing filters by hand, while not endangering the performance targets of Pflua and Snabb Switch. These days Snabb is on target to give ASIC-driven network appliances a run for their money, so anything we come up with cannot sacrifice speed.

In practice what this means is compile, don't interpret. Using the pflua compiler allows us to generalize the good performance that we have gotten on pflang expressions to a multiple-dispatch scenario. It's a pretty straightforward strategy. Naturally though, the interface with Lua is more complex now, so to understand the performance we should understand the interaction with Lua.

How does one make two languages interoperate, anyway? With pflang it's pretty clear: you compile pflang to a Lua function, and call the Lua function to match on packets. It returns true or false. It's a thin interface. Indeed with pflang and pflua you could just match the clauses in order:

not_ip = pf.compile('not ip')
incoming = pf.compile('ip src')
outgoing = pf.compile('ip dst')

function handle(packet, len)
   if not_ip(packet, len) then return forward(packet, len)
   elseif incoming(packet, len) then return incoming_ip(packet, len)
   elseif outgoing(packet, len) then return outgoing_ip(packet, len)
   else return drop(packet, len) end
end

But not only is this tedious, you don't get easy access to the packet itself, and you're missing out on opportunities for optimization. For example, if the packet fails the not_ip check, we don't need to check if it's an IP packet in the incoming check. Compiling a pfmatch program takes advantage of pflua's optimizer to produce good code for the match expression as a whole.

If this were Scheme I would make the right-hand side of an arrow be an expression and implement pfmatch as a macro; see Racket's match documentation for an example. In Lua or other languages that's harder to do; you would have to parse Lua, and it's not clear which parts of the production as a whole are the host language (Lua) and which are the embedded language (pfmatch).

Instead, I think embedding host language snippets by function name is a fine solution. It seems fairly clear that incoming_ip, for example, is some kind of function. It's easy to parse identifiers in an embedded language, both for humans and for programs, so that takes away a lot of implementation headache and cognitive overhead.

We are left with a few problems: how to map names to functions, what to do about the return value of match expressions, and how to tie it all together in the host language. Again, if this were Scheme then I'd use macros to embed expressions into the pfmatch term, and their names would be scoped into whatever environment the match term was defined. In Lua, the best way to implement a name/value mapping is with a table. So we have:

local handlers = {
   forward = function(data, len) end,
   drop = function(data, len) end,
   incoming_ip = function(data, len) end,
   outgoing_ip = function(data, len) end
}

Then we will pass the handlers table to the matcher function, and the matcher function will call the handlers by name. LuaJIT will mostly take care of the overhead of the table dispatch. We compile the filter like this:

local match = require('pf.match')

local dispatcher = match.compile([[match {
   not ip => forward
   ip src => incoming_ip
   ip dst => outgoing_ip
   otherwise => drop
}]])

To use it, you just invoke the dispatcher with the handlers, data, and length, and the return value is whatever the handler returns. Here let's assume it's a boolean.

function loop(self)
   local i, o = self.input.input, self.output.output
   while not link.empty(i) do
      local pkt = link.receive(i)
      if dispatcher(handlers,, pkt.length) then
         link.transmit(o, pkt)
      end
   end
end
Finally, we're ready for an example of a compiled matcher function. Here's what pflua does with the match expression above:

local cast = require("ffi").cast
return function(self,P,length)
   if length < 14 then return self.forward(P, length) end
   if cast("uint16_t*", P+12)[0] ~= 8 then return self.forward(P, length) end
   if length < 34 then return self.drop(P, length) end
   if P[23] ~= 6 then return self.drop(P, length) end
   if cast("uint32_t*", P+26)[0] == 67305985 then return self.incoming_ip(P, length) end
   if cast("uint32_t*", P+30)[0] == 134678021 then return self.outgoing_ip(P, length) end
   return self.drop(P, length)
end

The result is a pretty good dispatcher. There are always things to improve, but it's likely that the function above is better than what you would write by hand, and it will continue to get better as pflua improves.

Getting back to what I mentioned earlier, when we write filtering code by hand, we inevitably end up writing interpreters for some kind of filtering language. Network functions are essentially linguistic in nature: static appliances are no good because network topologies change, and people want solutions that reflect their problems. Usually this means embedding an interpreter for some embedded language, for example BPF bytecode or iptables rules. Using pflua and pfmatch expressions, we can instead compile a filter suited directly for the problem at hand -- and while we're at it, we can forget about worrying about pesky offsets, constants, and bit-shifts.


challenges

I'm optimistic about pfmatch or something like it being a success, but there are some challenges too.

One challenge is that pflang is pretty weird. For example, attempting to access ip[100] will abort a filter immediately on a packet that is less than 100 bytes long, not including L2 encapsulation. It's wonky semantics, and in the context of pfmatch, aborting the entire pfmatch program would obviously be the wrong thing. That would abort too much. Instead it should probably just fail the pflang test in which that packet access appears. To this end, in pfmatch we turn those aborts into local expression match failures. However, this leads to an inconsistency with pflang. For example in (ip[100000] == 0 or (1==1)), instead of ip[100000] causing the whole pflang match to fail, it just causes the local test to fail. This leaves us with 1==1, which passes. We abort too little.

This inconsistency is probably a bug. We want people to be able to test clauses with vanilla pflang expressions, and have the result match the pfmatch behavior. Due to limitations in some of pflua's intermediate languages, it's likely to persist for a while. It is the only inconsistency that I know of, though.

Pflang is also underpowered in many ways. It has terrible IPv6 support; for example, tcp[0] only matches IPv4 packets, and at least as implemented in libpcap, most payload access on IPv6 packets does the wrong thing regarding chained extension headers. There is no facility in the language for binding names to intermediate results, there is no linguistic facility for talking about fragmentation, no ability to address IP source and destination addresses in arithmetic expressions by name, and so on. We can solve these in pflua with extensions to the language, but that introduces incompatibilities with pflang.

You might wonder why to stick with pflang, after all of this. If this is you, Juho Snellman wrote a great article on this topic, just for you: What's wrong with pcap filters.

Pflua's optimizer has mostly helped us, but there have been places where it could be more helpful. When compiling just one expression, you can often end up figuring out which branches are dead-ends, which helps the rest of the optimization to proceed. With more than one successful branch, we had to make a few improvements to the optimizer to actually get decent results. We also had to relax one restriction on the optimizer: usually we only permit transformations that make the code smaller. This way we know we're going in the right direction and will eventually terminate. However because of reasons™ we did decide to allow tail calls to be duplicated, so instead of having just one place in the match function that tail-calls a handler, you can end up with multiple calls. I suspect using a tracing compiler will largely make this moot, as control-flow splits effectively lead to trace duplication anyway, and making sure control-flow joins later doesn't effectively counter that. Still, I suspect that the resulting trace shape will rejoin only at the loop head, instead of in some intermediate point, which is probably OK.


With all of these concerns, is pfmatch still a win? Yes, probably! We're going to start using it when building Snabb apps, and will see how it goes. We'll probably end up adding a few more pflang extensions before we're done. If it's something you're into, snabb-devel is the place to try it out, and see you on the bug tracker. Happy packet hacking!

by Andy Wingo at July 03, 2015 11:05 AM

June 26, 2015

GStreamerGStreamer development release binary builds


Pre-built binary images of the 1.5.2 development release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

June 26, 2015 10:30 PM

June 24, 2015

GStreamerGStreamer Core, Plugins, RTSP Server, Python, Editing Services, Validate 1.5.2 development release


The GStreamer team is pleased to announce the second release of the unstable 1.5 release series. The 1.5 release series is adding new features on top of the 1.0, 1.2 and 1.4 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework. The unstable 1.5 release series will lead to the stable 1.6 release series in the next weeks, and newly added API can still change until that point.

Binaries for Android, iOS, Mac OS X and Windows will be provided separately during the unstable 1.5 release series.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, or gst-validate, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, or gst-validate.

Check the release announcement mail for details and the release notes above for a list of changes.

June 24, 2015 11:59 PM

June 19, 2015

GStreamerGStreamer Conference 2015 Announced


The GStreamer project is happy to announce that this year's GStreamer Conference will take place on Thursday/Friday 8-9 October 2015 in Dublin, Ireland.

It will be co-hosted with the Embedded Linux Conference Europe and LinuxCon Europe.

You can (soon) find more details about the conference on the GStreamer Conference 2015 web page.

A call for papers will be sent shortly, and registration will also open soon. We will send another mail to gstreamer-announce for both, and will send updates there as well.

Talk slots will be available in varying durations from 20 minutes up to 45 minutes. Whatever you're doing or planning to do with GStreamer, we'd like to hear from you!

We also plan on having another session with short lightning talks / demos / showcase talks for those who just want to show what they've been working on or do a mini-talk instead of a full-length talk.

There will also be a social event again on Thursday evening.

We hope to see you in Dublin!

June 19, 2015 02:00 PM

June 18, 2015

Andy Wingoarrow functions coming to chrome 45!

(Andy Wingo)

It's been a long time coming, but I just flipped the bit in V8 that will ship arrow functions in Chrome 45! Woo hoo!

You probably know, but arrow functions are a new way to write functions in JavaScript. They look like this:

// Two arguments, body implicitly returned.
(x, y) => x + y

// With just one argument, no parentheses needed.
x => x * 2

// Body can have braces too; in that case use "return".
x => { return x * 2 }

Relative to the other kind of function that is written like function (x) { return x * 2 }, arrow functions don't define this or arguments in their bodies, instead capturing these values from the environment. There are a couple of other minor differences, too, but instead of writing about them here I'll just point to the great article by Jason Orendorff of the SpiderMonkey team.

Arrow functions are part of the JavaScript language standard that was called "ECMAScript 6" or ES6, and I guess you could still call it that. It seems like a silly thing for the committee to do to throw away all their branding like that but they decided to rename it ECMAScript 2015, which I'm sure is a link that the pedants are glad I have included. The upshot is that the standard is now final, gold master, etched in stone, which from an implementor's perspective is a relief. You can practically feel the anxiety ebbing away by the happy rate at which commits bubble out of source repositories and into shipping browsers, free from the fear that some spec change will force the hack-stream to change course.

From the V8 side, our arrow function implementation has also been a long time coming. My colleague Adrián Pérez did the first half of the work, and I picked up on the back end of things. It seems like such a small feature and in many ways it is, but still it took a long time. Now I know that my readers are a bunch of nerds and many of you like implementing languages, so you might appreciate these nargish points.

One of the first bits is that arrow functions are hard to parse. Consider, this is a valid JavaScript expression:

(x, y)
It's a "comma expression" that will evaluate x then y and its result will be the result of evaluating y. But add an arrow on after the end and you get not an expression but a formal parameter list:

(x, y) => x + y
Now you might think, well OK, when you see an arrow, rewind the input stream and parse in "arrow function mode". Indeed that would be fine, but not in combination with some additional ES6 features, optional and destructuring arguments. Optional arguments look like this:

(x = 42) => x
The =42 part is the expression that will be evaluated to give x a value, if the function is called with no arguments. Note that this bit is still under implementation in V8 so you can't try it in your browser. An optional argument initializer is an expression and not a value, so you can also have:

(x = (y, z) => y + z) => x
Combined, this makes rewinding the token stream a proposition of exponential complexity, which is a no-go for a production JavaScript parser. Parsers are on the hot path for page-load times and no browser vendor wants to introduce a pathological case into their page load.

Instead, V8 does something I hadn't seen before. It keeps an open mind about whether something is a comma expression or a formal parameter list of an arrow function, and only makes a decision when it sees the => (or not). As it parses, V8 records the places where it would signal an error for either a parameter list or for an expression, and when that superimposed wave function collapses it checks that the chosen production is valid, signalling the appropriate error if not. I thought this was a really neat trick, so if you're into that sort of thing, see the expression classifier for the details.

The other thing that's tricky about arrow functions is the this binding. In JavaScript, this is basically a hidden parameter passed to a function when it is called. Calling a function like o.f() passes the value of o to f as its this parameter. If instead f() is called directly, like with no dot before the call, then undefined is passed as this. Also for sloppy-mode functions, if the passed this value isn't an object, then the global object instead is assigned to this. Finally outside a function, this is bound to the global object.

OK, I know all of you know these things. Thing is, you always have a this, and although it's like a variable it's not a valid variable name, and before ES6 nothing could capture its value, because each function has its own this value. Perhaps you see where I'm going with this (ahem) now. Arrow functions introduce a function scope that doesn't have a this value, and that indeed might capture some other scope's this value, forcing it to be context-allocated. Other parts of ES6 can actually force assignment to this, like a super call, and that assignment can actually come from within an arrow function. Zounds! A simple concept, but there was a lot of incidental complexity in V8 around the implementation. Between Adrián and myself it took like three months to fix this usage in V8 to always just go through the (possibly context-allocated) variable, and there are still probably some devtools bugs to find in the upcoming weeks.

Performance-wise, arrow functions are just like functions. They should be just as fast as if you wrote them with function. So use them with joy, use them with abandon, use them judiciously -- however you decide you use them, don't let perf influence your decision one way or the other.

That's about it! Like all of my JS engine work over the past couple years, this hacking was sponsored by fabulous folks over at Bloomberg, so big ups to them. From me and Adrián at Igalia, until next time! We leave you to puzzle out what this bit of JavaScript evaluates to:


Happy hacking!

by Andy Wingo at June 18, 2015 04:41 PM

June 11, 2015

Jean-François Fortin TamThe War Against Deadlocks, part 1: The story of our new thread-safe mixing elements implementation

Let me tell you of a story that was lost and forgotten amidst Pitivi’s development battlegrounds last fall, a manuscript that I recovered from a Moldy Tome in a stony field. According to my historical data, the original author was a certain “Dorian Leger”, a French messenger that went missing from the vicinity of Paris.

Moldy Tome

The Moldy Tome as I found it

I am taking the liberty of altering this manuscript fairly substantially to clarify some parts while restoring its intent and style according to the historical context. It will serve as the first part of an epic tale (the second part is yet to be written, it will come in the next blog post, though it will probably have a more “modern” writing style), about our war against deadlocks, vile creatures that have been threatening the stability of our application for much too long. Technically, we’ve always been at war with Eastasia, er, Deadlocks; you can see that even in the noble title of our 0.13.2 release, from a time when a different squad of maintainers roamed this land.

Previous maintainers fighting the Fomors

Previous maintainers fighting the Fomors, in the 0.8-0.10 GSt Era.

Without further ado, here is my transcription of the report:

Paris, le vingt-huit septembre, MMXIV

Dear supporters of the Video Editing Liberation Front,

Over the last month and a half, we have made major strides debugging and rewriting important backend code that Pitivi depends on. At the edge of the land of Pitivi, we are approaching the 0.94 milestone, which we plan on liberating in the coming weeks. I have been discussing with sieur Duponchelle to enquire about a particular piece of work the Company has been preparing for that purpose. He said, “We have torn out a large chunk of bug-ridden code in GStreamer and replaced it with a brand new videomixing element that we can finally show with pride and confidence. It will be a tremendous help in our battle against the Deadlocks; hopefully, it will give us stable and bug-free seeking in the timeline at last.”

Indeed, I have heard tales of previous Pitivi versions consistently crashing when seeking in a section of cross-faded (overlapped) clips. In other words, when we tried to select a frame that contained a cross-fade from one clip to another, Pitivi would freeze up and need to be put to the sword. Needless to say, this bug was killing not only the user experience, but also the morale of our troops, and needed to be dealt with as swiftly and efficiently as possible.

The technical problem behind this nuisance was a powerful piece of equipment in the GStreamer artillery: the GstElement videomixer. This contraption was trying to deal with threads other elements were throwing at it, which was by design extremely complex and error-prone—to the point where some have said it to be the work of the Devil itself.

Ming Dynasty eruptor proto-cannon

When we inspected the machine, we found the diagram above. Transcription of the odd scriptures in that diagram leads to the following interpretation of how it operated:

“To make this machine worketh, thou wilt receive buffers from all them sinkpads in different threads. Therefore, thou wilt wait for all thy pads to get a buffer to decide to mix & push the result on thy srcpad; hence thou shall be pushing buffers from the thread on which thee hath received thy last buffer. Eke, make sure not to stand in front of the machine when operating it.” — Dante, son of Sparda

Multithreading, if you recall your scholarship with the monks of Shaolin, is a difficult art to master. It allows running multimedia processing tasks in the background and enables several tasks to be executed simultaneously. A multi-threaded approach is essential to us, but also requires tedious management of variables shared by different threads (these variables usually describe audio data and video pixelization, in the case of the GstElement videomixer machine). As simultaneous threads often work on the same variables, the backend developer, proficient in the ancient C language, needs to ensure these threads do not simultaneously edit a variable, and the developer must carefully manage how threads give each other the signal to edit a variable.

shaolin tiger style

In the case of the strange machine that was causing those problems, we destroyed it with fire and rebuilt it with simplicity and harmony in mind. I cannot tell for sure, but I have been told that over ten thousand lines of ancient codes were rewritten through the exquisite art of multithreading kung fu. The new videomixer machine now has the srcpad running its own thread, and we aggregate and push buffers on the srcpad from that thread. This technique makes us much stronger against the Deadlocks.

As you have certainly seen for yourselves, previous Pitivi versions—particularly due to the mixing elements in GStreamer—were plagued with bugs causing threads to wait for each other indefinitely. To make this easier to imagine, let’s take a modern analogy: the previous videomixer implementation looked like a city full of cars at stop-sign intersections, waiting for each other to go, causing endless traffic jams behind them. The good news is, after rewriting over 10,000 lines of code, the stop-signs were replaced by a much simpler and reliable system in 0.94, which means our videomixing element is now thread-friendly and ‘bug-free’. This required a complete rework of our mixing stack (by writing a new baseclass to substitute for collectpads2). It was quite an involved process.

We’re quite happy with what we have achieved there, but the Deadlocks are not so easily vanquished, and the story doesn’t end there. The rest of the manuscript is fairly short and consisted mostly of predictions for events that have now occurred since then, which I will be covering in the next blog post when I find more time, as it requires further analysis and expansion.

Thank you for reading, commenting and sharing! This blog post is part of a series of articles tracking progress made with work related to the 2014 Pitivi fundraiser. Researching and writing quality articles takes a lot of time, so please be patient and enjoy the ride! (◠‿◠)
  1. An update from the 2014 summer battlefront
  2. The 0.94 release
  3. The War Against Deadlocks, part 1: The story of our new thread-safe mixing elements reimplementation
  4. The War Against Deadlocks, part 2: GNonLin is dead, long live NLE
  5. The GTK+ timeline and sink
  6. The 0.95 release
  7. Measuring quality/reliability through time (clarifying what gst-validate is)
  8. Our all-in-one binaries building infrastructure, and why it matters
  9. Samples, “scenario” files and you: how you can help us reproduce (almost) any bug very easily
  10. The 1.0 release and closure of the fundraiser

by nekohayo at June 11, 2015 11:54 PM

June 07, 2015

GStreamerGStreamer Core and Plugins 1.5.1 development release


The GStreamer team is pleased to announce the first release of the unstable 1.5 release series. The 1.5 release series is adding new features on top of the 1.0, 1.2 and 1.4 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework. The unstable 1.5 release series will lead to the stable 1.6 release series in the next weeks, and newly added API can still change until that point.

Binaries for Android, iOS, Mac OS X and Windows will be provided separately during the unstable 1.5 release series.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server.

Check the release announcement mail for details and the release notes above for a list of changes.

Also available are binaries for Android, iOS, Mac OS X and Windows.

June 07, 2015 12:00 PM

May 31, 2015

Sebastian PölsterlUsing IPython for parallel computing on an MPI cluster using SLURM

I assume that you are already familiar with using IPython.parallel; otherwise, have a look at the documentation. One point of caution: if you move code that you want to run in parallel to a package or module, you might stumble upon the problem that Python cannot find the function you imported. The problem is that IPython.parallel only looks in the global namespace. The solution is to use @interactive from IPython.parallel, as described in this Stack Overflow post.

Although IPython.parallel comes with built-in support for distributing work using MPI, it would create the MPI tasks itself. However, compute clusters usually come with a job scheduler like SLURM that manages all resources. Consequently, the scheduler determines how many resources you have access to. In the case of SLURM, you have to define the number of tasks you want to process in parallel and the maximum time your job will require when adding your job to the queue.

#SBATCH -J ipython-parallel-test
#SBATCH --ntasks=112
#SBATCH --time=00:10:00

The above script gives the job a name, requests resources for 112 tasks and sets the maximum required time to 10 minutes.

Normally, you would use ipcluster start -n 112, but since we are not allowed to create MPI tasks ourselves, we have to start the individual pieces manually via ipcontroller and ipengine. The controller provides a single point of contact the engines connect to and the engines take commands and execute them.

echo "Creating profile ${profile}"
ipython profile create ${profile}
echo "Launching controller"
ipcontroller --ip="*" --profile=${profile} --log-to-file &
sleep 10
echo "Launching engines"
srun ipengine --profile=${profile} --location=$(hostname) --log-to-file &
sleep 45

First of all, we create a new IPython profile, which will contain log files and temporary files that are necessary to establish the communication between controller and engines. To avoid clashes with other jobs, the name of the profile contains the job's ID and the hostname of the machine it is executed on. This will create a folder named profile_job_XYZ_hostname in the ~/.ipython folder.

Next, we start the controller and instruct it to listen on all available interfaces, use the newly created profile and write output to log files residing in the profile's directory. Note that this command is executed only on a single node, thus we only have a single controller per job.

Then we create the engines, one for each task, and instruct them to connect to the correct controller. Explicitly specifying the location of the controller is necessary if engines are spread across multiple physical machines and machines have multiple Ethernet interfaces. If you remove this option, engines running on other machines are likely to fail to connect to the controller, because they might look for it at the wrong location (usually localhost). You can easily find out whether this is the case by looking at the ipengine log files in the ~/.ipython/profile_job_XYZ_hostname/log directory.

Finally, we can start our Python script that uses IPython.parallel to distribute work across multiple nodes.

echo "Launching job"
python --profile ${profile}

To make things more interesting, I created an actual script that approximates the number Pi in parallel.

import argparse
from IPython.parallel import Client
import numpy as np
import os
def compute_pi(n_samples):
        s = 0
        for i in range(n_samples):
                x = random()
                y = random()
                if x * x + y * y <= 1:
                        s += 1
        return 4. * s / n_samples
def main(profile):
        rc = Client(profile=profile)
        views = rc[:]
        with views.sync_imports():
                from random import random
        results = views.apply_sync(compute_pi, int(1e9))
        my_pi = np.sum(results) / len(results)
        filename = "result-job-{0}.txt".format(os.environ["SLURM_JOB_ID"])
        with open(filename, "w") as fp:
                fp.write("%.20f\n" % my_pi)
if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument("-p", "--profile", required=True,
                help="Name of IPython profile to use")
        args = parser.parse_args()
        main(args.profile)
I ran this on the Linux cluster of the Leibniz Supercomputing Centre. Using 112 tasks, it took a little more than 8 minutes, and the result I got was 3.14159637160714266813. You can download the SLURM script and Python code from above here.

by sebp at May 31, 2015 01:28 PM

May 22, 2015

Bastien Noceraiio-sensor-proxy 1.0 is out!

(Bastien Nocera) Modern (and some less modern) laptops and tablets have a lot of builtin sensors: accelerometer for screen positioning, ambient light sensors to adjust the screen brightness, compass for navigation, proximity sensors to turn off the screen when next to your ear, etc.


We've supported accelerometers in GNOME/Linux for a number of years, following work on the WeTab. The accelerometer appeared as an input device, and sent kernel events when the orientation of the screen changed.

Recent devices, especially Windows 8 compatible devices, instead export a HID device, which, under Linux, is handled through the IIO subsystem. So the first version of iio-sensor-proxy took readings from the IIO sub-system and emulated the WeTab's accelerometer: a few too many levels of indirection.

The 1.0 version of the daemon implements a D-Bus interface, which means we can support more than accelerometers. The D-Bus API, this time, is modelled after the Android and iOS APIs.
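
As a rough illustration, reading the accelerometer orientation over D-Bus from Python could look like this sketch (the net.hadess.SensorProxy bus name, object path and property names are assumptions based on the project's documentation, so double-check them against the installed version):

from gi.repository import Gio

bus = Gio.bus_get_sync(Gio.BusType.SYSTEM, None)
proxy = Gio.DBusProxy.new_sync(
    bus, Gio.DBusProxyFlags.NONE, None,
    'net.hadess.SensorProxy', '/net/hadess/SensorProxy',
    'net.hadess.SensorProxy', None)

# Claiming the accelerometer asks the daemon to start polling the sensor;
# the cached property updates once the daemon has pushed a reading.
proxy.ClaimAccelerometer()
print(proxy.get_cached_property('AccelerometerOrientation'))
proxy.ReleaseAccelerometer()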


Accelerometers will work in GNOME 3.18 as well as they used to, once a few fixes have been merged[1]. If you need support for older versions of GNOME, you can try using version 0.1 of the proxy.

Orientation lock in action

As we've added ambient light sensor support in the 1.0 release, it's time to put into practice the best practices mentioned in Owen's post about battery usage. We already had code like that in gnome-power-manager nearly 10 years ago, but it really didn't work very well.

The major problem at the time was that ambient light sensor readings weren't in any particular unit (values had different meanings for different vendors) and the user felt that they were fighting against the computer for the control of the backlight.

Richard fixed that though, adapting work he did on the ColorHug ALS sensor: the brightness is now completely in the user's control and adapts to the user's tastes. This means that we can implement the simplest of UIs for its configuration.

Power saving in action

This will be available in the upcoming GNOME 3.17.2 development release.

Looking ahead

For future versions, we'll want to export the raw accelerometer readings, so that applications, including games, can make use of them, which might bring up security issues. SDL, Firefox, WebKit could all do with being adapted, in the near future.

We're also looking at adding compass support (thanks Elad!), which Geoclue will then export to applications, so that location and heading data is collected through a single API.

Richard and Benjamin Tissoires, of fixing input devices fame, are currently working on making the ColorHug-ALS compatible with Windows 8, meaning it would work out of the box with iio-sensor-proxy.


We're currently using GitHub for bug and code tracking. Releases are also mirrored outside of GitHub, as it is known to mangle filenames, and API documentation is available online.

[1]: gnome-settings-daemon, gnome-shell, and systemd will need patches

by Bastien Nocera ( at May 22, 2015 06:31 PM

May 20, 2015

Arun RaghavanGNOME Asia 2015

I was in Depok, Indonesia last week to speak at GNOME Asia 2015. It was a great experience — the organisers did a fantastic job and as a bonus, the venue was incredibly pretty!

View from our room

My talk was about the GNOME audio stack, and my original intention was to talk a bit about the APIs, how to use them, and how to choose which to use. After the first day, though, I felt like a more high-level view of the pieces would be more useful to the audience, so I adjusted the focus a bit. My slides are up here.

Nirbheek and I then spent a couple of days going down to Yogyakarta to cycle around, visit some temples, and sip some fine hipster coffee.

All in all, it was a week well spent. I’d like to thank the GNOME Foundation for helping me get to the conference!

Sponsored by GNOME!

by Arun at May 20, 2015 08:08 AM

May 14, 2015

Sebastian DrögePTP network clock support in GStreamer

(Sebastian Dröge)

In the last days I was working at Centricular on adding PTP clock support to GStreamer. This is now mostly done, and the results of this work are public but not yet merged into the GStreamer code base. This will need some further testing and code review, see the related bug report here.

You can find the current version of the code here in my GIT repository. See at the very bottom for some further hints at how you can run it.

So what does that mean, how does it relate to GStreamer?

Precision Time Protocol

PTP is the Precision Time Protocol, which is a network protocol standardized by the IEEE (IEEE1588:2008) to synchronize the clocks between different devices in a network. It’s similar to the better-known Network Time Protocol (NTP, IETF RFC 5905), which is probably used by millions of computers down there to automatically set the local clock. Different to NTP, PTP promises to give much more accurate results, up to microsecond (or even nanosecond with PTP-aware network hardware) precision inside appropriate networks. PTP is part of a few broadcasting and professional media standards, like AES67, RAVENNA, AVB, SMPTE ST 2059-2 and others for inter-device synchronization.

PTP comes in 3 different versions, the old PTPv1 (IEEE1588-2002), PTPv2 (IEEE1588-2008) and IEEE 802.1AS-2011. I’ve implemented PTPv2 via UDPv4 for now, but this work can be extended to other variants later.

GStreamer network synchronization support

So what does that mean for GStreamer? We are now able to synchronize to a PTP clock in the network, which allows multiple devices to accurately synchronize media to the same clock. This is useful in all scenarios where you want to play the same media on different devices, and want them all to be completely synchronized. You can probably imagine quite a few use cases for this yourself now, especially in the context of the “Internet of Things” but also for more normal things like video walls or just having multiple screens display the same thing in the same room.

This was already possible previously with the GStreamer network clock, but that clock implements a custom protocol that only other GStreamer applications can understand currently. See for example here, here or here. With the PTP clock we now get another network clock that speaks a standardized protocol and can interoperate with other software and hardware.
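
To illustrate what such clock slaving looks like in application code, here is a rough Python sketch using the existing GStreamer network clock (the hostname, port and test pipelines are made up, and the two halves would normally run on different machines):

import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstNet', '1.0')
from gi.repository import Gst, GstNet

Gst.init(None)

# On the sender machine: publish the pipeline's clock on UDP port 8554.
sender = Gst.parse_launch('audiotestsrc ! autoaudiosink')
clock = Gst.SystemClock.obtain()
sender.use_clock(clock)
provider = GstNet.NetTimeProvider.new(clock, None, 8554)  # keep a reference alive

# On each receiver machine: slave the pipeline to the sender's clock.
receiver = Gst.parse_launch('audiotestsrc ! autoaudiosink')
net_clock = GstNet.NetClientClock.new('net-clock', 'sender.local', 8554, 0)
receiver.use_clock(net_clock)

sender.set_state(Gst.State.PLAYING)
receiver.set_state(Gst.State.PLAYING)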

Performance, WiFi and other unreliable networks

When running the code, you will probably notice that PTP works very well in controlled and reliable networks (2-20 microseconds accuracy is what I got here). But it’s not that accurate in wireless networks or in general unreliable networks. It seems like in those networks the custom GStreamer network clock protocol works more reliable currently, partially by design.


As a next step, at Centricular we're going to look at implementing support for RFC 7273 in GStreamer, which allows signalling media clocks for RTP. This is part of e.g. AES67 and RAVENNA and would allow multiple RTP receivers to be perfectly synchronized against a PTP clock without any further configuration. And just for completeness, we're probably going to release an NTP based GStreamer clock in the near future too.

Running the code

If you want to test my code, you can run it for example against PTPd. To check the accuracy of the clock, you can measure it with the ptp-clock-reflector (or here, instructions in the README) that I wrote for testing; in a local wired network I got around 2-20 microseconds accuracy. A GStreamer example application can be found here, which just prints the local and remote PTP clock times. Other than that you can use it just like any other clock on any GStreamer pipeline you can imagine.
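
For completeness, here is a hedged sketch of what using the clock from Python could look like; it follows the proposed API (a gst_ptp_init() call plus a GstPtpClock GstClock subclass), so exact names may still change before the code is merged:

import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstNet', '1.0')
from gi.repository import Gst, GstNet

Gst.init(None)

# Initialize the PTP subsystem and create a slaved clock for PTP domain 0.
GstNet.ptp_init(GstNet.PTP_CLOCK_ID_NONE, None)
clock = GstNet.PtpClock.new('ptp-clock', 0)

# Block until the clock has synchronized against the PTP master,
# then use it like any other GstClock.
clock.wait_for_sync(Gst.CLOCK_TIME_NONE)
pipeline = Gst.parse_launch('videotestsrc ! autovideosink')
pipeline.use_clock(clock)
pipeline.set_state(Gst.State.PLAYING)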

by slomo at May 14, 2015 05:44 PM

May 07, 2015

Jean-François Fortin TamPresident’s Report — The State of the GNOME Foundation

As I hinted in my retrospective in February, 2014 has been crazy busy on a personal level. Let’s now take a look at 2014-2015 from a GNOME perspective.

When I offered my candidacy for the GNOME Foundation‘s Board of Directors in May last year, I knew that there would be plenty of issues to tackle if elected. As I was elected president afterwards, I was aware that I was getting into a demanding role that would not only test my resolve but also make use of my ability to set a clear direction and keep us moving forward through tough times. But even if someone tries to describe what’s involved in all this, it remains difficult to truly grasp the amount of work involved before you’ve experienced it yourself.

For one thing, I can say that running a branding & management consulting business at the same time as you’re steering an established public charity like the GNOME Foundation is definitely not easy.


Pictured: my calendar during the month of March

Throughout the year, I went through moments of great joy and periods of deep exhaustion where I cursed Firefox’s bug 60455 — working every day until 1-2 AM (and waking up 5-6 hours later), for months on end, to get things done. Since 2015, my GTG todo list has consistently been at 4x my normal “healthy” quota. For example, in March, I was at 190 actionable tasks and a total of 520 tasks. Whew! So, in the name of sanity, I had to slow down some of my business activities and withdraw almost all of my involvement in the Pitivi project this term (I’ll be writing a news update blog post soon, I promise!).

I did not compromise on my involvement with the GNOME Foundation because I felt a huge responsibility towards my teammates and towards the Foundation Membership who elected us. Most of the board members, in addition to their daily work, underwent significant personal challenges during the year: relocating, career changes, family matters, all sorts of things that can affect one’s life. And yet, with the limited bandwidth we had, the Board soldiered on and accomplished many feats. I consider myself lucky to have had such a competent and deeply caring team of people to work through one of the busiest years GNOME has had yet!

2014-2015 GNOME Foundation board

What also keeps me motivated is the incredible strength of our community, the technical excellence of our platform and the fundamental need for a GNOME “desktop” (or GNOME OS) to exist. More than ever, we need Free and affordable computing for everyone. If proprietary vendors, DRM, the industry shift towards “renting” (rather than owning) software and the Snowden revelations taught us anything over the years, it’s that we need to be the truly free system that people can trust for all their computing needs, online and offline. Many have their heads in the clouds, but we need to keep our feet on the ground and be the bridge between the sky and earth—the safe base where people will come back to.

The Space Elevator, by Dusty Crosley

For that reason, I’m pretty excited by our friends at Endless who are shipping a radically different desktop computer running GNOME and a set of applications that will run offline, designed to make the lives of millions (billions!) of people easier in the developing world. I’m proud of our little cousin, elementary, for shipping a new version of their OS—even as an established project with lots of momentum, we can still learn a lot from what they’re doing, and we certainly appreciate their involvement in our shared technologies. Fedora Workstation, with its refined focus, is something else I’m pretty happy about. With sandboxing, OSTree and Builder in the works, I’m looking forward to GNOME OS becoming a reality. We need something rock solid and for which we can sculpt the user experience from the ground up, something which also serves as a reference and entrypoint for new contributors willing to create applications for the exciting GNOME ecosystem.

We’ve made major strides towards creating a stable and refined platform over the past few years. We have our work cut out for us in a number of areas and I look forward to us tackling them as a community. For example, one thing I’m passionate about is having a “bulletproof” OS that can handle the most demanding creative workloads, without the user needing to worry about the system’s resource usage. I should be able to have Firefox (or Web/Epiphany) running at the same time as GIMP, Inkscape and Pitivi without an exabyte of RAM or having the kernel/graphics subsystem go unresponsive due to one application hoarding resources. I know we can do better in this space. With our unparalleled ability to oversee changes through the whole stack and upcoming technologies like containers & sandboxing, we have the potential to be the most advanced OS in the world—we just need to seize this opportunity.

There are also new fields of computing that we are poised to explore as a free desktop: virtual reality—bringing a new meaning to the term “virtual desktops”—is certainly the next big step in “office computing” (including productivity and creative work, entertainment, etc.—not just gaming). We should investigate VR as the next big evolution of the desktop. Imagine getting rid of the limitations imposed by computer multi-head monitor frames…

ghost in the shell VR

We should tackle these things one step at a time, together. It takes many small efforts to steer a ship this big, and the Foundation is there to support the community every step of the way.

Here is a snapshot of what the Foundation’s Board of Directors were up to this year:

  • Dealing with over 3700 emails
  • Held 25+ regular board meetings, on the phone or in person
    • In addition to those, we held a few “special meetings” for topics like adboard outreach and ED search to drain the swamp. Therefore, in practice, we have been meeting more often than the already fast-paced bi-weekly meetings schedule.
  • Exchanged over 24,000 lines of IRC discussion within the board
  • Resolved the cash-flow problem (a.k.a. financial/success crisis) that occurred in the spring of 2014. We collected on every single outstanding invoice for OPW and will be announcing more about this soon.
  • Dealt with two very serious complaints brought to our attention — one of them is not fully resolved yet, but we’re working on it.
  • Represented GNOME at various conferences (GUADEC, SFD, GSoC Mentors Summit, OSCON, FOSDEM, LGM, GNOME.Asia, LinuxCon North America, LinuxCon Europe, and probably a bunch of others I’m forgetting).
  • Negotiated with Groupon for six months before the trademark opposition filing deadline. As we reached the deadline and could wait no longer, we prepared and launched a public fundraiser and awareness campaign. This initiative worked above all expectations, with more than 100K USD raised within a day and Groupon immediately capitulating upon seeing the incredible public support we were able to muster. We got coverage in a number of media outlets, including the World Trademark Review magazine. We hope that our experience stands for the proposition that companies must respect free software communities, and we’re already seeing our situation held up as an example to support that.
  • Reached out to current (and past) advisory board members on the phone or in person.
  • Sought new sponsorship opportunities
  • Set up a kanban system to keep track of the “big picture” of long-running projects and action items.
  • Started the hunt for an Executive Director, including forming a search & hiring committee
  • Reviewed & approved budgets and reimbursements for various events
  • Reviewed & approved various trademark use requests
  • Started work on prospecting new sources of funding for the GNOME sysadmin role
  • Provided advice on fundraising for the Telder font
  • Signed two legal agreements with the Software Freedom Conservancy for the transfer of Outreachy — more news on this later
  • Administered OPW (including legal and financial aspects), until the migration to Outreachy under the SFC was completed
  • Worked on various aspects of codes of conduct
  • Initiated work on a Privacy policy
  • Provided support for GNOME conferences, including GUADEC, GNOME.Asia, the Boston Summit and the West Coast Summit
  • Signed a deal with the WHS for handling funds in Europe — more news on this later
  • Various ongoing financial and legal tasks
  • Transferred the ownership of various assets (including domain names)
  • Responded to various press or events organization inquiries, phone calls, etc.
  • Apologized to people for not moving fast enough on some matters ;)

I can tell you, like anyone who has worked on a board of directors without an Executive Director for the entire term, that I have developed a tremendous amount of respect and patience towards the work done by each volunteer and team in the GNOME community. There is so much that needs to be done to keep the GNOME Project running, it would not be possible without your help. Thank you, everyone!

by nekohayo at May 07, 2015 11:37 PM

April 27, 2015

Jan SchmidtNew gst-rpicamsrc features

(Jan Schmidt)

I’ve pushed some new changes to my Raspberry Pi camera GStreamer wrapper.

These bring the GStreamer element up to date with new features added to raspivid since I first started the project, such as adding text annotations to the video, support for the 2nd camera on the compute module, intra-refresh and others.

Where possible, you can now dynamically update any of the properties – where the firmware supports it. So you can implement digital zoom by adjusting the region-of-interest (roi) properties on the fly, or update the annotation or change video effects and colour balance, for example.
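
As a quick sketch of what a dynamic property update could look like from Python (the roi-* property names are assumptions based on raspivid's region-of-interest options; check gst-inspect-1.0 rpicamsrc for the authoritative names):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
# Requires the rpicamsrc plugin to be installed and registered.
pipeline = Gst.parse_launch('rpicamsrc name=src ! h264parse ! fakesink')
src = pipeline.get_by_name('src')
pipeline.set_state(Gst.State.PLAYING)

# Digital zoom while running: crop to the centre quarter of the frame.
# The roi-* values are fractions of the full sensor area (0.0-1.0).
for prop, value in (('roi-x', 0.25), ('roi-y', 0.25),
                    ('roi-w', 0.5), ('roi-h', 0.5)):
    src.set_property(prop, value)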

The timestamps produced are now based on the internal STC of the Raspberry Pi, so the audio/video sync is tighter. Although it was never terrible, it’s now more correct and slightly less jittery.

The one major feature I haven’t enabled as yet is stereoscopic handling. Stereoscopic capture requires 2 cameras attached to a Raspberry Pi Compute Module, so at the moment I have no way to test it works.

I’m also working on GStreamer stereoscopic handling in general (which is coming along). I look forward to releasing some of that code soon.


by thaytan at April 27, 2015 02:43 PM

April 05, 2015

GStreamerOutreachy Internship Opportunity


GStreamer has secured a spot in the May-August round of Outreachy (formerly OPW). The program aims to help people from groups underrepresented in free and open source software get involved by offering focused internship opportunities with a number of free software organizations twice a year.

The current round of Outreachy internship opportunities is open to women (cis and trans), trans men, genderqueer people, and all participants of the Ascend Project regardless of gender. The organization plans to expand the program to more participants from underrepresented backgrounds in the future. You can find more information about Outreachy here.

GStreamer application instructions and a list of mentored projects (you can always suggest your own) can be found at the GStreamer-Outreachy landing page. If you are interested in applying for an internship position with us, please take a look at the project ideas and get in touch by subscribing to our development list and sending an email about your selected project idea. Please include [Outreachy 2015] in the subject so we can easily spot it.

GStreamer's participation in this round of the program is being sponsored by Samsung's Open Source Group. The deadline for applications is April 10.

April 05, 2015 06:00 PM

April 02, 2015

Bastien NoceraJdLL 2015

(Bastien Nocera) Presentation and conferencing

Last week-end, in the Salle des Rancy in Lyon, GNOME folks (Fred Peters, Mathieu Bridon and myself) set up our booth at the top of the stairs, the space graciously offered by Ubuntu-FR and Fedora being a tad bit small. The JdLL were starting.

We gave away a few GNOME 3.14 Live and install DVDs (more on that later), discussed much-loved features, and hated bugs, and how to report them. A very pleasant experience all-in-all.

On Sunday afternoon, I did a small presentation about GNOME's 15 years. Talking about the upheaval, dragging kernel drivers and OS components kicking and screaming to work as their APIs say they should, presenting GNOME 3.16 new features and teasing about upcoming GNOME 3.18 ones.

During the Q&A, we had a few folks more than interested in support for tablets and convertible devices (such as the Microsoft Surface, and Asus T100). Hopefully, we'll be able to make the OS support good enough for people to be able to use any Linux distribution on those.

Sideshow with the Events box

Due to scheduling errors on my part, we ended up with the "v1" events box for our booth. I made a few changes to the box before we used it:

  • Removed the 17" screen, and replaced it with a 21" widescreen one with speakers built in. This is useful when we can't set up the projector because of the lack of walls.
  • Upgraded machine to 1GB of RAM, thanks to my hoarding of old parts.
  • Bought a French keyboard and removed the German one (with missing keys), cleaned up the UK one (which still uses IR wireless).
  • Threw away GNOME 3.0 CDs (but kept the sleeves that don't mention the minor version). You'll need to take a sharpie to the small print on the back of the sleeve if you don't fill it with an OpenSUSE CD (we used Fedora 21 DVDs during this event).
  • Triaged the batteries. Office managers, get this cheap tester!
  • The machine's Wi-Fi was unstable, causing hardlocks (please test again if you use a newer version of the kernel/distributions). We tried to get onto the conference network through the wireless router, and installed DD-WRT on it as the vendor firmware didn't allow that.
  • The Nokia N810 and N800 tablets will be going to kernel developers that are working on Nokia's old Linux devices and upstreaming drivers.
The events box is still in Lyon, until I receive some replacement hardware.

The machine is 7 years old (nearly 8!) and only had 512MB of RAM; after the 1GB upgrade, the machine was usable, and many people were impressed by the speed of GNOME on a legacy machine like that (probably more so than on a brand new one stuttering because of a driver bug, for example).

This makes you wonder what the use for "lightweight" desktop environments is, when a lot of the features are either punted to helpers that GNOME doesn't need or not implemented at all (old CPU and no 3D driver is pretty much the only use case for those).

I'll also be putting a small SSD into the demo machine, to give it another speed boost. We'll also be needing a new padlock, after an emergency metal saw attack was necessary on Sunday morning. Five different folks tried to open the lock with the code read off my email, to no avail. Did we accidentally change the combination? We'll never know.

New project, ish

For demo machines, especially newly installed ones, you'll need some content to demo applications. This is my first attempt at uniting GNOME's demo content for release notes screenshots, with some additional content that's free to re-distribute. The repository will eventually move to, obviously.


The new keyboard and mouse, monitor, padlock, and SSD (and my time) were graciously sponsored by Red Hat.

by Bastien Nocera at April 02, 2015 01:58 PM

March 29, 2015

Sebastian PölsterlMPI-based Nested Cross-Validation for scikit-learn

If you are working with machine learning, at some point you have to choose hyper-parameters for your model of choice and do cross-validation to estimate how well the model generalizes to unseen data. Usually, you want to avoid over-fitting on your data when selecting hyper-parameters, to get a less biased estimate of the model's true performance. Therefore, the data you do the hyper-parameter search on has to be independent of the data you use to assess the model's performance. If you want to know what happens when you perform both tasks on the same data, have a look at the chapter The Wrong and Right Way to Do Cross-validation in the excellent book The Elements of Statistical Learning.

For instance, scikit-learn's Support Vector Regression class has at least two hyper-parameters: the penalty weight C and which kernel to use. Depending on the kernel, additional hyper-parameters need to be considered. Traditionally, people do an exhaustive grid search over a pre-defined set of values for each parameter and choose the setting that performed best. In fact, that is exactly what sklearn.grid_search.GridSearchCV does. In the end, what you get is the average score on the hold-out data with the best parameters. However, you don't want to report that number, because you essentially cheated: by repeatedly using the hold-out data with different parameter settings to evaluate your model's performance, you were over-fitting to it as well.
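
As a sketch of that workflow (using the Boston housing data that also appears later in this post, and a deliberately tiny grid), this is roughly what the naive exhaustive search looks like; best_score_ is the number you should not report:

import numpy
from sklearn.datasets import load_boston
from sklearn.grid_search import GridSearchCV
from sklearn.svm import SVR

data = load_boston()
X = data['data']
y = data['target']

# A small illustrative grid over the penalty weight C and the kernel.
param_grid = {'C': 2. ** numpy.arange(-5, 5, 2),
              'kernel': ['poly', 'rbf']}

search = GridSearchCV(SVR(), param_grid,
                      scoring='mean_absolute_error', cv=3), y)

# The best average hold-out score is biased: the same hold-out data was
# reused to compare all parameter settings.
print(search.best_params_, search.best_score_)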

It is important that the numbers you report in the end were retrieved from data you only used once to measure performance. To avoid the pitfalls of GridSearchCV, you essentially have to nest GridSearchCV within another cross-validation loop such as StratifiedKFold. That way, the grid search only uses the training data of the outer cross-validation loop, and results are reported on the outer test set, which was never used for the grid search.
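
Since GridSearchCV is itself an estimator, this nesting can be written in a few lines of plain scikit-learn. A minimal (non-MPI) sketch, with cross_val_score providing the outer loop:

import numpy
from sklearn.cross_validation import cross_val_score
from sklearn.datasets import load_boston
from sklearn.grid_search import GridSearchCV
from sklearn.svm import SVR

data = load_boston()
X = data['data']
y = data['target']
param_grid = {'C': 2. ** numpy.arange(-5, 5, 2), 'kernel': ['poly', 'rbf']}

# Inner loop: 3-fold grid search, wrapped like any other estimator.
inner = GridSearchCV(SVR(), param_grid,
                     scoring='mean_absolute_error', cv=3)

# Outer loop: 5-fold cross-validation. The grid search is re-fitted on each
# outer training set and scored on the untouched outer test fold.
outer_scores = cross_val_score(inner, X, y,
                               scoring='mean_absolute_error', cv=5)
print(outer_scores)  # one estimate per outer fold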

Obviously, this can become computationally very demanding: e.g., if you do 10-fold outer cross-validation with a 3-fold inner cross-validation for the grid search, you need to train a total of 30 models for each parameter configuration. Luckily, the inner and outer cross-validation loops can be easily parallelized; GridSearchCV can do this with the n_jobs parameter. However, for large-scale analyses you want to use a cluster to process the data in parallel.

This is where the Message Passing Interface (MPI) comes in. It is a standardized protocol for parallel computing. In MPI terms, all processes are organized in groups managed by a communicator, and each MPI process gets an ID, or rank. Usually, the process with rank zero acts as the master that distributes the work and collects the results.
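
As a toy illustration of that master/worker pattern with mpi4py (this is not the implementation below, just the communication scheme):

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    # The master (rank 0) prepares one chunk of work per process.
    chunks = [list(range(i, 100, size)) for i in range(size)]
else:
    chunks = None

# Every process receives its chunk and computes a partial result...
chunk = comm.scatter(chunks, root=0)
partial = sum(x * x for x in chunk)

# ...and the master gathers and combines the partial results.
results = comm.gather(partial, root=0)
if rank == 0:
    print(sum(results))

Launched with mpiexec (e.g. mpiexec -n 4 followed by the script name), rank 0 prints the combined result from all four processes.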

I implemented a nested grid search for scikit-learn classes that distributes the work using MPI. I'm not an expert in MPI, so there might be more efficient solutions, but it gets the job done for me. Using it is very similar to GridSearchCV:

from mpi4py import MPI
import numpy
from sklearn.datasets import load_boston
from sklearn.svm import SVR
from grid_search import NestedGridSearchCV

data = load_boston()
X = data['data']
y = data['target']

estimator = SVR(max_iter=1000, tol=1e-5)

# Exhaustive grid over C, gamma and the kernel.
param_grid = {'C': 2. ** numpy.arange(-5, 15, 2),
              'gamma': 2. ** numpy.arange(3, -15, -2),
              'kernel': ['poly', 'rbf']}

# 5-fold outer cross-validation, 3-fold inner grid search.
nested_cv = NestedGridSearchCV(estimator, param_grid, 'mean_absolute_error',
                               cv=5, inner_cv=3), y)

# Only the master process (rank 0) holds the collected results.
if MPI.COMM_WORLD.Get_rank() == 0:
    for i, scores in enumerate(nested_cv.grid_scores_):
        scores.to_csv('grid-scores-%d.csv' % (i + 1), index=False)

To run this example, you execute mpiexec python (followed by the name of the script), and the work is distributed among all available MPI processes. Your final result is stored in the best_params_ attribute, a pandas data frame that contains the selected hyper-parameters, the average performance across all inner cross-validation folds (score (Validation)), and the performance on the outer test fold (score (Test)).

   score (Validation)      C     gamma  kernel  score (Test)
1           -7.252490    0.5  0.000122     rbf     -4.178257
2           -5.662221  128.0  0.000122     rbf     -5.445915
3           -5.582780   32.0  0.000122     rbf     -7.066123
4           -6.306561    0.5  0.000122     rbf     -6.059503
5           -6.174779  128.0  0.000122     rbf     -6.606218

Complete results of the grid search are stored in the grid_scores_ attribute, which is a list of data frames, one for each outer cross-validation fold.
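
For example, after the fit above, you could inspect these attributes along the lines of the following sketch (the column name is taken from the table above; DataFrame.sort was the pandas API of the day):

# Best configuration and scores per outer fold.
print(nested_cv.best_params_)

# Full grid for the first outer fold, best validation score first
# (scores are negated errors, so larger means better).
fold1 = nested_cv.grid_scores_[0]
print(fold1.sort('score (Validation)', ascending=False).head())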

The code is available online; it depends on mpi4py, numpy, pandas, and scikit-learn.

Note that I have only tried this out with Python 3.4. For further details, please check out the inline documentation.

by sebp at March 29, 2015 12:43 PM

March 25, 2015

Bastien NoceraGNOME 3.16 is out!

Did you see?

It will obviously be in Fedora 22 Beta very shortly.

What happened since 3.14? Quite a bit, and a number of unfinished projects will hopefully come to fruition in the coming months.

Hardware support

After quite a bit of back and forth, automatic rotation for tablets will not be included directly in systemd/udev, but instead in a separate D-Bus daemon. The daemon has support for other sensor types, Ambient Light Sensors (the ColorHug ALS amongst others) being the first ones. I hope we'll have compass support soon too.

Support for the Onda v975w's touchscreen and accelerometer is now upstream. Work is ongoing for the Wi-Fi driver.

I've started some work on supporting the much-hated Adaptive keyboard on the X1 Carbon 2nd generation.

Technical debt

In the last cycle, I've worked on triaging gnome-screensaver, gnome-shell and gdk-pixbuf bugs.

The first got merged into the second, the second got plenty of outdated bugs closed, and priorities were re-evaluated as a result.

I wrangled old patches and cleaned up gdk-pixbuf. We still have architectural problems in the library for huge images, but at least we're at a state where we know what the problems are, rather than having them buried in Bugzilla.

Foundation building

A couple of projects got started that haven't reached maturity yet. I'm pretty happy that we're able to use gnome-books (part of gnome-documents) today to read comic books. ePub support is coming!

Grilo saw plenty of activity. The oft-requested "properties" page in Totem is closer than ever, and so is series grouping.

In December, Allan and I met with the ABRT team, and we've landed some changes we discussed there, including a simple "Report bugs" toggle in the Privacy settings, with a link to the OS' privacy policy. The gnome-abrt application had a facelift, but we got somewhat stuck on technical problems, which should get solved in the next cycle. The notifications were also streamlined and simplified.

I'm a fan

Of the new overlay scrollbars, and the new gnome-shell notification handling. And I'm cheering on GNOME Calendar, a new app in 3.16.

There's plenty more new and interesting stuff in the release, but I would just be duplicating much of the GNOME 3.16 release notes.

by Bastien Nocera at March 25, 2015 04:23 PM

March 20, 2015

Bastien Nocera"GNOME à 15 ans" aux JdLL de Lyon

Next weekend, I will be giving a small presentation on GNOME's fifteen years at the JdLL.

If the delivery gods are merciful, GNOME should also have a presence in the community village.

by Bastien Nocera at March 20, 2015 05:25 PM

February 26, 2015

Bastien NoceraAnother fake flash story

I recently purchased a 64GB mini SD card to slot into my laptop and/or tablet, keeping media separate from my home directory, which is pretty full of kernel sources.

This Samsung card looked fast enough and, at 25€ including shipping, seemed good enough value.

Hmm, no mention of the SD card size?

The packaging looked rather bare, with no mention of the card's size. I opened up the packaging and looked over the card.

Made in Taiwan?

What made it weirder is that it says "Made in Taiwan", rather than "Made in Korea" or "Made in China/PRC". Samsung apparently makes some cards in Taiwan, I've since learnt, but I didn't know that before getting suspicious.

After modifying gnome-multiwriter's fake flash checker, I tested the card, and sure enough, it's an 8GB card, with its firmware modified to show up as 67GB (67GB!). The device (identified through the serial number) is apparently well-known in swindler realms.

Buyer beware: do not buy from the "carte sd" seller on Amazon, and always check for fake flash memory using F3 or h2testw, until udisks gets support for this.

Amazon were prompt in reimbursing me, but the Comité national anti-contrefaçon and Samsung were completely uninterested in pursuing this further.

In short:

  • Test the storage hardware you receive
  • Don't buy hardware from Damien Racaud from Chaumont, the person behind the "carte sd" seller account

by Bastien Nocera at February 26, 2015 10:57 AM

February 23, 2015

Christian SchallerReliable BIOS updates in Fedora

Some years ago I bought myself a new laptop, deleted the Windows partition and installed Fedora on the system, only to realize later that the system had a bug that required a BIOS update to fix, and that the only tool for doing such updates was available for Windows only. And while some tools and methods have been available from a subset of vendors, BIOS updates on Linux have always been somewhat of a hit-and-miss affair. Well, luckily it seems that we will finally get a proper solution to this problem.

Peter Jones, who is Red Hat's representative to the UEFI working group and who is working on making sure we have everything needed to support this on Linux, approached me some time ago to let me know of the latest incoming update to the UEFI standard, which provides a mechanism for doing BIOS updates. This means that any system supporting UEFI 2.5 will, in theory, be one where we can initiate the BIOS update from Linux. Systems supporting this version of the UEFI specs are expected to become available over the course of this year, and if you are lucky your hardware vendor might even provide a BIOS update bringing UEFI 2.5 support to your existing hardware, although you would of course need to do that one BIOS update the old way.

So with Peter’s help we got hold of some prototype hardware from our friends at Intel which already got UEFI 2.5 support. This hardware is currently in the hands of Richard Hughes. Richard will be working on incorporating the use of this functionality into GNOME Software, so that you can do any needed BIOS updates through GNOME Software along with all your other software update needs.

Peter and Richard will, as part of this, be working to define a specification/guideline for hardware vendors on how they can make their BIOS updates available in a manner we can consume and automatically check for updates. We will try to align ourselves with the requirements from Microsoft in this area, to allow vendors to either use the exact same package for both Windows and Linux, or at least only need small changes to it. We can hopefully get this specification published for wider consumption once it's done.

I am also already speaking with a couple of hardware vendors to see if we can pilot this functionality with them, both to encourage them to support UEFI 2.5 as quickly as possible and to work with them to figure out the finer details of how to make the updates available in an easily consumable fashion.

Our hope here is that you will eventually be able to get almost any hardware and know that if you ever need a BIOS update, you can just fire up Software and it will tell you what BIOS updates, if any, are available for your hardware, and then let you download and install them. For people running Fedora servers, we have had some initial discussions about doing BIOS updates through Cockpit, in addition of course to the command-line tools that Peter is writing for this.

I mentioned in an earlier blog post that one of our goals with the Fedora Workstation is to drain the swamp in terms of fixing the real issues that make using a Linux desktop challenging. Well, this is another piece of that puzzle, and I am really glad we had Peter working with the UEFI standards group to ensure the final specification was useful for Linux users too.

Anyway, as soon as I have some data on concrete hardware that will support this, I will make sure to let you know.

by uraeus at February 23, 2015 04:17 PM