November 28, 2015

Jean-François Fortin Tam: Inking an old friend

When I was young, I read a lot of comic books. One of my favorite series was Cubitus:

Over fifteen years ago, Michel Grant, a local comic book artist passionate about teaching, made a quick sketch of Cubitus & Sénéchal for me, on a big sheet of paper. I liked it enough to have it laminated and kept in my room for nearly two decades. I don't think Mr. Grant would have expected me to keep it so long and so preciously. It was drawn with a big, unrefined permanent marker (certainly not a Sakura Micron or something of the sort), and therein lay the problem: after so many years, even though it was laminated and kept out of direct sunlight, the ink had faded significantly:


Recently, my mother suggested I just turn that piece of art into a coffee table: “Why not paint it entirely black?” Yeah.

are you kidding me

So one afternoon, I whipped out a Sharpie marker and started tracing the drawing.


The laminated surface (and overall lack of dynamic range of permanent markers) proved challenging for some parts like Cubitus’ nose:


But it went well overall.


Now, the drawing is contrasty enough to be hung up on a wall again. The speech balloon (at the top-right) and artist’s signature (bottom-right) were left untouched, for the vintage feel and to emphasize the characters. Quite a stark difference.


by nekohayo at November 28, 2015 09:25 PM

November 19, 2015

Jean-François Fortin Tam: Pitivi 0.95 — Enfant Suisse

Hey everyone! It’s time for a new Pitivi release, 0.95. This one packs a lot of bugfixes and architectural work to further stabilize the GES backend. In this blog post, I’ll give you an overview of the new and interesting stuff this release brings after a year of hard work. It’s pretty epic and you’re in for a few surprises, so I suggest listening to this song while you’re reading this blog post.

Engine rework: completed.

Those of you who attended my talk at GUADEC 2013 might remember this particular slide:

kill gnonlin

Well, it’s done now. It’s dead and buried.

This is something I’ve had on my mind for so long, I was even having nightmares about it—literally. To give you an idea just how ancient gnonlin was from an architectural standpoint, it was created fourteen years ago, merely six months after the first release of GStreamer itself. Well, y’know, a lot of stuff happens in 13-14 years.

So, over the past year, Mathieu and Thibault gradually refactored GNonLin into NLE, the new non-linear engine inside GES. For details, see the previous two blog posts about our War Against Deadlocks: the story about the mixing elements and the story about the new engine using them (replacing gnonlin).

The resulting improvements in reliability are not only palpable in daily use, they are actually quantifiable with the results of our GES gst-validate test suite runs:

  • In the 1.4 series: 154 tests pass out of 198 (22.2% failures)
  • With the 1.6 release: 198 tests pass out of 198

— “What’s going on? Give me a sitrep!”
— “The tests… they all pass!”
— “What?!”

Now 100% GTK, with new horizons

pitivi 0.95

We were hitting various limitations and bugs (such as this) in Clutter, the library we used to display and animate the project’s timeline. Eventually we came to a point where we had to change strategy and port the timeline to use pure GTK+ widgets, with Matplotlib for drawing the keyframes on clips. Quite some work went into the new timeline.

The viewer (the widget that shows the main video preview, above the timeline) using glimagesink was causing too many problems related to embedding in the X window. We switched to the new GtkSink instead, which also allowed us to test gtkglsink at the same time, as they are compatible.
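For the curious, here is roughly what that looks like from Python. This is a minimal, self-contained sketch with a throwaway videotestsrc pipeline, not Pitivi’s actual viewer code; the nice thing about gtksink is that it hands you an ordinary GtkWidget, so no X window embedding tricks are needed:

import gi
gi.require_version("Gst", "1.0")
gi.require_version("Gtk", "3.0")
from gi.repository import Gst, Gtk

Gst.init(None)

# gtksink renders into a regular GtkWidget; gtkglsink exposes the same
# "widget" property and can be swapped in where GL is available.
pipeline = Gst.parse_launch("videotestsrc ! videoconvert ! gtksink name=sink")
widget = pipeline.get_by_name("sink").get_property("widget")

window = Gtk.Window(title="viewer sketch")
window.add(widget)
window.connect("destroy", Gtk.main_quit)
window.show_all()

pipeline.set_state(Gst.State.PLAYING)
Gtk.main()
pipeline.set_state(Gst.State.NULL)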

Thanks to the new GTK timeline, we have a little surprise to show here: technically, Pitivi can also work on Mac OS X now. This is not an April Fool’s joke.

Some notes about the experiment are sitting there if you’re curious. At this time, we are not supporting the Mac OS X version officially, because we don’t have the resources for that (yet?). I was told that we should be able to make a Mac build available for testing once we reach 1.0. Want to make it happen sooner? Feel free to join us and work on that.

Wait, that’s not all. These changes also allow us to make Pitivi work with the GDK Broadway backend, meaning we can even run Pitivi in a web browser! Yep, you heard that right. Pitivi in a web browser. What could possibly go wrong? ;)

Spit polishing

An improvement we’re quite happy about is that you can finally drag & drop a file from a different app directly to the timeline, to create a clip.

The layers’ representation changed somewhat. Previously, an audio-video clip would be displayed as two separate clips in the timeline, one for video and one for audio, on two separate layers. At times it was pretty awkward. While porting the timeline, Thibault simplified the layers model to have the notion of generic layers, in which audio-video clips are represented as a unified clip object. This also means that there is no more wasted space if the layer has only video or only audio.

Also worth mentioning:

  • We have resurrected the transformation box feature, but the UI is currently very primitive. See the Clip properties > Transformation section when a clip is selected on the timeline. You can also drag the image in the viewer to reposition the selected clip at the current playhead position, and use the mouse wheel to zoom in/out.
  • While editing a project, every operation is saved in a scenario file. These can be used when reporting bugs. See how to use scenarios for reporting complicated bugs easily (or if you’re feeling geeky, the details about how the scenarios are used to automatically test the GES backend).
  • You can now copy/paste clips in the timeline.
  • We’re now compatible with smaller screen resolutions (such as 1024×768) again.
  • We removed a bunch of widgets in the layer controls. They were placeholders for future features; we will put them back once the features actually become available.
  • Undo/redo has been disabled until we add unit tests and make sure it works properly. Until then, remember to hit Ctrl+S regularly.
  • See also the release notes for 0.95.

Infrastructure changes

  • The Pitivi team migrated from Bugzilla to Phabricator for bug/task tracking.
  • We now have a script to set up the development environment from the latest daily bundle. This hybrid approach makes it very easy for new developers to start hacking on Pitivi’s Python side without needing to build the rest.
  • It was difficult for us to keep using Dogtail, so we moved all the integration tests to GstValidate.
  • Some of you have suggested that we compress the all-in-one bundles using XZ, and so we did. Our packages are now about 20% lighter than before, so they will take less time to download (which is nice if you’re using the dailies to test).
  • With some help from Jeffrey Schroeder, I have finally upgraded our MediaWiki instance to the latest stable release. We hadn’t upgraded it in four years (thankfully it was fairly locked down so we did not run into trouble), in big part because it was not version-controlled and thus was a pain in the butt to manage. I should be able to do a better job at keeping it up-to-date from now on.

Where do we stand on the fundraiser?

In terms of donations, we fell short of the fundraiser’s first milestone. Therefore, instead of working full-time and burning through the money in a matter of a few months, Thibault and Mathieu decided to work at a slower rate while simultaneously providing professional multimedia consulting services to put food on the table.

Nonetheless, they eventually reached the point where they had worked through all the donated funds, and so they continued in their free time. The GTK+ timeline and GtkSink work, for example, is one of the big architectural changes that Thibault had to do in his spare time, without any monetary compensation.

Now is still a good time to let others know and ask those around you to donate! We appreciate it.

A call for ruthless testing

As it is much more stable already, we recommend that all users upgrade to Pitivi 0.95 and help us find any remaining issues. Until this release trickles down into distributions, you can download our all-in-one bundle and try out 0.95, right here and now. Enjoy!

You’re in luck: I already spent a lot of my (very limited) spare time testing and investigating the most serious issues. In fact, one of the reasons why it’s been so long since the last release is that I have been Thibault’s worst nightmare for months (there’s a reason why my name strikes fear in the hearts of GStreamer developers):


Every two weeks or so, Thibault would come to me and say, “Hey look, I fixed all your bugs, how about we release now?”. I would then spend a day testing and return with ten more bugs. Then he would fix them all, and I would find ten other bugs in different areas. Then he would fix them, and I would find another batch that I couldn’t test last time. And so on and so forth, from spring to autumn. For example, these are the bugs I’ve found just for the GTK Timeline. Can’t believe I haven’t killed that poor guy.

Now that the blocker issues are solved, I’m quite impressed with how much more reliable this version of Pitivi is shaping up to be. But hey, we’re not perfect; maybe there are bugs we’ve overlooked, so please grab 0.95 and try to break it as hard as you can, reporting the issues you find (especially freezes, crashes, incorrect output, etc.). We want it to be solid. Go wild.

office space printer

Thank you for reading, commenting and sharing! This blog post is part of a series of articles tracking progress made with work related to the 2014 Pitivi fundraiser. Researching and writing quality articles takes a lot of time, so please be patient and enjoy the ride! 😉
  1. An update from the 2014 summer battlefront
  2. The 0.94 release
  3. The War Against Deadlocks, part 1: The story of our new thread-safe mixing elements reimplementation
  4. The War Against Deadlocks, part 2: GNonLin's reincarnation
  5. The 0.95 release, the GTK+ timeline and sink
  6. Measuring quality/reliability through time (clarifying what gst-validate is)
  7. Our all-in-one binaries building infrastructure, and why it matters
  8. Samples, “scenario” files and you: how you can help us reproduce (almost) any bug very easily
  9. The 1.0 release and closure of the fundraiser

by nekohayo at November 19, 2015 10:42 PM

November 09, 2015

Andy Wingo: embracing conway's law

(Andy Wingo)

Most of you have heard of "Conway's Law", the pithy observation that the structure of things that people build reflects the social structure of the people that build them. The extent to which there is coordination or cohesion in a system as a whole reflects the extent to which there is coordination or cohesion among the people that make the system. Interfaces between components made by different groups of people are the most fragile pieces. This division goes down to the inner life of programs, too; inside it's all just code, but when a program starts to interface with the outside world we start to see contracts, guarantees, types, documentation, fixed programming or binary interfaces, and indeed faults as well: how many bug reports end up in an accusation that team A was not using team B's API properly?

If you haven't heard of Conway's law before, well, welcome to the club. Inneresting, innit? And so thought I until now; a neat observation with explanatory power. But as aspiring engineers we should look at ways of using these laws to build systems that take advantage of their properties.

in praise of bundling

Most software projects depend on other projects. Using Conway's law, we can restate this to say that most people depend on things built by other people. The Chromium project, for example, depends on many different libraries produced by many different groups of people. But instead of requiring the user to install each of these dependencies, or even requiring the developer that works on Chrome to have them available when building Chrome, Chromium goes a step further and just includes its dependencies in its source repository. (The mechanism by which it does this isn't a direct inclusion, but since it specifies the version of all dependencies and hosts all code on Google-controlled servers, it might as well be.)

Downstream packagers like Fedora bemoan bundling, but they ignore the ways in which it can produce better software at lower cost.

One way bundling can improve software quality is by reducing the algorithmic complexity of product configurations, when expressed as a function of its code and of its dependencies. In Chromium, a project that bundles dependencies, the end product is guaranteed to work at all points in the development cycle because its dependency set is developed as a whole and thus uniquely specified. Any change to a dependency can be directly tested against the end product, and reverted if it causes regressions. This is only possible because dependencies have been pulled into the umbrella of "things the Chromium group is responsible for".

Some dependencies are automatically pulled into Chrome from their upstreams, like V8, and some aren't, like zlib. The difference is essentially social, not technical: the same organization controls V8 and Chrome and so can set the appropriate social expectations and even revert changes to upstream V8 as needed. Of course the goal of the project as a whole has technical components and technical considerations, but they can only be acted on to the extent they are socially reified: without a social organization of the zlib developers into the Chromium development team, Chromium has no business automatically importing zlib code, because the zlib developers aren't testing against Chromium when they make a release. Bundling zlib into Chromium lets the Chromium project buffer the technical artifacts of the zlib developers through the Chromium developers, thus transferring responsibility to Chromium developers as well.

Conway's law predicts that the interfaces between projects made by different groups of people are the gnarliest bits, and anyone that has ever had to maintain compatibility with a wide range of versions of upstream software has the scar tissue to prove it. The extent to which this pain is still present in Chromium is the extent to which Chromium, its dependencies, and the people that make them are not bound tightly enough. For example, making a change to V8 which results in a change to Blink unit tests is a three-step dance: first you commit a change to Blink giving Chromium a heads-up about new results being expected for the particular unit tests, then you commit your V8 change, then you commit a change to Blink marking the new test result as being the expected one. This process takes at least an hour of human interaction time, and about 4 hours of wall-clock time. This pain would go away if V8 were bundled directly into Chromium, as you could make the whole change at once.

forking considered fantastic

"Forking" sometimes gets a bad rap. Let's take the Chromium example again. Blink forked from WebKit a couple years ago, and things have been great in both projects since then. Before the split, the worst parts in WebKit were the abstraction layers that allowed Google and Apple to use the dependencies they wanted (V8 vs JSC, different process models models, some other things). These abstraction layers were the reified software artifacts of the social boundaries between Google and Apple engineers. Now that the social division is gone, the gnarly abstractions are gone too. Neither group of people has to consider whether the other will be OK with any particular change. This eliminates a heavy cognitive burden and allows both projects to move faster.

As a pedestrian counter-example, Guile uses the libltdl library to abstract over the dynamic loaders of different operating systems. (Already you are probably detecting the Conway's law keywords: uses, library, abstract, different.) For years this library has done the wrong thing while trying to do the right thing, ignoring .dylib's but loading .so's on Mac (or vice versa, I can't remember), not being able to specify soversions for dependencies, throwing a stat party every time you load a library because it grovels around for completely vestigial .la files, et cetera. We sent some patches some time ago but the upstream project is completely unmaintained; the patches haven't been accepted, users build with whatever they have on their systems, and though we could try to take over upstream it's a huge asynchronous burden for something that should be simple. There is a whole zoo of concepts we don't need here and Guile would have done better to include libltdl into its source tree, or even to have forgone libltdl and just written our own thing.

Though there are costs to maintaining your own copy of what started as someone else's work, people who yammer on against forks usually fail to recognize their benefits. I think they don't realize that for a project to be technically cohesive, it needs to be socially cohesive as well; anything else is magical thinking.

not-invented-here-syndrome considered swell

Likewise there is an undercurrent of smarmy holier-than-thou moralism in some parts of the programming world. These armchair hackers want you to believe that you are a bad person if you write something new instead of building on what has already been written by someone else. This too is magical thinking that comes from believing in the fictional existence of a first-person plural, that there is one "we" of "humanity" that is making linear progress towards the singularity. Garbage. Conway's law tells you that things made by different people will have different paces, goals, constraints, and idiosyncrasies, and the impedance mismatch between you and them can be a real cost.

Sometimes these same armchair hackers will shake their heads and say "yeah, project Y had so much hubris and ignorance that they didn't want to bother understanding what X project does, and they went and implemented their own thing and made all their own mistakes." To which I say, so what? First of all, who are you to judge how other people spend their time? You're not in their shoes and it doesn't affect you, at least not in the way it affects them. An armchair hacker rarely understands the nature of value in an organization (commercial or no). People learn more when they write code than when they use it or even when they read it. When your product has a problem, where will you find the ability to fix it? Will you file a helpless bug report or will you be able to fix it directly? Assuming your software dependencies model some part of your domain, are you sure that their models are adequate for your purpose, with the minimum of useless abstraction? If the answer is "well, I'm sure they know what they're doing" then if your organization survives a few years you are certain to run into difficulties here.

One example. Some old-school Mozilla folks still gripe at Google having gone and created an entirely new JavaScript engine, back in 2008. This is incredibly naïve! Google derives immense value from having JS engine expertise in-house and not having to coordinate with anyone else. This control also gives them power to affect the kinds of JavaScript that gets written and what goes into the standard. They would not have this control if they decided to build on SpiderMonkey, and if they had built on SM, they would have forked by now.

As a much more minor, insignificant, first-person example, I am an OK compiler hacker now. I don't consider myself an expert but I do all right. I got here by making a bunch of mistakes in Guile's compiler. Of course it helps if you get up to speed using other projects like V8 or what-not, but building an organization's value via implementation shouldn't be discounted out-of-hand.

Another point is that when you build on someone else's work, especially if you plan on continuing to have a relationship with them, you are agreeing up-front to a communications tax. For programmers this cost is magnified by the degree to which asynchronous communication disrupts flow. This isn't to say that programmers can't or shouldn't communicate, of course, but it's a cost even in the best case, and a cost that can be avoided by building your own.

When you depend on a project made by a distinct group of people, you will also experience churn or lag drag, depending on whether the dependency changes faster or slower than your project. Depending on LLVM, for example, means devoting part of your team's resources to keeping up with the pace of LLVM development. On the other hand, depending on something more slow-moving can make it more difficult to work with upstream to ensure that the dependency actually suits your use case. Again, both of these drag costs are magnified by the asynchrony of communicating with people that probably don't share your goals.

Finally, for projects that aim to ship to end users, depending on people outside your organization exposes you to risk. When a security-sensitive bug is reported on some library that you use deep in your web stack, who is responsible for fixing it? If you are responsible for the security of a user-facing project, there are definite advantages for knowing who is on the hook for fixing your bug, and knowing that their priorities are your priorities. Though many free software people consider security to be an argument against bundling, I think the track record of consumer browsers like Chrome and Firefox is an argument in favor of giving power to the team that ships the product. (Of course browsers are terrifying security-sensitive piles of steaming C++! But that choice was made already. What I assert here is that they do well at getting security fixes out to users in a timely fashion.)

to use a thing, join its people

I'm not arguing that you as a software developer should never use code written by other people. That is silly and I would appreciate if commenters would refrain from this argument :)

Let's say you have looked at the costs and the benefits and you have decided to, say, build a browser on Chromium. Or re-use pieces of Chromium for your own ends. There are real costs to doing this, but those costs depend on your relationship with the people involved. To minimize your costs, you must somehow join the community of people that make your dependency. By joining yourself to the people that make your dependency, Conway's law predicts that the quality of your product as a whole will improve: there will be fewer abstraction layers as your needs are taken into account to a greater degree, your pace will align with the dependency's pace, and colleagues at Google will review for you because you are reviewing for them. In the case of Opera, for example, I know that they are deeply involved in Blink development, contributing significantly to important areas of the browser that are also used by Chromium. We at Igalia do this too; our most successful customers are those who are able to work the most closely with upstream.

On the other hand, if you don't become part of the community of people that makes something you depend on, don't be surprised when things break and you are left holding both pieces. How many times have you heard someone complain that "project A removed an API I was using"? Maybe upstream didn't know you were using it. Maybe they knew about it, but you were not a user group they cared about; to them, you had no skin in the game.

Foundations that govern software projects are an anti-pattern in many ways, but they are sometimes necessary, born from the need for mutually competing organizations to collaborate on a single project. Sometimes the answer for how to be able to depend on technical work from others is to codify your social relationship.

hi haters

One note before opening the comment flood: I know. You can't control everything. You can't be responsible for everything. One way out of the mess is just to give up, cross your fingers, and hope for the best. Sure. Fine. But know that there is no magical first-person-plural; Conway's law will apply to you and the things you build. Know what you're actually getting when you depend on other peoples' work, and know what you are paying for it. One way or another, pay for it you must.

by Andy Wingo at November 09, 2015 01:48 PM

November 06, 2015

Bastien Nocera: Gadget reviews

(Bastien Nocera) Not that I'm really running after more gadgets, but sometimes, there is a need that could only be soothed through new hardware.

Bluetooth UE roll

Got this for my wife, to play music when staying out on the quays of the Rhône, playing music in the kitchen (from a phone or computer), or when she's at the photo lab.

It works well with iOS, Mac OS X and Linux. It's very easy to use: whether it's paired or connected is completely obvious, and charging doesn't need special cables (USB!).

I'll need to borrow it to add battery reporting for those devices though. You can find a full review on Ars Technica.

Sugru (!)

Not a gadget per se, but I bought some, used it to fix up a bunch of cables, repair some knickknacks, and do some DIY. Highly recommended, especially given the current price of their starter packs.

15-pin to USB Joystick adapter

It's apparently from Ckeyin, but you'll find the exact same box from other vendors. Made my old Gravis joystick work, in the hope that I can make it work with DOSBox and my 20-year-old copy of X-Wing vs. Tie Fighter.

Microsoft Surface ARC Mouse

That one was given to me, for testing, works well with Linux. Again, we'll need to do some work to report the battery. I only ever use it when travelling, as the batteries last for absolute ages.

Logitech K750 keyboard

Bought this nearly two years ago, and this is one of my best buys. My desk is close to a window, so even though it's a wireless solar keyboard, I never need to change the batteries or think about charging it. GNOME also supports showing the battery status in the Power panel.

Logitech T650 touchpad

Got this one on sale (17€), to replace my Logitech trackball (one of its buttons broke...). It works great, and can even get you shell gestures when run in Wayland. I'm certainly happy to have one less cable running across my desk, and it reuses the same dongle as the keyboard above.

If you use more than one device, you might be interested in this bug to make it easier to support multiple Logitech "Unifying" devices.

ClicLite charger

Got this from a design shop in Berlin. It should probably have been cheaper than what I paid for it, but it's certainly pretty useful. It charges my phone by about 20%, it's small, and it charges up at the same time as my keyboard (above).

Dell S2340T

Bought about 2 years ago, to replace the monitor I had in an all-in-one (Lenovo all-in-ones, never buy that junk).

Nowadays, the resolution would probably be considered a bit on the low side, and the touchscreen mesh would show for hardcore photography work. It's good enough for videos though and the speaker reaches my sitting position.

It's only been possible to use the USB cable for graphics for a couple of months, and it's probably not what you want to lower CPU usage on your machine, but it works for Fedora with this RPM I made. Talk to me if you can help get it into RPMFusion.

Shame about the huge power brick, but a little bonus for the built-in Ethernet adapter.

Surface 3

This is probably the biggest ticket item. Again, I didn't pay full price for it, thanks to coupons, rewards, and all. The work to getting Linux and GNOME to play well with it is still ongoing, and rather slow.

I won't comment too much on Windows either; I'll rather judge the hardware as a preview of what it should be like once Linux runs on it.

I really enjoy the industrial design, maybe even the slanted edges, but one has to wonder why they made the USB power adapter not sit flush with the edge when plugged in.

I've used it a couple of times (under Windows, sigh) to read Pocket as I do on my iPad 1 (yes, the first one), or stream videos to the TV using Flash, without the tablet getting hot, or too slow either. I also like the fact that there's a real USB(-A) port that's separate from the charging port. The micro SD card port is nicely placed under the kickstand, hard enough to reach to avoid it escaping the tablet when lugged around.

The keyboard, given the thickness of it, and the constraints of using it as a cover, is good enough for light use, when travelling for example, and the layout isn't as awful as on, say, a Thinkpad Carbon X1 2nd generation. The touchpad is a bit on the small side though it would have been hard to make it any bigger given the cover's dimensions.

I would however recommend getting a Surface Pro if you want things to work right now (or at least soon). The second-to-last version, the Surface Pro 3, is probably a good target.

by Bastien Nocera at November 06, 2015 09:00 AM

November 03, 2015

Andy Wingo: two paths, one peak: a view from below on high-performance language implementations

(Andy Wingo)

Ohmigod it's November. Time flies amirite. Eck-setra. These are not actually my sentiments but sometimes I do feel like a sloth or a slow loris, grasping out at quarter-speed. Once I get a hold it's good times, but hoo boy. The tech world churns and throws up new languages and language implementations every year and how is it that in 2015, some 20 years after the project was started, Guile still doesn't do native compilation?

Though I've only been Guiling for the last 10 years or so, this article aims to plumb those depths; and more than being an apology or a splain I want to ponder the onward journey from the here and the now. I was going to write something like "looking out from this peak to the next higher peak" but besides being a cliché that's exactly what I don't mean to do. In Guile performance work I justify my slow loris grip by a mistrust of local maxima. I respect and appreciate the strategy of going for whatever gains you the most in the short term, especially in a commercial context, but with a long view maybe this approach is a near win but a long lose.

That's getting ahead of myself; let's get into this thing. We started byte-compiling Guile around 2008 or so. Guile is now near to native compilation. Where are we going with this thing?

short term: template jit

The obvious next thing to do for Guile would be to compile its bytecodes to machine code using a template JIT. This strategy just generates machine code for each bytecode instruction without regard to what comes before or after. It's dead simple. Guile's bytecode is quite well-suited to this technique, even, in the sense that an instruction doesn't correspond to much code. As Guile has a register-based VM, its instructions will also specialize well against their operands when compiled to native code. The only global state that needs to be carried around at runtime is the instruction pointer and the stack pointer, both of which you have already because of how modern processors work.
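To make the idea concrete, here is a toy template "JIT" in Python; the bytecodes and pseudo-assembly templates are made up for illustration and bear no relation to Guile's actual instruction set. Compilation is a single pass that pastes a fixed template per instruction, specialized only against that instruction's operands:

# Toy template JIT: each bytecode maps to a fixed chunk of pseudo
# machine code; compilation just concatenates templates.
TEMPLATES = {
    "add": "mov  rax, [sp+{a}]\nadd  rax, [sp+{b}]\nmov  [sp+{dst}], rax",
    "mov": "mov  rax, [sp+{a}]\nmov  [sp+{dst}], rax",
    "ret": "mov  rax, [sp+{a}]\nret",
}

def template_compile(bytecode):
    chunks = []
    for op, operands in bytecode:
        # No analysis of neighbouring instructions: each template is
        # emitted in isolation, which is what makes this dead simple.
        chunks.append(TEMPLATES[op].format(**operands))
    return "\n".join(chunks)

program = [
    ("add", {"dst": 2, "a": 0, "b": 1}),
    ("ret", {"a": 2}),
]
print(template_compile(program))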

Incidentally I have long wondered why CPython doesn't have a template JIT. Spiritually I am much more in line with the PyPy project but if I were a CPython maintainer, I would use a template JIT on the bytecodes I already have. Using a template JIT preserves the semantics of bytecode, including debugging and introspection. CPython's bytecodes are at a higher level than Guile's though, with many implicit method/property lookups (at least the last time I looked at them), and so probably you would need to add inline caches as well; but no biggie. Why aren't the CPython people doing this? What is their long-term perf story anyway -- keep shovelling C into the extension furnace? Lose to PyPy?

In the case of Guile we are not yet grasping in this direction because we don't have (direct) competition from PyPy :) But also there are some problems with a template JIT. Once you internalize the short-term mentality of a template JIT you can get stuck optimizing bytecode, optimizing template JIT compilation, and building up a baroque structure that by its sheer mass may prevent you from ever building The Right Thing. You will have to consider how a bytecode-less compilation pipeline interacts with not only JITted code but also bytecode, because it's a lose to do a template JIT for code that is only executed once.

This sort of short-term thinking is what makes people also have to support on-stack replacement (OSR), also known as hot loop transfer. The basic idea is that code that executes often has to be JITted to go fast, but you can't JIT everything because that would be slow. So you wait to compile a function until it's been called a few times; fine. But with loops it could be that a function is called just once but a loop in the function executes many times. You need to be able to "tier up" to the template JIT from within a loop. This complexity is needed at the highest performance level, but if you choose to do a template JIT you're basically committing to implementing OSR early on.

Additionally the implementation of a template JIT compiler is usually a bunch of C or C++ code. It doesn't make sense to include a template JIT in a self-hosted system that compiles to bytecode, because it would be sad to have the JIT not be written in the source language (Guile Scheme in our case).

Finally in Scheme we have tail-call and delimited continuation considerations. Currently in Guile all calls happen in the Guile bytecode interpreter, which makes tail calls easy -- the machine frame stays the same and we just have to make a tail call on the Scheme frame. This is fine because we don't actually control the machine frame (the C frame) of the bytecode interpreter itself -- the C compiler just does whatever it does. But how to tail call between the bytecode interpreter and JIT-compiled code? You'd need to add a trampoline beneath both the C interpreter and any entry into compiled code that would trampoline to the other implementation, depending on how the callee "returns". And how would you capture stack slices with delimited continuations? It's possible (probably -- I don't know how to reinstate a delimited continuation with both native and interpreted frames), but something of a headache, and is it really necessary?

if you compile ahead-of-time anyway...

The funny thing about CPython is that, like Guile, it is actually an ahead-of-time compiler. While the short-term win would certainly be to add a template JIT, because the bytecode is produced the first time a script is run and cached thereafter, you might as well compile the bytecode to machine code ahead-of-time too and skip the time overhead of JIT compilation on every run. In a template JIT, the machine code is only a function of the bytecode (assuming the template JIT doesn't generate code that depends on the shape of the heap).

Compiling native code ahead of time also saves on memory usage, because you can use file-backed mappings that can be lazily paged in and shared between multiple processes, and when these pages are in cache that also translates to faster startup.

But if you're compiling bytecode ahead of time to native code, what is the bytecode for?

(not) my beautiful house

At some point you reach a state where you have made logical short-term decisions all the way and you end up with vestigial organs of WTF in your language runtime. Bytecode, for example. A bytecode interpreter written in C. Object file formats for things you don't care about. Trampolines. It's time to back up and consider just what it is that we should be building.

The highest-performing language implementations will be able to compile together the regions of code in which a program spends most of its time. Ahead-of-time compilers can try to predict these regions, but you can't always know what the behavior of a program will be. A program's run-time depends on its inputs, and program inputs are late-bound.

Therefore these highest-performing systems will use some form of adaptive optimization to apply run-time JIT compilation power on whatever part of a program turns out to be hot. This is the peak performance architecture, and indeed in the climb to a performant language implementation, there is but one peak that I know of. The question becomes, how to get there? What path should I take, with the priorities I have and the resources available to me, which lets me climb the farthest up the hill while always leaving the way clear to the top?

guile's priorities

There are lots of options here, and instead of discussing the space as a whole I'll just frame the topic with some bullets. Here's what I want out of Guile:

  1. The project as a whole should be pleasing to hack on. As much of the system as possible should be written in Scheme, as little as possible in C or assembler, and dependencies on outside projects should be minimized.

  2. Guile users should be able to brag about startup speed to their colleagues. We are willing to trade away some peak throughput for faster startup, if need be.

  3. Debuggability is important -- a Guile hacker will always want to be able to get stack traces with actual arguments and local variable values, unless they stripped their compiled Guile binaries, which should be possible as well. But we are willing to give up some debuggability to improve performance and memory use. In the same way that a tail call replaces the current frame in its entirety, we're willing to lose values of dead variables in stack frames that are waiting on functions to return. We're also OK with other debuggability imprecisions if the performance gains are good enough. With macro expansion, Scheme hackers expect a compilation phase; spending time transforming a program via ahead-of-time compilation is acceptable.

Call it the Guile Implementor's Manifesto, or the manifesto of this implementor at least.

beaucoup bucks

Of course if you have megabucks and ace hackers, then you want to dial back on the compromises: excellent startup time but also source-level debugging! The user should be able to break on any source position: the compiler won't even fold 1 + 1 to 2. But to get decent performance you need to be able to tier up to an optimizing compiler soon, and soon in two senses: soon after starting the program, but also soon after starting your project. It's an intimidating thing to build when you are just starting on a language implementation. You need to be able to tier down too, at least for debugging and probably for other reasons too. This strategy goes in the right direction, performance-wise, but it's a steep ascent. You need experienced language implementors, and they are not cheap.

The usual strategy for this kind of implementation is to write it all in C++. The latency requirements are too strict to do otherwise. Once you start down this road, you never stop: your life as an implementor is that of a powerful, bitter C++ wizard.

The PyPy people have valiantly resisted this trend, writing their Python implementation in Python itself, but they concede to latency by compiling their "translated interpreter" into C, which then obviously can't itself be debugged as Python code. It's self-hosting, but staged into C. Ah well. Still, a most valiant, respectable effort.

This kind of language implementation usually has bytecode, as it's a convenient reification of the source semantics, but it doesn't have to. V8 is a good counterexample, at least currently: it treats JavaScript source code as the canonical representation of program semantics, relying on its ability to re-parse source text to an AST in the same way every time as needed. V8's first-tier implementation is actually a simple native code compiler, generated from an AST walk. But things are moving in the bytecode direction in the V8 world, reluctantly, so we should consider bytecode as the backbone of the beaucoup-bucks language implementation.

shoestring slim

If you are willing to relax on source-level debugging, as I am in Guile, you can simplify things substantially. You don't need bytecode, and you don't need a template JIT; in the case of Guile, probably the next step in Guile's implementation is to replace the bytecode compiler and interpreter with a simple native code compiler. We can start with the equivalent of a template JIT, but without the bytecode, and without having to think about the relationship between compiled and (bytecode-)interpreted code. (Guile still has a traditional tree-oriented interpreter, but it is actually written in Scheme; that is a story for another day.)

There's no need to stop at a simple compiler, of course. Guile's bytecode compiler is already fairly advanced, with interprocedural optimizations like closure optimization, partial evaluation, and contification, as well as the usual loop-invariant code motion, common subexpression elimination, scalar replacement, unboxing, and so on. Add register allocation and you can have quite a fine native compiler, and you might even beat the fabled Scheme compilers on the odd benchmark. They'll call you plucky: high praise.

There's a danger in this strategy though, and it's endemic in the Scheme world. Our compilers are often able to do heroic things, but only on the kinds of programs they can fully understand. We as Schemers bend ourselves to the will of our compilers, writing only the kinds of programs our compilers handle well. Sometimes we're scared to fold, preferring instead to inline the named-let iteration manually to make sure the compiler can do its job. We fx+ when we should +; we use tagged vectors when we should use proper data structures. This is déformation professionnelle, as the French would say. I gave a talk at last year's Scheme workshop on this topic. PyPy people largely don't have this problem, for example; their language implementation is able to see through abstractions at run-time to produce good code, using adaptive optimization instead of ahead-of-time trickery.

So, an ahead-of-time compiler is perhaps a ridge, but it is not the peak. No amount of clever compilation will remove the need for an adaptive optimizer, and indeed too much cleverness will stunt the code of your users. The task becomes, how to progress from a decent AOT native compiler to a system with adaptive optimization?

Here, as far as I know, we have a research problem. In Guile we have mostly traced the paths of history, re-creating things that existed before. As Goethe said, quoted in the introduction to The Joy of Cooking: "That which thy forbears have bequeathed to thee, earn it anew if thou wouldst possess it." But finally we find here something new, or new-ish: I don't know of good examples of AOT compilers that later added adaptive optimization. Do you know of any, dear reader? I would be delighted to know.

In the absence of a blazed trail to the top, what I would like to do is to re-use the AOT compiler to do dynamic inlining. We might need to collect type feedback as well, though inlining is the more important optimization. I think we can serialize the compiler's intermediate representation into a special section in the ELF object files that Guile produces. A background thread or threads can monitor profiling information from main threads. If a JIT thread decides two functions should be inlined, it can deserialize compiler IR and run the standard AOT compiler. We'd need a bit of mutability in the main program in which to inject such an optimization; an inline cache would do. If we need type feedback, we can make inline caches do that job too.

All this is yet a ways off. The next step for Guile, after the 2.2 release, is a simple native compiler, then register allocation. Step by step.

but what about llvmmmmmmmmmmmmm

People always ask about LLVM. It is an excellent compiler backend. It's a bit big, and maybe you're OK with that, or maybe not; whatever. Using LLVM effectively depends on your ability to deal with churn and big projects. But if you can do that, swell, you have excellent code generation. But how does it help you get to the top? Here things are less clear. There are a few projects using LLVM effectively as a JIT compiler, but that is a very recent development. My hubris, desire for self-hosting, and lack of bandwidth for code churn makes it so that I won't use LLVM myself but I have no doubt that a similar strategy to that which I outline above could work well for LLVM. Serialize the bitcode into your object files, make it so that you can map all optimization points to labels in that bitcode, and you have the ability to do some basic dynamic inlining. Godspeed!


If you're interested, I gave a talk a year ago on the state of JavaScript implementations, and how they all ended up looking more or less the same. This common architecture was first introduced by Self; language implementations in this category include HotSpot and any of the JavaScript implementations.

Some notes on how PyPy produces interpreters from RPython.

and so I bid you good night

Guile's compiler has grown slowly, in tow of my ballooning awareness of ignorance and more slowly inflating experience. Perhaps we could have done the native code compilation thing earlier, but I am happy with our steady progress over the last five years or so. We had to scrap one bytecode VM and one or two compiler intermediate representations, and though that was painful I think we've done pretty well as far as higher-order optimizations go. If we had done native compilation earlier, I can't but think the inevitably wrong decisions we would have made on the back-end would have prevented us from having the courage to get the middle-end right. As it is, I see the way to the top, through the pass of ahead-of-time compilation and thence to a dynamic inliner. It will be some time before we get there, but that's what I signed up for :) Onward!

by Andy Wingo at November 03, 2015 11:47 PM

October 30, 2015

GStreamer: GStreamer Core, Plugins, RTSP Server, Editing Services, Python 1.6.1 stable release


The GStreamer team is proud to announce the first bugfix release in the stable 1.6 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it is safe to update from 1.6.0. For a full list of bugfixes see Bugzilla.

See the GStreamer website for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be provided separately by the GStreamer project.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, or gst-editing-services, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, or gst-editing-services.

October 30, 2015 03:00 PM

Bastien Nocera: C.H.I.P. flashing on Fedora

(Bastien Nocera) You might have heard of the C.H.I.P., the $9 computer. After contributing to their Kickstarter, and with no intention of hacking on more kernel code than absolutely necessary, I requested the "final" devices, for when chumps like me can read loads of docs and get accessories for it easily.

Turns out that our old friend the Realtek 8723BS chip is the Wi-Fi/Bluetooth chip in the nano computer. NextThingCo got in touch and sent me a couple of early devices (as they did to the "Kernel hacker" backers), with their plan being to upstream all the drivers and downstream hacks into the upstream kernel.

Before being able to hack on the kernel driver though, we'll need to get some software on it, and find a way to access it. The docs website has instructions on how to flash the device using Ubuntu, but we don't use that here.

You'll need a C.H.I.P., a jumper cable, and the USB cable you usually use for charging your phone/tablet/e-book reader.

First, let's install a few necessary packages:

dnf install -y sunxi-tools uboot-tools python3-pyserial moserial

You might need other things, like git and gcc, but I kind of expect you to already have that installed if you're software hacking. You will probably also need to get sunxi-tools from Koji to get a new enough version that will support the C.H.I.P.

Get your jumper cable out, and make the connection as per the NextThingCo docs. I've copied the photo from the docs to keep this guide stand-alone.

Let's install the tools, modified to work with Fedora's newer, closer-to-upstream version of the sunxi-tools.

$ git clone
$ cd CHIP-tools
$ make
$ sudo ./ -d

If you've followed the instructions, you haven't plugged in the USB cable yet. Plug in the USB cable now, to the micro USB power supply on one end, and to your computer on the other.

You should see the little "OK" after the "waiting for fel" message:

== upload the SPL to SRAM and execute it ==
waiting for fel........OK

At this point, you can unplug the jumper cable, something not mentioned in the original docs. If you don't do that, when the device reboots, it will reboot in flashing mode again, and we obviously don't want that.

At this point, you'll just need to wait a while. It will verify the installation when done, and turn off the device. Unplug, replug, and launch moserial as root. You should be able to access the C.H.I.P. through /dev/ttyACM0 with a baudrate of 115200. The root password is "chip".
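Alternatively, since python3-pyserial was installed in the first step, you can poke at the serial console from a script instead of moserial. A minimal sketch, assuming the same /dev/ttyACM0 device and settings as above:

import serial  # from the python3-pyserial package installed earlier

# Same parameters as the moserial setup: the C.H.I.P. shows up as a
# CDC-ACM serial device at 115200 baud.
with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
    port.write(b"\n")  # wake up the login prompt
    print(port.read(256).decode(errors="replace"))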

Obligatory screenshot of our new computer:

Next step, testing out our cleaned up Realtek driver, Fedora on the C.H.I.P., and plenty more.

by Bastien Nocera at October 30, 2015 02:52 PM

Arun Raghavan: PulseAudio 7.1 is out

We just rolled out a minor bugfix release. Quick changelog:

  • Fix a crasher when using srbchannel
  • Fix a build system typo that caused symlinks to turn up in /
  • Make Xonar cards work better
  • Other minor bug fixes and improvements

More details on the mailing list.

Thanks to everyone who contributed with bug reports and testing. What isn’t generally visible is that a lot of this happens behind the scenes downstream on distribution bug trackers, IRC, and so forth.

by Arun at October 30, 2015 01:31 PM

October 29, 2015

Andy Wingo: type folding in guile

(Andy Wingo)

A-hey hey hey, my peeps! Today's missive is about another optimization pass in Guile that we call "type folding". There's probably a more proper name for this, but for the moment we go with "type folding" as it's shorter than "abstract constant propagation, constant folding, and branch folding based on flow-sensitive type and range analysis".

on types

A word of warning to the type-system enthusiasts among my readers: here I'm using "type" in the dynamic-languages sense, to mean "a property about a value". For example, whether a value is a vector or a pair is a property of that value. I know that y'all use that word for other purposes, but there are other uses that do not falute so highly, and it's in the more pedestrian sense that I'm interested here.

To back up a bit: what are the sources of type information in dynamic languages? In Guile, there are three ways the compiler can learn about a value's type.

One source of type information is the compiler's knowledge of the result types of expressions in the language, especially constants and calls to the language's primitives. For example, in the Scheme definition (define y (vector-length z)), we know that y is a non-negative integer, and we probably also know a maximum value for y too, given that vectors have a maximum size.

Conditional branches with type predicates also provide type information. For example, consider this Scheme expression:

(lambda (x)
  (if (pair? x)
      (car x)
      (error "not a pair" x)))

Here we can say that at the point of the (car x) expression, x is definitely a pair. Conditional branches are interesting because they add a second dimension to type analysis. The question is no longer "what is the type of all variables", but "what is the type of all variables at all points in the program".

Finally, we have the effect of argument type checks in function calls. For example in the (define y (vector-length z)) definition, after (vector-length z) has been evaluated, we know that z is indeed a vector, because if it weren't, then the primitive call would raise an exception.

In summary, the information that we would like to have is what type each variable has at each program point (label). This information comes from where the variables are defined (the first source of type information), conditional branches and control-flow joins (the second source), and variable use sites that imply type checks (the third). It's a little gnarly but in essence it's a classic flow analysis. We treat the "type" of a variable as a set of possible types. A solution to the flow equations results in a set of types for each variable at each label. We use the intmap data structures to share space between the solution at different program points, resulting in an O(n log n) space complexity.

In Guile we also solve for the range of values a variable may take, at the same time as solving for type. I started doing this as part of the Compost hack a couple years ago, where I needed to be able to prove that the operand to sqrt was non-negative in order to avoid sqrt producing complex numbers. Associating range with type turns out to generalize nicely to other data types which may be thought of as having a "magnitude" -- for example a successful (vector-ref v 3) implies that v is at least 4 elements long. Guile can propagate this information down the flow graph, or propagate it in the other way: if we know the vector was constructed as being 10 elements long, then a successful (vector-ref v n) can only mean that n is between 0 and 9.
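To illustrate the range half of this, here is a toy sketch in Python; the plain (lo, hi) interval per variable is a simplification for illustration, not Guile's actual intmap-based solution:

# Toy interval domain for the range part of the analysis.
INF = float("inf")

def narrow_vector_ref(vec_len, idx):
    """A successful (vector-ref v n) proves 0 <= n < length of v.
    Information flows both ways: a known length bounds the index,
    and a known index bounds the length."""
    idx_lo, idx_hi = idx
    len_lo, len_hi = vec_len
    new_idx = (max(idx_lo, 0), min(idx_hi, len_hi - 1))
    new_len = (max(len_lo, idx_lo + 1), len_hi)
    return new_len, new_idx

# v constructed 10 elements long: a successful ref narrows n to [0, 9].
print(narrow_vector_ref((10, 10), (-INF, INF)))  # ((10, 10), (0, 9))

# Conversely, a successful (vector-ref v 3) proves v is >= 4 long.
print(narrow_vector_ref((0, INF), (3, 3)))       # ((4, inf), (3, 3))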

what for the typing of the things

Guile's compiler uses type analysis in a few ways at a few different stages. One basic use is in dead code elimination (DCE). An expression can be eliminated from a program if its value is never used and if it causes no side effects. Guile models side effects (and memory dependencies between expressions) with effects analysis. I quote:

We model four kinds of effects: type checks (T), allocations (A), reads (R), and writes (W). Each of these effects is allocated to a bit. An expression can have any or none of these effects.

In an expression like (vector-ref v n), type analysis may compute that in fact v is indeed a vector and n is an integer, and also that n is within the range of valid indexes of v. In that case we can remove the type check (T) bit from the expression's effects, opening up the expression for DCE.

Getting back to the topic of this article, Guile's "type folding" pass uses type inference in three ways.

The first use of type information is if we determine that, at a given use site, a variable has precisely one type and one value. In that case we can do constant folding over that expression, replacing its uses with its value. For example, let's say we have the expression (define len (vector-length v)). If we know that v is a vector of length 5, we can replace any use of len with the constant, 5. As an implementation detail we actually keep the definition of len in place and let DCE remove it later. We can consider this to be abstract constant propagation: abstract in the sense that it folds over abstract values, represented just as type sets and ranges, and which materializes a concrete value only if it is able to do so. Since ranges propagate through operators as well, it can also be considered as abstract constant folding; the type inference operators act as constant folders.

Another use of type information is in branches. If Guile sees (if (< n (vector-length v)) 1 2) and n and v have the right types and disjoint ranges, then we can fold the test and choose 1 or 2 depending on how the test folds.

Finally type information can enable strength reduction. For example it's a common compiler trick to want to reduce (* n 16) to (ash n 4), but if n isn't an integer this isn't going to work. Likewise, (* n 0) can be 0, 0.0, 0.0+0.0i, something else, or an error, depending on the type of n and whether the * operator has been extended to apply over non-number types. Type folding uses type information to reduce the strength of operations like these, but only where it can prove that the transformation is valid.
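As a toy illustration of that last check, in Python (the expression and type-set representation here are made up for illustration, nothing Guile-specific): the rewrite fires only when inference has proved the operand is an exact integer.

# Strength reduction guarded by inferred types: (* n 16) => (ash n 4)
# is only valid when n is proven to be an exact integer.
def reduce_multiply(expr, types):
    op, a, b = expr
    if (op == "*" and isinstance(b, int) and b > 0
            and b & (b - 1) == 0                  # power of two
            and types.get(a) == {"exact-integer"}):
        return ("ash", a, b.bit_length() - 1)
    return expr

print(reduce_multiply(("*", "n", 16), {"n": {"exact-integer"}}))  # ('ash', 'n', 4)
print(reduce_multiply(("*", "n", 16), {"n": {"real"}}))           # unchanged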

So that's type folding! It's a pretty neat pass that does a few things as once. Code here, and code for the type inference itself here.

type-driven unboxing

Guile uses type information in one other way currently, and that is to determine when to unbox floating-point numbers. The current metric is that whenever an arithmetic operation will produce a floating-point number -- in Scheme parlance, an inexact real -- then that operation should be unboxed, if it has an unboxed counterpart. Unboxed operations on floating-point numbers are advantageous because they don't have to allocate space on the garbage-collected heap for their result. Since an unboxed operation like the fadd floating-point addition operator takes raw floating-point numbers as operands, it also will never cause a type check, unlike the polymorphic add instruction. Knowing that fadd has no effects lets the compiler do a better job at common subexpression elimination (CSE), dead code elimination, loop-invariant code motion, and so on.

To unbox an operation, its operands are unboxed, the operation itself is replaced with its unboxed counterpart, and the result is then boxed. This turns something like:

(+ a b)

into:

(f64->scm (fl+ (scm->f64 a) (scm->f64 b)))

You wouldn't think this would be an optimization, except that the CSE pass can eliminate many of these conversion pairs using its scalar elimination via fabricated expressions pass.
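
To see why, take (+ (* a b) c) where a, b and c are all known to be flonums. Naively unboxing each operation (in the same notation as above) yields:

(f64->scm (fl+ (scm->f64 (f64->scm (fl* (scm->f64 a) (scm->f64 b))))
               (scm->f64 c)))

The inner (scm->f64 (f64->scm ...)) round-trip is a no-op, so it can be collapsed, leaving the intermediate product unboxed:

(f64->scm (fl+ (fl* (scm->f64 a) (scm->f64 b)) (scm->f64 c)))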

A proper flow-sensitive type analysis is what enables sound, effective unboxing. After arithmetic operations have been unboxed, Guile then goes through and tries to unbox loop variables and other variables with more than one definition ("phi" variables, for the elect). It mostly succeeds at this. The results are great: summing a packed vector of 10 million 32-bit floating-point values goes down from 500ms to 130ms, on my machine, with no time spent in the garbage collector. Once we start doing native compilation we should be up to about 5e8 or 10e8 floats per second in this microbenchmark, which is totally respectable and about what gcc's -O0 performance gets.
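
The kernel of that microbenchmark looks something like this (my sketch, using SRFI-4 uniform vectors; the actual benchmark may differ in details):

(define (sum-f32 v)
  (let lp ((i 0) (sum 0.0))    ; sum: every definition is a flonum
    (if (< i (f32vector-length v))
        (lp (+ i 1) (+ sum (f32vector-ref v i)))
        sum)))

Since both definitions of the phi variable sum are inexact reals, the accumulator can live in a raw floating-point register across iterations, boxing only the final result.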


This kind of type inference works well in tight loops, and since that's where many programs spend most of their time, that's a big win. Of course, this situation is also a product of programmers knowing that tight loops are how computers go the fastest, or at least how compilers do best on their code.

Where this approach to type inference breaks down is at function boundaries. There are no higher-order types and no higher-order reasoning, and indeed no function types at all! This is partially mitigated by earlier partial evaluation and contification passes, which open up space in which the type inferrer can work. Method JIT compilers share this weakness with Guile; tracing JIT runtimes like LuaJIT and PyPy largely do not.

summing up

So that's the thing! I was finally motivated to dust off this draft given the recent work on unboxing in a development branch of Guile. Happy hacking, and we promise we'll actually make releases so that you can use it soon :)

by Andy Wingo at October 29, 2015 10:13 PM

October 28, 2015

Arun RaghavanPSA: Breaking webrtc-audio-processing API

I know it's been ages, but I am now working on updating the webrtc-audio-processing library. You might remember this as the code that we split off from the WebRTC codebase to use in the PulseAudio echo cancellation module.

This is basically just the AudioProcessing module, bundled as a standalone library so that we can use the fantastic AEC, AGC, and noise suppression implementation from that code base. For packaging simplicity, I made a copy of the necessary code, and wrote an autotools-based build system around that.

Now since I last copied the code, the library API has changed a bit — nothing drastic, just a few minor cleanups and removed API. This wouldn’t normally be a big deal since this code isn’t actually published as external API — it’s mostly embedded in the Chromium and Firefox trees, probably other projects too.

Since we are exposing a copy of this code as a standalone library, this means that there are two options — we could (a) just break the API, and all dependent code needs to be updated to be able to use the new version, or (b) write a small wrapper to try to maintain backwards compatibility.

I’m inclined to just break API and release a new version of the library which is not backwards compatible. My rationale for this is that I’d like to keep the code as close to what is upstream as possible, and over time it could become painful to maintain a bunch of backwards-compatibility code.

A nicer solution would be to work with upstream to make it possible to build the AudioProcessing module as a standalone library. While the folks upstream seemed amenable to the idea when this came up a few years ago, nobody has stepped up to actually do the work for this. In the mean time, a number of interesting features have been added to the module, and it would be good to pull this in to use in PulseAudio and any other projects using this code (more about this in a follow-up post).

So if you’re using webrtc-audio-processing, be warned that the next release will probably break API, and you’ll need to update your code. I’ll try to publish a quick update guide when releasing the code, but if you want to look at the current API, take a look at the current audio_processing.h.

p.s.: If you do use webrtc-audio-processing as a dependency, I’d love to hear about it. As far as I know, PulseAudio is the only user of this library at the moment.

by Arun at October 28, 2015 05:35 PM

October 26, 2015

Jean-François Fortin TamIn which I turn into an international shipping operation

In the months prior to the GUADEC 2015 conference, both the board of directors and the engagement team were kept busy with an above-average workload, so the GNOME Foundation's Annual Report had to wait until things settled a bit. After the core days of GUADEC, we held an all-day meeting among members of the Engagement team (and whoever was interested in joining the fun, really):


Among the topics was the annual report. We devised a plan of action, aiming to publish it by the end of September. Given that we were already near mid-August, that was fairly ambitious; those of you who have worked on magazines or quarterly/annual reports can probably attest to the complexity of the process—now, imagine how tricky it gets with a team of volunteers spread across the world and a short timeframe to do everything. So we spent the next month and a half writing articles (from scratch), revising (multiple times) and laying out the contents of the report. In the end, we managed to publish close to our set deadline.

Oh hi, I heard you like logistics

It was now time for printing and shipping. I spent some time getting quotes from various places in the USA and Canada, then making calculations and scenarios: I was trying to balance the cost of printing, the cost of shipping, convenience and the time to delivery (I was asked, for instance, if we could have copies shipped in time for the Boston Summit).

At the end of my analysis, it became apparent that the cheapest and simplest option was to have me print & ship everything from Montréal. There are a few factors involved:

  • Since 2010 or so, the annual report was typically being printed by GUADEC organizers, which meant the layout was made for A4 paper size.
    • Most USA printing shops can’t handle that (“What do you mean, it’s not US Letter? Pfff. Give me that in fractions of inches and furlongs, we don’t do this metric thing here, punk!”)
    • Good printers in Montréal are used to being in-between the Old and New continents (because that’s Montréal, by definition). My local printer offered to print in A4, saving us the headaches.
  • The Canadian dollar is currently quite cheap compared to the US dollar. With the friendly local printing company, I was able to undercut all the other offers.
  • Shipping to the US and anywhere in Europe is pretty convenient (again due to the geographical location).
  • I had enough time and insanity available to do it.

What's more, the paper and print quality offered by my printer were above average (seriously, the result is stunning). I also trusted them to do the job with care and competence, which is not something that can be said of all printing shops.

With the estimates on hand, I requested authorization from the board for the budget, fronted the money to place the order, and got the reports out of the printing press. Here they are:


Notice how the covers match the color of my turf. Amazing.

Some more pictures, up close:

2015-10-11--15.58.07 2015-10-11--15.59.17 2015-10-11--16.00.08


Interestingly enough, this time we're snail-mailing more reports than we ever shipped in the past… and yet the whole operation will cost less than in previous years (at least 2007, 2008 and 2009—I don't know about the years where it was printed at GUADEC). This is in part because I made an exhaustive list of people and organizations that we should be targeting this year, so the scope is simultaneously broader and more focused. Picture that.


My groundbreaking, high-tech tracking system for letters and report shipping

Since that was clearly too easy so far, I decided to crank it up a notch and add a personal touch:


For almost every report being shipped, I wrote a custom, hand-written letter, using the exquisite art of the Jinhao Five Point Exploding Fountain Pen technique. A handful of recipients that I know well got a short note instead of the longer introduction, and 2-3 people got a printed letter (in the cases where my handwriting would not be compact enough). So, in total, I wrote somewhere between 30 and 40 letters. It was actually pretty enjoyable. No, I don’t have a television, why do you ask?


Pictured: cheating.

The first batch to be shipped looked like this (you can see the letters sitting on top of individual envelopes before being sealed):


Once the most urgent ones were shipped in the first batch, I prepared a second batch, which I’m planning to ship this week:


I have some stock left, so we can look at reaching out to additional prospects if we want. There might be some that I didn’t think of. Are you related to an organization/individual that may be interested in sponsoring GNOME events or getting involved through the GNOME Advisory Board? Send me a quick email and let me know, I’ll see if we can send a printed copy. You can also share the PDF version of the report yourself, if you prefer.

by nekohayo at October 26, 2015 03:23 AM

October 19, 2015

Zeeshan AliLessons on being a good maintainer

(Zeeshan Ali)
What makes for a good maintainer? Although I do not know of a definite answer to this important but vague and controversial question, while maintaining various free software projects over the years I've been learning some lessons on how to strive to be one; some self-taught through experience, some from my colleagues (especially our awesome GNOME designers) and some from my GSoC students.

I wanted to share these lessons with everyone so I arranged a small BoF at GUADEC and thought it would be nice to share it on planet GNOME as well. Some points only apply to UIs here, some only to libraries (or D-Bus service or anything with a public API really) and some to any kind of project. Here goes:

Only accept use cases

There are no valid feature requests without a proper use case behind them. What's a proper use case you ask? In my opinion, it's based on what the user needs, rather than what they desire or think they need. "I want a button X that does Y" is not a use case, let alone a proper one. "I need to accomplish X" is potentially one.

Even when given a proper use case, it does not necessarily follow that you should implement it. You still need to consider the following points before deciding to accept the feature request:
  • How many users do you think this impacts?
  • What's the impact of having this feature on the users who need it?
  • What's the impact on users that do not need that feature?
  • How does the expected number of users who need this feature compare to the number that do not?
  • How much work do you think this will be and do you think you (or anyone else in the team) will have the time and motivation to implement it?

    Get a thicker skin

    Everyone wants software to be tailored for them, so unless you have only a few users of your software, you cannot possibly satisfy all of them. Sometimes users even demand contradictory features, so if you are going to be a slave to user demands, you'll not last very long and your software will soon look like my bedroom: random stuff in random places, and hard to find what you are looking for.

    So don't be afraid of the WONTFIX bug resolution. I do agree that this sounds harsh, but I think the most important thing is to be honest with your users and not give them false hopes.

    A good API maintainer is a slave of apps

    Your library or D-Bus service is as useful and important as the applications that use it. Never forget that while making decisions about public APIs.

    Furthermore, if possible, try your best to be involved in at least one of the applications that use your API. Even better if you maintain one such application. There have been a few occasions where I had long debates with library developers about how their API could do much better, and I felt that the debate could have been avoided if they had more insight into the applications that use their API. Also, they'd likely care more if they experienced the pain of the problematic parts of their API first-hand.

    History is important!

    VCS (which translates to git for most of us these days) history, that is. I think this is something most developers would readily agree on, and some readers must be wondering why I even need to mention it. However, I've seen that while many agree in principle, in practice they don't care too much. I've seen so many projects out there where it's very hard or even impossible to find out why a particular line of code was changed in a particular way. Not only does this make maintenance hard, it also discourages new contributors, since they won't feel confident changing a line of code if they can't tell why it is the way it is, rather than already being what they think it should be.

    So kids, please try to follow some sane commit log rules. We have some here and Lasse has created an extensive version of that document with rationale for each point, for his project here.

    Quality of code

    This is a bit related to the previous point. To be very honest, if you don't care enough about quality, you really should not even be working on software that affects others, let alone maintaining it.

    How successful you are at maintaining high quality is another thing, and sometimes not entirely in your hands, but you should always strive for the highest quality. The two most important sub-goals in that direction, in my opinion, are:


    Simplicity

    [Insert cliché Einstein quote about simple solutions here.] Each time you come up with a solution (or receive one in the form of patches), ask yourself how it can be done with fewer lines of code. The fewer lines of code you have, the fewer lines of code you'd need to maintain.


    Consistency

    Come up with (or adopt an existing) coding style with a specific set of rules to follow, and try your very best to follow them. Many contributors will simply dive directly into your project's source code and not read any coding style manual you provide, and there is nothing wrong with that. If you are consistent in your code, they'll figure out at least most of your coding style while hacking on your sources. Also, chances are that your coding style will even grow on them, and that'll save you a lot of time during your reviews of their patches. That's unlikely to happen if you are not very consistent with your coding style.


    None, whatsoever. Do what you think is right. This blog post is nothing more than my personal opinions, so take it or leave it; it's all up to you!

    October 19, 2015 11:11 AM

    October 15, 2015

    Jean-François Fortin TamThe War Against Deadlocks, part 2

    Heads up, citizens of the video editing world! Our war correspondent Alexandru has taken some of my battered notes, done some more research and published a fine report on the second part of Pitivi's War Against Deadlocks. Go read it now! 😃

    Thank you for reading, commenting and sharing! This blog post is part of a series of articles tracking progress made with work related to the 2014 Pitivi fundraiser. Researching and writing quality articles takes a lot of time, so please be patient and enjoy the ride! 😉
    1. An update from the 2014 summer battlefront
    2. The 0.94 release
    3. The War Against Deadlocks, part 1: The story of our new thread-safe mixing elements reimplementation
    4. The War Against Deadlocks, part 2: GNonLin's reincarnation
    5. The 0.95 release, the GTK+ timeline and sink
    6. Measuring quality/reliability through time (clarifying what gst-validate is)
    7. Our all-in-one binaries building infrastructure, and why it matters
    8. Samples, “scenario” files and you: how you can help us reproduce (almost) any bug very easily
    9. The 1.0 release and closure of the fundraiser

    by nekohayo at October 15, 2015 03:36 PM

    October 14, 2015

    Zeeshan AliGeoclue convenience library just got even simpler

    (Zeeshan Ali)
    After writing the blog post about the new Geoclue convenience library, I felt that while the helper API was really simple for single-shot usage, it still wasn't as simple as it should be for most applications, that would need to monitor location updates. They'll still need to make async calls (they could do it synchronously too but that is hardly ever a good idea) to create proxy for location objects on location updates.

    So yesterday, I came up with an even simpler API that should make interacting with Geoclue as simple as possible. I'll demonstrate through some gjs code that simply waits for location updates forever and prints the location to the console each time there is an update:

    const Geoclue = imports.gi.Geoclue;
    const MainLoop = imports.mainloop;

    let onLocationUpdated = function(simple) {
        let location = simple.get_location();

        print("Location: " +
              location.latitude + "," +
              location.longitude);
    };

    let onSimpleReady = function(object, result) {
        let simple = Geoclue.Simple.new_finish(result);
        simple.connect("notify::location", onLocationUpdated);

        onLocationUpdated(simple);
    };

    Geoclue.Simple.new("geoclue-where-am-i", /* Let's cheat */
                       Geoclue.AccuracyLevel.EXACT,
                       null,
                       onSimpleReady);

    MainLoop.run();
    Yup, that easy! If I had chosen to use the synchronous API, it would be even simpler. I have already provided a patch for Maps to take advantage of this and I'm planning to provide patches for other apps too.

    October 14, 2015 06:09 PM

    October 09, 2015

    Zeeshan AliNew in Geoclue: Location sharing

    (Zeeshan Ali)
    Apart from many fixes, Geoclue recently gained some new features as well.

    Sharing location from phones

    If you read planet GNOME, you must have seen my GSoC student, Ankit, already posting about this. Basically, his work enabled Geoclue to search for, and make use of, any NMEA providers on the local network. The second part of this project involved the implementation of such a service for Android devices. I'm pleased that he managed to get the project working in time, and he even went the extra mile to fix issues with his code after GSoC.

    This is useful since GPS-based location from Android is almost always going to be more accurate than the WiFi-based one (assuming neighbouring WiFi networks are covered by Mozilla Location Service). This is especially useful for desktop machines, since they typically do not even have WiFi hardware and have until now been limited to GeoIP, which at best gives city-level accurate location.

    This feature was included in release 2.3.0 and you can download the Android app from here.

    Convenience library

    Almost since the beginning of the Geoclue2 project, many people have complained that using the new API is far from easy and simple, as it should be. While we have good reasons to keep the D-Bus API as it is now (Geoclue being a system service, it was best not to change it), it took a long time before I got around to doing anything about the ease of use.

    So this week, I took up the task of implementing a client-side library that not only exposes the gdbus-codegen generated API to communicate with the service, but also adds a convenience helper API to make things very simple. Basically, you just have to call a few functions now if you simply want to get a location fix quickly and don't care much about accuracy or subsequent location updates.

    I only pushed the changes today to git master, so if you have any input, now would be the best time to speak up. I wouldn't want to change the API after release.

    October 09, 2015 08:11 PM

    September 27, 2015

    GStreamerGStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate 1.6.0 stable release (binaries)


    Pre-built binary images of the 1.6.0 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

    The builds are available for download from: Android, iOS, Mac OS X and Windows.

    September 27, 2015 12:00 PM

    September 25, 2015

    GStreamerGStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate 1.6.0 stable release


    The GStreamer team is proud to announce a new major feature release in the stable 1.x API series of your favourite cross-platform multimedia framework!

    This release has been in the works for more than a year and is packed with new features, bug fixes and other improvements.

    See the release notes for the full list of changes.

    Binaries for Android, iOS, Mac OS X and Windows will be provided separately by the GStreamer project.

    Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, or gst-validate, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, or gst-validate.

    September 25, 2015 11:00 PM

    Bastien NoceraPhilips Wireless, modernised

    (Bastien Nocera) I've wanted a stand-alone radio in my office for a long time. I've been using a small portable radio, but it ate batteries quickly (probably a 4-pack of AA for a bit less than a work week's worth of listening), changing stations was cumbersome (hello FM dials) and the speaker was a bit teeny.

    A couple of years back, I had a Raspberry Pi-based computer on pre-order (the Kano, highly recommended for kids and beginners) through a crowd-funding site. So I scoured « brocantes » (imagine a mix of car boot sale and antiques fair, in France, with people emptying their attics) in search of a shell for my small computer. A whole lot of nothing until my wife came back from a week-end at a friend's with this:

    Photo from Radio Historia

    A Philips Octode Super 522A, from 1934, when SKUs were as superlative-laden and impenetrable as they are today.

    Let's DIY

    I started by removing the internal parts of the radio, without actually turning it on. When you get such old electronics, they need to be checked thoroughly before being plugged in, and as I know nothing about tube radios, I preferred not to. And FM didn't exist when this came out, so I'm not sure what I would have been able to do with it anyway.

    Roomy, and dirty. The original speaker was removed, the front buttons didn't have anything holding them any more, and the nice backlit screen went away as well.

    To replace the speaker, I did quite a lot of research, looking for speakers designed to be embedded, rather than getting a speaker in a box that I would need to extricate from its container. Visaton make speakers that can be integrated into ceilings, vehicles, etc. That also allowed me to choose one that had a good enough range, and would fit into the one hole in my case.

    To replace the screen, I settled on an OLED screen that I knew would work without too much work with the Raspberry Pi, a small AdaFruit SSD1306 one. A small amount of soldering, which was within my skill level.

    It worked, it worked!

    Hey, soldering is easy. So because of the size of the speaker I selected, and the output power of the RPi, I needed an amp. The Velleman MK190 kit was cheap (€10), and should just be able to work with the 5V USB power supply I planned to use. Except that the schematics are really not good enough for an electronics beginner. I spent a couple of afternoons verifying, checking on the Internet for alternate instructions, re-doing the solder points, to no avail.

    'Sup Tiga!

    So much time wasted; in the end I got a cheap car amp with a power supply. You can probably find cheaper.

    Finally, I got another Raspberry Pi, and SD card, so that the Kano, with its super wireless keyboard, could find a better home (it went to my godson, who seemed to enjoy the early game of Pong, and being a wizard).

    Putting it all together

    We'll need to hold everything together. I got a bit of help from somebody with a Dremel tool for the piece of wood that will hold the speaker, and another one that will stick three stove bolts out of the front, to hold the original tuning, mode and volume buttons.

    A real joiner

    I fast-forwarded the machine by a couple of years with a « Philips » figure-of-8 plug at the back, so the machine's electrics would be well separated from the outside.

    Screws into the side panel for the amp, blu-tack to hold the OLED screen for now, RPi on a few leftover bits of wood.


    My first attempt at getting something that I could control on this small computer was lcdgrilo. Unfortunately, I would have had to write a Web UI for it (remember, my buttons are just stuck on, for now at least), and probably port the SSD1306 OLED screen's driver from Python, so not a good fit.

    There's no proper Fedora support for Raspberry Pis, and while one can use a nearly stock Debian with a few additional firmware files on Raspberry Pis, Fedora chose not to support that slightly older SoC at all, which is obviously disappointing for somebody working on Fedora as a day job.

    Looking at other radio retrofits (there are plenty of quality ones on the Internet) and at various connected speaker backends, I found PiMusicBox. It's a Debian variant with Mopidy built in, and a very easy-to-use initial setup: edit a settings file on the SD card image, boot and access the interface via a browser. Tada!

    Once I had tested playback, I lowered the amp's volume to nearly zero, raised the web UI's volume to the maximum, and raised the amp's volume to the maximum bearable for the speaker. As I won't be able to access the amp's dial, we'll have this software-only solution.

    Wrapping up

    I probably spent a longer time looking for software and hardware than actually making my connected radio, but it was an enjoyable couple of afternoons of work, and the software side isn't quite finished.

    First, in terms of hardware support, I'll need to make this OLED screen work, how lazy of me. The audio setup is currently just the right speaker, as I'd like both the radios and AirPlay streams to be downmixed.

    Secondly, Mopidy supports plugins to extend its sources and uses GStreamer, so it would be a good fit for Grilo, making it easier for Mopidy users to extend it through Lua.

    Do note that the Raspberry Pi I used is a B+ model. For B models, it's recommended to use a separate DAC, because of the bad audio quality, even if the B+ isn't that much better. Testing out the HDMI output with an HDMI-to-VGA+jack adapter might be a way to cut costs as well.

    Possible improvements could include making the front-facing dials work (that's going to be a tough one), or adding RFID support, so I can wave items in front of it to turn it off, or play a particular radio.

    In all, this radio cost me:
    - 10 € for the radio case itself
    - 36.50 € for the Raspberry Pi and SD card (I already had spare power supplies, and supported Wi-Fi dongle)
    - 26.50 € for the OLED screen plus various cables
    - 20 € for the speaker
    - 18 € for the amp
    - 21 € for various cables, bolts, planks of wood, etc.

    I might also count the 14 € for the soldering iron, the 10 € for the Velleman amp, and about 10 € for adapters, cables, and supplies I didn't end up using.

    So between 130 and 150 €, and a number of afternoons, but in the end, a very flexible piece of hardware that didn't really stretch my miniaturisation skills, and a completely unique piece of furniture.

    In the future, I plan on playing with making my own 3-button keyboard, and making a remote speaker to plug into the living room's 5.1 amp with a C.H.I.P computer.

    Happy hacking!

    by Bastien Nocera at September 25, 2015 09:00 AM

    September 23, 2015

    Andy Wingoamores prohibidos

    (Andy Wingo)

    It was with anxious trepidation that today, after having been officially resident in Spain for 10 years, working and paying taxes all that time, I went to file a request for Spanish nationality.

    See, being a non-European resident in Europe is a precarious thing. If ever something happens "back home" with your family or to those that you love and you need to go help out, you might not be able to come back. Sure, if you keep your official residence in Europe maybe you can make it fly under the radar, but officially to keep your right of residence you need to reside, continually. It doesn't matter that you have all your life in Spain, or France, or wheresoever: if you have to leave for a year, you start over at day 1, if you are able to get back in.

    In my case I moved away from the US when I was 22. I worked in Namibia for a couple years after college teaching in a middle school, and moved directly from there to Barcelona when a company started up around a free software project I had been working on. It was a more extreme version of the established practice of American diaspora: you go to college far away from home to be away from your parents, then upon graduation your first job takes you far away again, and as the years go by you have nothing left to go back to. Your parents move into a smaller house, perhaps in a different town, your town changes, everyone moved away anyway, and where is home? What makes a home? What am I doing here and if I stopped, is there somewhere to go back to, or is it an ever-removing onward?

    I am 35 now. While it's true that there will always be something in my soul that pines for the smell of a mountain stream bubbling down an Appalachian hollow, there's another part of my heart that is twined to Europe: where I spent all of my working life up to now, where I lived and found love and ultimately married. I say Europe and not specifically Barcelona because... well. My now-wife was living in Paris when we got together. I made many, many journeys on the overnight Talgo train in those days. She moved down to Barcelona with me for a couple years, and when her studies as an interpreter from Spanish and French moved her back to France, I went with her.

    That move was a couple years ago. Since we didn't actually know how much time would be required there or if we would be in Switzerland or France I kept my official residence in Spain, and kept on as a Spanish salaried worker. I was terrified of the French paperwork to set up as a freelancer, even though with the "long-term residency-EU" permit it would at least be possible to make that transition. We lived a precarious life in Geneva for a while before finally settling in France.

    A note about that. We put 12 months of rent (!!!) in an escrow account, as a guarantee that allowed us to rent our house. In France this is illegal: a landlord is only allowed to ask for a couple months or so. However in France you usually have a co-signer on a lease, and usually it's against your parents' house. So even if you are 45, you often have your parents signing off on your lease. We wouldn't have been able to find anything if we weren't willing to do this -- one of many instances of the informal but very real immigrant tax.

    All this time I was a salaried Spanish worker. This made it pretty weird for me in France. I had to pretend I was there on holiday to get covered by health care, and although there is a European health card, it's harder to get if you are an immigrant: the web page seems to succeed but then they email you an error and don't tell you why. The solution is to actually pass by the office with your residence permit, something that nationals don't need. And anyway this doesn't cover having a family doctor, despite the fact that I was paying for it in Spain.

    This is one instance of the general pattern of immigrants using the health care system less than nationals. If you are British, say, then you know your rights and you know how the NHS works and you make it work for you. If you are an immigrant, maybe English is your second language, probably you're poor, you're ignorant of the system, you don't have family members or a big support system to tell you how the system works, you might not speak or write the language well, and probably all your time is spent working anyway because that's why you're there.

    In my case I broke my arm a couple years ago while snowboarding in France. (Sounds posh but it's not really.) If all my papers were in order and I understood the system I would probably have walked out without paying anything. As it was I paid some thousands of euros out of my pocket, and that is my sole interaction with health care over the course of the last 5 years I think. I still have to get the plate taken out of my arm; should have done that a year ago. It hurts sometimes.

    There is a popular idea about immigrants scrounging on benefits, and as a regular BBC radio 1 listener I hear that phrase in the voice of their news presenters inciting their listeners to ignorant resentment of immigrants with their racist implications that we are somehow "here" for "their" things. Beyond being implausible that an immigrant would actually receive benefits at all, it's unlikely that they would be able to continue to do so, given that residence is predicated on work.

    In the US where there are no benefits the phrase is usually reduced to "immigrants are stealing our jobs", a belief encouraged by the class of people that employ immigrants: the owners. If you encourage a general sentiment of "immigrants are bad, let's make immigrants' life difficult", you will have cheaper, more docile workers. The extreme form of this is the American H1B visa, in which if you quit your job, for whatever reason, even if your boss was sexually harassing you, you have only one week to find another job or you're deported back to your "home". Whatever "home" means.

    And besides, owners only hire workers if they produce surplus value. If the worker doesn't pay off, you fire them. Wealth transfer from workers to owners is in general from immigrants to nationals, because if you are national, maybe you inherited your house and could spend your money starting your business. Maybe you know how to get the right grants. You speak the language and have the contacts. Maybe you inherited the business itself.

    I go through all this detail because when you were born in a place and grew up in a place and have never had to deal with what it is like being an immigrant, you don't know. You hear a certain discourse, almost always of the form "the horde is coming", but you don't know. And those that are affected the most have no say in the matter.

    Of course, it would be nice to pass over to the other side, to have EU citizenship. Spanish would do, but any other Schengen citizenship would at least take away that threat of deportation or, what is equivalent, denial of re-entry. So I assembled all the documentation: my birth certificate from the US, with its apostille, and the legal Spanish translation. My criminal record check in the US, with its apostille, and the legal translation. The certificates that I had been continually resident, my social security payments, my payslips, the documents accrediting me as a co-owner of my company, et cetera.

    All prepared, all checked, I go to the records department to file it, and after a pleasantly short half-hour wait I give the documents to the official.

    Who asks if I have an appointment -- but I thought the papers could be presented and then they'd give me an appointment for the interview?

    No matter, she could give me an appointment -- for May.


    And then some months later there would be a home visit by the police.

    And then they'd assess my answers on a test to determine that I had sufficient "cultural integration", but because it was a new measure they didn't have any details on what that meant yet.

    And then they'd give me a number some 6 months later.

    And then maybe they would decide after some months.

    So, 2018? 2019, perhaps?

    This morning the streets of Barcelona were packed with electoral publicity, almost all of it urging a vote for independence. After the shock and the sadness of the nationality paperwork things wore off, I have been riding the rest of the day on a burning anger. I've never, never been able to vote in a local election, and there is no near prospect of my ever being able to do so.

    As kids we are sold on a story of a fictional first-person-plural, the "we" of state, and we look forward to coming of age as if told by some benevolent patriarch, arm outstretched, "Some day, this will all be yours." Today was the day that this was replaced in my mind by the slogan pasted all around Barcelona a few years ago, "no vas a tener una casa en la puta vida" (you'll never own a house in your fucking life). It's profoundly sad. My wife and I will probably be between the two countries for many years, but being probably forever third-class non-citizens: "in no day will you ever belong to a place."

    I should note before finishing that I don't want to hear "it could be worse" or anything else from non-immigrants. We have much less political power than you do and I doubt that you understand what it is like. What needs to happen is a revaluing of the nature of citizenship: countries are for the people that are in them, not for some white-pride myth of national identity or only for those that were born there or even for people who identify with the country but don't live there. Anything else is inhuman. 10+ years to simply *be* is simply wrong.

    As it is, I need to reduce the precarious aspect of my life so I will probably finally change my domicile to France. It's a loss to me: I lose the Spanish nationality process, all my familiarity with the Spanish system, the easy life of being a salaried employee. I know my worth and it's a loss to Spain too. Probably I'll end up cutting all ties there; too bad. And I count myself lucky to be able to do this, due to the strange "long term-EU" residency permit I got a few years ago. But I'm trading a less precarious life for having to set up a business, figure out social security, all in French -- and the nationality clock starts over again.

    At least I won't have to swear allegiance to a king.

    by Andy Wingo at September 23, 2015 10:12 PM

    Bastien NoceraGNOME 3.18, here we go

    (Bastien Nocera) As I'm known to do, a focus on the little things I worked on during the just released GNOME 3.18 development cycle.

    Hardware support

    The accelerometer support in GNOME now uses iio-sensor-proxy. This daemon also now supports ambient light sensors, which Richard used to implement the automatic brightness adjustment, and compasses, which are used in GeoClue and gnome-maps.

    In kernel-land, I've fixed the detection of some Bosch accelerometers, and added support for another Kionix one, as used in some tablets.

    I've also added quirks for out-of-the-box touchscreen support on some cheaper tablets using the goodix driver, and started reviewing a number of patches for that same touchscreen.

    With Larry Finger, of Realtek kernel drivers fame, we've carried on cleaning up the Realtek 8723BS driver used in the majority of Windows-compatible tablets, in the Endless computer, and even in the $9 C.H.I.P. Linux computer.

    Bluetooth UI changes

    The Bluetooth panel now has better « empty states », explaining how to get Bluetooth working again when a hardware killswitch is used, or it's been turned off by hand. We've also made receiving files through OBEX Push easier and built it into the Bluetooth panel, so that you won't forget to turn it off when done, and won't have trouble finding it, as is the case for settings that aren't used often.


    GNOME Videos has seen some work, mostly in the stabilisation and bug-fixing department; most of those fixes also landed in the 3.16 version.

    We've also been laying the groundwork in grilo for writing ever less code in C for plugin sources. Grilo Lua plugins can now use gnome-online-accounts to access keys for specific accounts, which we've used to re-implement the Pocket videos plugin, as well as the cover art plugin.

    All those changes should allow implementing OwnCloud support in gnome-music in GNOME 3.20.

    My favourite GNOME 3.18 features

    You can call them features, or bug fixes, but the overall improvements in the Wayland and touchpad/touchscreen support are pretty exciting. Do try it out when you get a GNOME 3.18 installation, and file bugs, it's coming soon!

    Talking of bug fixes, this one means that I don't need to put in my password by hand when I want to access work related resources. Connect to the VPN, and I'm authenticated to Kerberos.

    I've also got a particular attachment to the GeoClue GPS support through phones. This allows us to have more accurate geolocation support than any other desktop environment around.

    A few for later

    The LibreOfficeKit support that will be coming to gnome-documents will help us get support for EPubs in gnome-books, as it will make it easier to plug in previewers other than the Evince widget.

    Victor Toso has also been working through my Grilo bugs to allow us to implement a preview page when opening videos. Work has already started on that, so fingers crossed for GNOME 3.20!

    by Bastien Nocera at September 23, 2015 11:11 PM

    September 21, 2015

    Thomas Vander SticheleMedia unit for geeks with kids?

    (Thomas Vander Stichele)

    Phoenix is growing up quickly and pretty soon he’ll be crawling around the house. So it’s time for babyproofing.

    For the past year, I’ve been looking all over the internet for decent media units that we could get. IKEA used to have some good ones, but it doesn’t look like they have any decent model anymore.

    So I turn to the geeky side of the internet, as I’m sure there’s lots of people out there who’ve gone through the same problem with an infant growing up.

    So far, I’m thinking:

    • closed at the front, except for a big slot large enough to fit my central speaker (I admit I went large with a PolkAudio A4)
    • thick glass – the kind that lets IR through, but not babies when they smash into it
    • plenty of holes out the back for ventilation – in fact, mostly open
    • useful leads for cables if possible
    • 50-60 inch wide because the TV needs to go on top
    • high enough – at least 80 cm. So many units are low, why?
    • deep enough – so many media units do not even fit a standard AV receiver, let alone leave enough space for air to circulate so the unit doesn’t burn up
    • cubby holes/shelves high enough so said unit fits as well
    • not butt ugly or escaped from the eighties
    • can hold A/V receiver, standard Digital TV unit, router, a NAS, a PS3, and an Atari VCS 2600. Bonus points for space left over for a future Megadrive or NES.
    • easy to attach to a wall
    • built-in custom rack for Atari VCS 2600 cartridges (though I’d begrudgingly accept a unit that ticks all the other boxes)

    Any requirements I’m missing? Anyone want to share which unit made them happy?

    Update: if it matters, this is for a smallish apartment in Manhattan – preference for no DIY.


    by Thomas at September 21, 2015 03:13 AM

    September 19, 2015

    Jean-François Fortin TamCapturing the essence of a cool symphonic orchestra through video

    One of the things I do as part of my varied service offering at idéemarque is filmmaking, sound and video editing—as some of you must have realized by now, I have this undying passion for storytelling and the making of motion pictures.

    So when a symphonic orchestra requests my help to make a promotional video for them, and gives me carte blanche when it comes to creative freedom, you can imagine I’m pretty thrilled!

    2015-07-31-1 2015-07-31-2

    When thinking of a symphonic orchestra, one typically imagines a bunch of musicians on a stage in a symphonic house or in a pit during an opera performance. In this case however, that’s only part of their activities. As you will see in the video, this particular orchestra puts a lot of effort into creating social events for people to attend—cocktails, circus shows, dinners, art exhibitions, etc. Pretty cool.

    For the video’s soundtrack, they initially suggested the “galop” of Igor Stravinsky’s Suite n° 2 for chamber orchestra. After my first two draft edits however, I came to the conclusion that it was not a good fit: the tempo was very fast and nearly constant throughout the piece, with no place for respite, leading to a frenetic chain of cuts all over the place that left you bewildered at the end. The folks at the FOSDL thought it was pretty good already, but I was not satisfied with myself.

    — “How about I dig around for a more dance-like tune we can use?”, said I.

    — “Sure. Surprise me!”, they replied.

    And so I went looking for something roughly between 80 and 120 beats per minute with enough "range" to be able to instill variance and set a proper mood for the video. I gave them a pre-selection of 10-20 choices and they gave me back their two preferences, with which I edited the final version (we went through roughly five versions for this project).

    Besides the motion ramping, lip synching and compositing effects in use everywhere, I had to make some pretty extensive changes to the soundtrack (remixing it to three minutes, then down to roughly two minutes from its original length of six minutes). I was careful to match beats and measures, and to keep the music flowing smoothly with the lyrics. I bet 99% of watchers won't even notice.

    This is the result:

    Their reaction was (paraphrasing & translating 2-3 emails),

    In one word: WOW. Everyone at the office loves it! […] Thanks so much for your help on this, […] I forwarded the link to the Events Committee so they can use it to boost sales, and I’ve had excellent feedback on the quality of your edit so far.

    It was a real pleasure to work with the Fondation de l’Orchestre symphonique de Longueuil, and it definitely looks like the feeling is mutual.

    Need a great video editor/filmmaker for your project? I can help 😉

    If you liked the video above, feel free to give it a thumbs up, share it with those around you (on G+, Twitter, Le Twitteur, and even with that face book), or leave a nice comment!

    by nekohayo at September 19, 2015 02:00 PM

    September 18, 2015

    GStreamerGStreamer Core, Plugins, RTSP Server, Editing Services 1.6.0 release candidate 2 (1.5.91)


    The GStreamer team is pleased to announce the second release candidate for the stable 1.6 release series. The 1.6 release series is adding new features on top of the 1.0, 1.2 and 1.4 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework. The final 1.6.0 release is planned in the next few days unless any major bugs are found.

    Binaries for Android, iOS, Mac OS X and Windows will be provided separately by the GStreamer project.

    Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, or gst-editing-services, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, or gst-editing-services

    Check the release announcement mail for details and the release notes above for a list of changes.

    September 18, 2015 08:20 PM

    September 13, 2015

    Jean-François Fortin TamOutrageous Outreach

    The Internet being what it is today, being a public figure can be a very dangerous role. For those unaware, Karen Sandler has been under vigorous attacks—hate mail, public slandering, and more—for having been the GNOME Foundation's Executive Director from 2011 to 2014. Contrary to what I had hoped, even many months after, the hate has not died down. You still see wretched hives of scum and villainy like this blog post on a regular basis (warning: the comments over there are depressing). Enough is enough, time to set the record straight.

    This is the comment I posted (which is effectively censored, since it never made it past moderation, even though I asked nicely):

    The premises of this article are entirely false. Karen Sandler did not bankrupt the GNOME Foundation, and the outreach program did not, besides that temporary cash flow problem (that is: sponsoring orgs not paying quickly enough), suck away funds from the foundation. The situation arose because companies paid at variable rates and our accounting did not account for it. After all companies paid their share, the foundation became solvent. This is purely a business accounting issue arising from dealing with rapid growth because the outreach program was so successful.

    The GNOME outreach program was structured this way: Company ABC wants to sponsor the program, and funds in an amount that pays for intern XYZ to work on a Free Software project; the GNOME Foundation acted as a coordinator with sponsoring companies, interns and free software projects (or “organizations”); then money comes in from the sponsor and out to the intern, with a small cut to cover GNOME’s administrative expenses. That’s it, that’s all.

    It’s unfortunate that you’ve written a post without the facts. I have provided them to you, and as one of the directors of the GNOME Foundation I am happy to answer any questions regarding the situation.

    Thing is, two completely unrelated events happened around the same time in the spring of 2014, providing perfect timing/story for those who actively seek conspiracies or a “culprit”:

    1. Karen Sandler stepped down from her role as the GNOME Executive Director, in order to go help another charitable organization in need of her services.
    2. We had trouble getting all the outreach program sponsors’ invoices paid on time to cover the payment to interns, which resulted in the temporary cash-flow problem that got so much attention. This particular situation was not Karen’s fault. She did not know about it until it was too late.

    The fix: we enacted a temporary spending freeze to give us time to chase down those invoices, and after months of work we collected every single last one of them. Net financial result: everything balances out now.


    Simple business mistake. These things happen. No need to create a gamergate out of it.

    While I’m here, it would be worth mentioning the following about the outreach program:

    Warning: if you are going to comment on my blog on this particular blog post, stay civil and think thrice about what you’re going to say. And don’t be so quick to pejoratively label me a “SJW” just because I’m personally standing up for someone here: I’m simply a normal guy who broke the silence after getting fed-up seeing an esteemed, long-time contributor of our community get torn apart under false pretenses.

    by nekohayo at September 13, 2015 10:36 PM

    September 03, 2015

    GStreamerGStreamer Conference 2015: Schedule of Talks and Speakers available


    The GStreamer Conference team is pleased to announce this year's lineup of talks and speakers covering again an exciting range of topics!

    The GStreamer Conference 2015 will take place on 8-9 October 2015 in Dublin (Ireland) and will be co-hosted with the Embedded Linux Conference Europe (ELCE) and LinuxCon Europe.

    Details about the conference and how to register can be found on the conference website.

    This year's topics and speakers:

    • Interactive video playback and capture in the Processing Language via GStreamer · Andres Colubri
    • Distributed transcoding with GStreamer · Thiago Sousa Santos, Samsung
    • Tiled Streaming of UHD video in real-time · Arjen Veenhuizen, TNO
    • GStreamer and WebKit · Philippe Normand, Igalia
    • Hardware accelerated multimedia on TI’s Jacinto 6 SoC · Pooja Prajod, Texas Instruments
    • Demystifying the allocation query · Nicolas Dufresne, Collabora
    • Synchronised multi-room media playback and distributed live media processing and mixing with GStreamer · Sebastian Dröge, Centricular
    • Implementing a WebRTC endpoint in GStreamer: challenges, problems and perspectives · Dr Luis López, Kurento
    • OpenGL Desktop/ES for the GStreamer pipeline · Matthew Waters, Centricular
    • Robust lipsync error detection using gstreamer and QR Codes · Florent Thiery, Ubicast
    • GStreamer VAAPI: Hardware-accelerated decoding and encoding on Intel hardware · Víctor M. Jáquez L., Igalia
    • Colorspaces and HDMI (*) · Hans Verkuil, Cisco
    • GStreamer State of the union · Tim-Philipp Müller, Centricular
    • Video Filters and their applications · Sanjay Narasimha Murthy, Samsung
    • Camera Sharing and Sandboxing with Pinos · Wim Taymans, RedHat
    • Stereoscopic (3D) Video in GStreamer Redux · Jan Schmidt, Centricular
    • Bin It! AKA, How to use bins and bin subclasses to keep state local and easily manage dynamic pipelines · Vivia Nikolaidou, ToolsOnAir
    • The HeliosTv Distributed DVB stack · Romain Picard, SoftAtHome
    • How to contribute to GStreamer · Luis de Bethencourt, Samsung
    • GstPlayer - A simple cross-platform API for all your media playback needs · Sebastian Dröge, Centricular
    • Improving GStreamer performance on large pipelines: from profiling to optimization · Miguel París
    • Kurento Media Server: experiences bringing GStreamer capabilities to WWW developers · José Antonio Santos
    • ToolsOnAir's mixing pipeline architecture overview · Heinrich Fink, ToolsOnAir
    • Distributed Acoustic Triangulation · Jan Schmidt, Centricular
    • Chromium GStreamer backend · Julien Isorce, Samsung
    • ogv.js: bringing open codecs to Safari and IE with emscripten · Brion Vibber, Wikimedia
    • Bringing GStreamer to Radio Broadcasting · Marcin Lewandowski
    • Daala and NetVC: the next-generation of royalty free video codecs · Thomas Daele, Mozilla
    • Profiling individual GStreamer elements (*) · Kyrylo Polezhaiev
    • Pointing cameras at TVs: when HDMI video-capture is not an option · Will Manley, stb-tester
    • decodebin3: designing the next generation playback engine (*) · Edward Hervey, Centricular
    (*) preliminary title

    Lightning Talks:

    • Hyperspectral imagery · Dimitrios Katsaros, QTechnology
    • Industrial application pipelines · Dimitrios Katsaros, QTechnology
    • gst-gtk-launch-1.0 · Florent Thiery, Ubicast
    • liborc (JIT SIMD generator) experiments · Wim Taymans, RedHat
    • V4L2 GStreamer elements update · Nicolas Dufresne, Collabora
    • Analyzing caps negotiation with GstTracer · Thiago Sousa Santos, Samsung
    • Know your queues! queue, queue2, multiqueue, netbuffer and all that · Tim-Philipp Müller
    • Nle: A new design for the GStreamer Non Linear Engine · Thibault Saunier
    • What is new in GstValidate · Thibault Saunier
    • Continuous Integration update · Edward Hervey
    • Remote GStreamer Debugger · Marcin Kolny
    • gstreamermm C++ wrapper · Marcin Kolny
    • Multipath RTP (MPRTP) plugin in GStreamer · Balázs Kreith
    • OpenCV and GStreamer · Vanessa Chipi
    • ...
    • Submit your lightning talk now!

    Full talk abstracts and speaker biographies will be published shortly.

    Many thanks to our sponsors, Google, Centricular and Pexip without whom the conference would not be possible in this form. And to Ubicast who will be recording the talks again.

    Considering becoming a sponsor? Please check out our sponsor brief.

    We hope to see you all in Dublin in October! Don't forget to register!

    September 03, 2015 06:00 PM

    August 30, 2015

    GStreamerNew OS X build


    New builds of the 1.5.90 release candidate packages for OS X have been uploaded. This build fixes a problem that made the first build unusable, but contains no source changes. The new binaries can be found here

    August 30, 2015 12:00 AM

    August 21, 2015

    Jean-François Fortin TamHelp us get the GUADEC 2014 videos published

    For those who could not attend GUADEC 2015, video recordings have been processed and published here. You might wonder, then, what happened to the GUADEC 2014 videos. The talks in Strasbourg were indeed recorded, but the audio came from the camera's built-in microphones (so no truly directional mic and no line-in feed). This is problematic for a number of reasons:

    • We were in the city center of Strasbourg with no air conditioning, which means the windows were open, so on top of the general background noise we heard all sorts of noises (cars passing on the stone pavement, construction work, etc.)
    • One of the rooms had no speaker microphone or amplified sound system
    • The camera microphones were far from the speakers, so you also hear noises from the audience (such as chairs moving)

    As a result, the videos required a significant amount of processing to be fit for publishing. So far, Bastian Ilsø has been doing the majority of the work and has managed to process about 25% of the talks (Alexandre Franke has also been working on sound processing).

    This is where you come in. If you have a little bit of patience and a good pair of headphones, you can help! Take a look at our current dashboard, then poke afranke on the #guadec IRC channel (or reach him by email) to let us know you will be taking care of talk XYZ, and he will send you a link to the corresponding audio file (and the video if you need it). You can then do the processing in Audacity to remove the background noise and occasional noises (ex: chairs) before amplifying the whole sound track. You can find detailed instructions on the recommended way to do that here.
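
    If you prefer the command line, the same three steps (sample the background noise, subtract it, then amplify) can also be scripted. Below is a minimal sketch driving SoX from Python; note that SoX is our own suggestion rather than part of the instructions linked above, and the file names and the 0.21 reduction amount are made-up placeholders you would tune per recording. Occasional loud noises such as chairs will still need manual editing.

        #!/usr/bin/env python3
        # A rough command-line equivalent of the Audacity workflow described
        # above, using SoX (which must be installed). File names and the
        # noise-reduction amount are hypothetical; tune them per recording.
        import subprocess

        NOISE_SAMPLE = "room-tone.wav"  # a few seconds of pure background noise
        RAW_TALK = "talk-raw.wav"       # audio track extracted from the camera file
        CLEAN_TALK = "talk-clean.wav"

        # Step 1: learn a noise profile from the speech-free excerpt.
        subprocess.run(["sox", NOISE_SAMPLE, "-n", "noiseprof", "noise.prof"],
                       check=True)

        # Step 2: subtract that profile from the full track, then normalise
        # the result to -3 dBFS so quiet speakers become audible.
        subprocess.run(["sox", RAW_TALK, CLEAN_TALK,
                        "noisered", "noise.prof", "0.21",  # 0.0 gentle, 1.0 aggressive
                        "norm", "-3"],
                       check=True)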

    Processing one talk’s soundtrack should normally take you at most two hours (since you have to listen, pause and add silences to remove occasional loud sounds). We’d like to get this accomplished as quickly as possible, so you should get involved only if you can commit to spending a few hours on this soon.

    Once you’re done, let us know and send us the processed sound file — we will then include it in the video for final editing and publishing.

    If a dozen of us processed two of those talks each, we might be done within a week or two! So roll up your sleeves and help us get those important recordings completed for posterity.

    Also, GUADEC 2015 speakers: if you haven’t done so already, please email your slides to Alexandre Franke so we can include them with the video files this year.

    by nekohayo at August 21, 2015 10:00 AM

    Arun RaghavanGUADEC 2015

    This one’s a bit late, for reasons that’ll be clear enough later in this post. I had the happy opportunity to go to GUADEC in Gothenburg this year (after missing the last two, unfortunately). It was a great, well-organised event, and I felt super-charged again, meeting all the people making GNOME better every day.

    GUADEC picnic @ Gothenburg

    I presented a status update of what we’ve been up to in the PulseAudio world in the past few years. Amazingly, all the videos are up already, so you can catch up with anything that you might have missed here.

    We also had a meeting of PulseAudio developers, at which a number of interesting topics of discussion came up (I’ll try to summarise my notes in a separate post).

    A bunch of other interesting discussions happened in the hallways, and I’ll write about that if my investigations take me some place interesting.

    Now the downside: I ended up missing the BoF part of GUADEC, and all of the GStreamer hackfest in Montpellier afterwards. As it happens, I contracted dengue, and I’m still recovering from it. Fortunately it was the lesser (non-haemorrhagic) form without any complications, so now it’s just a matter of resting till I’ve recuperated completely.

    Nevertheless, the first part of the trip was great, and I’d like to thank the GNOME Foundation for sponsoring my travel and stay, without which I would have missed out on all the GUADEC fun this year.

    Sponsored by GNOME!

    by Arun at August 21, 2015 06:21 AM

    August 19, 2015

    GStreamerGStreamer Core, Plugins, RTSP Server 1.6.0 release candidate (1.5.90)


    The GStreamer team is pleased to announce the first release candidate for the stable 1.6 release series. The 1.6 series adds new features on top of the 1.0, 1.2 and 1.4 series, and is part of the API- and ABI-stable 1.x release series of the GStreamer multimedia framework. The final 1.6.0 release is planned for the next few days, unless any major bugs are found.

    Binaries for Android, iOS, Mac OS X and Windows will be provided separately by the GStreamer project.

    Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server.

    Check the release announcement mail for details and the release notes above for a list of changes.

    August 19, 2015 02:29 PM