August 27, 2016

Sebastian Dröge: Getting RSS feeds for news websites that don’t provide them

(Sebastian Dröge)

This is about a fun little project I wrote a few months ago, completely unrelated to the other things I usually write about, but I thought it might be useful for others too.

When I moved to Greece last year, I had the problem that there were not many news websites providing local news in English that actually had an RSS feed. And having local news, alongside the global kind about what happens all over the world, seems like a good way to know what is happening close around you.
The only useful website I found was Ekathimerini. There were two others that seemed useful to have, The Press Project (a crowd-funded project) and To Vima, but neither of them has an RSS feed (or at least not for their English versions). Of course the real solution to this problem is to learn Greek, which I’m doing now, but it’s going to take a while until I can understand news articles without too much effort.

So what did I do? I wrote a small web service that parses the HTML of those websites and returns an RSS feed based on it; the service also updates regularly in the background and keeps some history of items. You can find it here: html-rss-proxy. The resulting RSS feeds seem to work very well in Liferea and Newsblur at least.
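Once it is running, each supported site is exposed as its own feed URL that you can point your reader at. The invocation below is purely hypothetical; the host, port and route depend entirely on how you deploy the service:

# Hypothetical example: host, port and route depend on your deployment.
curl http://localhost:3000/feed/tovima | head -n 5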

Since then it has also been extended with another news website, and generally it’s rather simple to add new ones to the code. You just need to figure out how to extract the relevant information from the HTML of the website, convert that into code like here, and wrap it up in the general interface that the other parts of the code expect, like here.

If you add some website yourself, feel free to send me a pull request and I’ll merge it!

Technical bits

On the technical side, this seems to be one of the most stable pieces of software I’ve ever written. It has never crashed or otherwise failed since I started running it, and fortunately I also haven’t had to update the HTML parsing code yet because of website changes.

It’s written in Haskell, using the Scotty web framework, Cereal serialization library for storing the history of the past articles, http-conduit for fetching the websites, and html-conduit for parsing the HTML. Overall a very pleasant experience, thanks to the language being very convenient to write and preventing most silly mistakes at compile-time, and the high quality of the libraries.

The only part I’m not yet too happy about is the actual HTML parsing; it seems too verbose and redundant. I might port it to another library at a later time, maybe xml-html-conduit-lens.

by slomo at August 27, 2016 03:02 PM

August 21, 2016

Jean-François Fortin Tam: GUADEC 2016, laptops and tablets made to run GNOME, surprise Pitivi meeting

I went there for the 2016 edition of GUADEC:

2016-08-17--21.35.31

I arrived a couple of days early to attend my last GNOME Foundation board meeting, in one of the KIT’s libraries. The building’s uncanny brutalist architecture only added to the nostalgia of a two-year adventure coming to an end:

2016-08-16--17.44.34

Then…

And so I made a new talk proposal at the last minute, which was upvoted fairly quickly by attendees:

2016-08-12--15.03.12

The conference organizers counter-trolled me by inscribing it exactly like this onto the giant public schedule in the venue’s lobby:

2016-08-12--16.08.16

The result was this talk: Laptops & Tablets Manufactured to Run a Pure GNOME. Go watch it now if you missed it. Note: during the talk’s Q&A session, I mistakenly thought that Purism‘s tablets were using an ARM architecture; they’re actually planned to be Intel-based. And to make things clear, for laptop keyboard layouts, Purism is currently offering US/UK, which are different physical layouts (different cutting etc.).

Also relevant to your interests if you’re into that whole privacy thing:

In my luggage, I carried ~20 kilograms of the Foundation’s annual reports. Some folks were skeptical and thought I should “only bring a few of them, people aren’t usually interested in the annual report.” Well, they were dead wrong: within one afternoon, the annual reports “sold out” like hotcakes. See also my tandem lightning talk (at the 29mins mark) to get a glimpse of how much work we put into designing the new annual report this year. Also, if you took one of them at the conference, remember: they’re precious little works of art and a powerful tool to convince people to become contributors or sponsors to support GNOME, so make sure to use them towards that goal!

I was very happy to see Mathieu and Alexandru from the Pitivi team deciding to attend at the last minute, even if it was just for one day. We spent time with Jakub Steiner discussing his workflow and wishlist for a “perfect” video editor:

2016-08-15--17.05.53

Pitivi makes people smile!
Left to right: Alexandru Băluț, Jakub “Skywalker” Steiner, Mathieu Duponchelle.

I took a handful of photos of the BoFs and uploaded them to my gallery for GUADEC 2016. Licensed under the Creative Commons “by attribution” 4.0 as usual, so that the GNOME Engagement team can use them for GNOME promotional materials if needed. Make sure you do the same and list yours in here.

The Flatpak BoF session

Overall, this GUADEC was one of the most well-organized ones I’ve seen in years. I was floored by the amount of effort and planning the local team put into this. They really deserve some kudos! Everything ran smoothly on the surface (I know that in such events there will always be odd situations happening here and there, but they dealt with them so efficiently that they were invisible). The team had a professional-grade two-way radio system to coordinate, a car and trailer to carry stuff around every day, made and reused food (pro-grade cafeteria counter metal containers = genius), a lifetime supply of IKEA mugs that got washed and reused frequently, tons of snacks, managed to pull in great sponsors even at the last minute, put signage in various parts of the city to guide people to the venue, had huge quantities of tasty dead animals (and plants) to eat at a very successful barbecue event, got an ice cream vendor to come to the venue, and even filled up a pool and beanbags, for pete’s sake!

Thanks to the Chaos Computer Club for providing a flawless live video streaming, recording and publishing service. Very rarely did we have GUADEC videos published in a timely fashion in the past, let alone streamed live and with proper laptop video output capture as well as proper sound mixing. This is fantastic.

I should mention the closing night’s event in the biergarten, with sponsored drinks and food by our friendly GStreamer experts Centricular, a very nice gesture that was well appreciated by everyone.

2016-08-15--19.23.28

Survivors at rest in the snacks bunker on the last day of the BoFs

Thanks to everybody involved in making this event a success, and thanks to the GNOME Foundation for making it possible for me to attend.

gnome sponsored badge shadow

by nekohayo at August 21, 2016 11:45 AM

August 19, 2016

GStreamer: GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI 1.8.3 stable release

(GStreamer)

The GStreamer team is pleased to announce the third bugfix release in the stable 1.8 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.8.x. For a full list of bugfixes see Bugzilla.

See /releases/1.8/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi.

August 19, 2016 10:00 AM

August 12, 2016

Christian Schaller: Want to make Linux run better on laptops?

(Christian Schaller)

So we have two job openings in the Red Hat desktop team. What we are looking for is people to help us ensure that Fedora and RHEL run great on various desktop hardware, with a focus on laptops. Since these jobs require continuous access to a lot of new and different hardware, we can not accept applications this time from remotees, but require you to work out of our office in Munich, Germany. We are looking for people who are not afraid to jump into a lot of different code and who like tinkering with new hardware. The hardware enablement here might include some kernel level work, but will more likely involve improving higher level stacks. So for example if we have a new laptop where Bluetooth doesn’t work, you would need to investigate and figure out if the problem is in the kernel, in the BlueZ stack or in our Bluetooth desktop parts.

This will be quite varied work and we expect you to be part of a team which will be looking at anything from driver bugs, battery life issues, implementing new stacks and biometric login, to exposing existing kernel or low-level library features in the user interface.

You can read more about the jobs at jobs.redhat.com. That link lists a Senior Engineer position, but we also have a Principal Engineer position open with id 53653; that one is not on the website as I post this, but should hopefully be very soon.

Also if you happen to be in the Karlsruhe area or at GUADEC this year I will be here until Sunday, so you could come over for a chat. Feel free to email me on christian.schaller@gmail.com if you are interested in meeting up.

by uraeus at August 12, 2016 09:07 AM

August 11, 2016

Bastien Nocera: Flatpak cross-compilation support

(Bastien Nocera) A couple of weeks ago, I hinted at a presentation that I wanted to do during this year's GUADEC, as a Lightning talk.

Unfortunately, I didn't get a chance to finish the work that I set out to do, encountering a couple of bugs that set me back. Hopefully this will get resolved post-GUADEC, so you can expect some announcements later on in the year.

At least one of the tasks I set to do worked out, and was promptly obsoleted by a nicer solution. Let's dive in.

How to compile for a different architecture

There are four possible solutions to compile programs for a different architecture:

  • Native compilation: get a machine of that architecture, install your development packages, and compile. This is nice when you have fast machines with plenty of RAM to compile on, usually developer boards, not so good when you target low-power devices.
  • Cross-compilation: install a version of GCC and friends that runs on your machine's architecture, but produces binaries for your target one. This is usually fast, but you won't be able to run the binaries you create, so you might end up with some data created from a different set of options, and you won't be able to run the generated test suite.
  • Virtual Machine: you'd run a virtual machine for the target architecture, install an OS, and build everything. This is slower than cross-compilation, but avoids the problems you'd see in cross-compilation.
The final option is one that's used more and more, mixing the last 2 solutions: the QEmu user-space emulator.

Using the QEMU user-space emulator

If you want to run just the one command, you'd do something like:

qemu-arm-static myarmbinary

Easy enough, but hardly something you want to try when compiling a whole application, with library dependencies. This is where binfmt support in Linux comes into play. Register the ELF format for your target with that user-space emulator, and you can run myarmbinary without any commands before it.
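For reference, registration goes through binfmt_misc. The following is a sketch of the classic registration line for 32-bit ARM ELF binaries (run as root); the magic/mask pair is the standard one shipped by distribution qemu packages, and the interpreter path assumes the static emulator is installed as /usr/bin/qemu-arm-static:

# Register qemu-arm-static as the interpreter for ARM ELF binaries.
# The magic matches the ELF header with e_machine = 0x28 (EM_ARM).
echo ':qemu-arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm-static:' > /proc/sys/fs/binfmt_misc/register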

One thing to note though, is that this won't work as easily if the qemu user-space emulator and the target executable are built as dynamic executables: QEmu will need to find the libraries for your architecture, usually x86-64, to launch itself, and the emulated binary will also need to find its libraries.

To solve that first problem, there are QEmu static binaries available in a number of distributions (Fedora support is coming). For the second one, the easiest would be if we didn't have to mix native and target libraries on the filesystem, in a chroot, or container for example. Hmm, container you say.
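That classic work-around looks something like the sketch below: copy the static emulator into the target filesystem at the same path that was registered with binfmt_misc, so the kernel can find it both outside and inside the chroot (paths are illustrative):

# Copy the static emulator into the target root so the registered
# interpreter path also resolves inside the chroot.
cp /usr/bin/qemu-arm-static sysroot/usr/bin/
chroot sysroot /bin/sh    # ARM binaries now run through the emulator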

Running QEmu user-space emulator in a container

We have our statically compiled QEmu and a filesystem with our target binaries, and we've switched the root filesystem. Well, you try to run anything, and you get a bunch of errors. The problem is that there is a single binfmt configuration for the kernel, whether it's the normal OS, or inside a container or chroot.

The Flatpak hack

This commit for Flatpak works around the problem. The binary for the emulator needs to have the right path, so it can be found within the chroot'ed environment, and it will need to be copied there so it is accessible too, which is what this patch will do for you.

Follow the instructions in the commit, and test it out with this Flatpak script for GNU Hello.

$ TARGET=arm ./build.sh
[...]
$ ls org.gnu.hello.arm.xdgapp
918k org.gnu.hello.arm.xdgapp

Ready to install on your device!

The proper way

The above solution was built before it looked like the "proper way" was going to find its way in the upstream kernel. This should hopefully land in the upcoming 4.8 kernel.

Instead of launching a separate binary for each non-native invocation, this patchset allows the kernel to keep the binary opened, so it doesn't need to be copied to the container.

In short

With the work being done on Fedora's static QEmu user-space emulators, and the kernel feature that will land, we should be able to have a nice tickbox in Builder to build for any of the targets supported by QEmu.

Get cross-compiling!

by Bastien Nocera (noreply@blogger.com) at August 11, 2016 03:00 PM

August 10, 2016

Zeeshan Ali: Life is change

(Zeeshan Ali)
Quite a few major life events happened/are happening this summer, so I thought I'd blog about them and some of the experiences I had.

New job & new city/country

Yes, I found it hard to believe too that I'd ever be leaving Red Hat and the best manager I ever had (no offence to others, but competing with Matthias is just impossible), but I'll be moving to Gothenburg to join the Pelagicore folks as a Software Architect in just 2 weeks. I have always found Swedish to be a very cute language, so I'm looking forward to my attempt at learning it. If only I had learnt Swedish rather than Finnish when I was in Finland.

BTW, I'm selling all my furniture so if you're in London and need some furniture, get in touch!

Fresh helicopter pilot

So after two years of hard work and sinking myself into bank loans, I finally did it! Last week I passed the skills test for the Private Pilot License (Helicopters), and I'm currently waiting anxiously for my license to come through (it usually takes at least two weeks). Once I have that, I can rent helicopters and take passengers with me. I'll be able to share the costs with passengers, but I'm not allowed to make money out of it. The test was very tough and I came very close to failing at one particular point. The good news is that despite me being very tense, and the very windy conditions on test day, the biggest negative point from my examiner was that I was being over-cautious and hence very slow. So I think it wasn't so bad.



There are a few differences from a driving test. A minor one is that in a driving test you are not expected to explain your steps but simply execute them, whereas in the skills test for flying you're expected to think everything out loud. But the biggest difference is that in a driving test you are not expected to drive on your own until you pass the test, whereas for the flying test you are required to have flown solo for at least 10 hours, which needs to include a solo cross-country flight of at least 100 nautical miles (185 km) involving 3 major aerodromes. Mine involved Elstree, Cranfield and Duxford. I've been GPS logging while flying, so I can show you the log of my qualifying solo cross-country flight (click here to see details and notes):



I've still got a long way to go towards a Commercial License, but at least now I can share the cost with friends, so building hours towards the commercial license won't be so expensive (I hope). I've found a nice company in Gothenburg that trains in and rents helicopters, so I'm very much looking forward to flying over the coasts there. Wanna join? Let me know. :)

August 10, 2016 02:10 PM

August 09, 2016

Bastien Nocera: Blog backlog, Post 4, Headset fixes for Dell machines

(Bastien Nocera) At the bottom of the release notes for GNOME 3.20, you might have seen the line:
If you plug in an audio device (such as a headset, headphones or microphone) and it cannot be identified, you will now be asked what kind of device it is. This addresses an issue that prevented headsets and microphones being used on many Dell computers.
Before I start explaining what this does, as a picture is worth a thousand words:


This selection dialogue is one you will get on some laptops and desktop machines when the hardware is not able to detect whether the plugged in device is headphones, a microphone, or a combination of both, probably because it doesn't have an impedance detection circuit to figure that out.

This functionality was integrated into Unity's gnome-settings-daemon version a couple of years ago, written by David Henningsson.

The code that existed for this functionality was completely independent, not using any of the facilities available in the media-keys plugin for handling volume keys, and it could probably have been split out as an external binary with very little effort.

After a bit of to and fro, most of the sound backend functionality was merged into libgnome-volume-control, leaving just 2 entry points, one to signal that something was plugged into the jack, and another to select which type of device was plugged in, in response to the user selection. This means that the functionality should be easily implementable in other desktop environments that use libgnome-volume-control to interact with PulseAudio.

Many thanks to David Henningsson for the original code, and his help integrating the functionality into GNOME, Bednet for providing hardware to test and maintain this functionality, and Allan, Florian and Rui for working on the UI notification part of the functionality, and wiring it all up after I abandoned them to go on holidays ;)

by Bastien Nocera (noreply@blogger.com) at August 09, 2016 10:49 AM

August 06, 2016

Jean-François Fortin Tam: Vice-President’s Report — The State of the GNOME Foundation

Hi! Long time no see. My blog has been pretty quiet in recent months, in big part due to my extended commitment on the GNOME Foundation‘s Board of Directors (for a second year without an executive director present to take some of the load) and the various business engagements I’ve had.

Generally speaking, this year was a bit less intense than the one before it (we didn’t have to worry about a legal battle with a giant corporation this time around!) although we did end up touching a fair amount of legal matters, such as trademark agreements. One big item we got cleared was the Ubuntu GNOME trademark agreement. We also welcomed businesses that wanted to sell GNOME-related merchandise, you can find them listed here—supporting them by purchasing GNOME-related items also supports the Foundation with a small percentage shared as royalties.

gnome 2016 merchandize - 1 gnome 2016 merchandize - 3 gnome 2016 merchandize - 4 gnome 2016 merchandize - 2

In the summer of 2015, I thought I’d take a break from my presidency from the year before, so I was pretty happy to have a new president and vice-president starting at GUADEC 2015 and to be just a regular board member. Some months later, Christian Hergert had to step down from his role as vice-president because he joined Red Hat, and the GNOME Foundation has a rule where the board of directors cannot have more than two members (out of seven) from the same company/employer. I took over his role as vice-president then.

And so it went:

tails flying and supporting sonic

Pictured: me, helping Shaun tackle the big things. Sonic boom!

The board has done a lot of work in recent months. In addition to the legal agreements mentioned above, since my last report we’ve held 50 meetings (double the amount from last year; it seems bi-weekly meetings were not enough to cover all that we had to discuss!), over 2400 emails were exchanged on the board mailing list, and we wrote over 24,800 lines of discussion on our IRC channel.

When the 2016 elections came up, I thought it was time to let new blood come in and participate. I needed to move on and focus on growing my own business, which I had been neglecting for two years, anyway. As the new board came in, I have been gradually winding down (my role after the election and until GUADEC is mostly advisory, as I do not hold voting powers).

I am excited about the team that composes the new Board of Directors and I trust that they will do a great job. The GNOME Foundation always needs a team of experienced, positive and energetic people to come together to think, discuss and make decisions regarding the various challenges it faces. As I wrote during last year’s election period:

For this to work, we need people that are what I call “powerhouses”, because the GNOME Foundation Board is an “active” board. This means great thinkers and proactive doers ready to deal with anything while being very capable in the board room.

The best metaphor I have for a healthy GNOME board is taken from role-playing games: a well-coordinated “level 45-70” party that will not be afraid to crawl dungeons together for a year. You need polyvalent classes just like you need specialists (analytic mages, “massive damage” knights, resourceful healers, quick & agile rangers, etc.).

So if this makes sense to anyone, I’m a hybrid mage-knight with a ton of HP/MP potions and phoenix feathers ;)

Party line-up, by Laikkuseia

During the 2016 winter and spring, I have also been heavily involved in the engagement team to redesign the Annual Report and get it ready in time for GUADEC. I’ll bring it along with me in my luggage. Do you want a copy? Let me know by tomorrow (Sunday):

(note that I can also bring copies of last year’s report if you’d like to have them for your vintage collection)

I am looking forward to seeing all of you at GUADEC next week; don’t forget to register! And look forward to some very interesting news coming up from my side in the near future.

by nekohayo at August 06, 2016 01:06 PM

July 31, 2016

Nirbheek Chauhan: GStreamer and Meson: A New Hope

Anyone who has written a non-trivial project using Autotools has realized that (and wondered why) it requires you to be aware of 5 different languages. Once you spend enough time with the innards of the system, you begin to realize that it is nothing short of an astonishing feat of engineering. Engineering that belongs in a museum. Not as part of critical infrastructure.

Autotools was created in the 1980s and caters to the needs of an entirely different world of software from what we have at present. Worse yet, it carries over accumulated cruft from the past 40 years — ostensibly for better “cross-platform support” but that “support” is mostly for extinct platforms that five people in the whole world remember.

We've learned how to make it work for most cases that concern FOSS developers on Linux, and it can be made to limp along on other platforms that the majority of people use, but it does not inspire confidence or really anything except frustration. People will not like your project or contribute to it if the build system takes 10x longer to compile on their platform of choice, does not integrate with the preferred IDE, and requires knowledge arcane enough to be indistinguishable from cargo-cult programming.

As a result there have been several (terrible) efforts at replacing it and each has been either incomplete, short-sighted, slow, or just plain ugly. During my time as a Gentoo developer in another life, I came in close contact with and developed a keen hatred for each of these alternative build systems. And so I mutely went back to Autotools and learned that I hated it the least of them all.

Sometime last year, Tim heard about this new build system called ‘Meson’ whose author had created an experimental port of GStreamer that built it in record time.

Intrigued, he tried it out and found that it finished suspiciously quickly. His first instinct was that it was broken and hadn’t actually built everything! Turns out this build system written in Python 3 with Ninja as the backend actually was that fast. About 2.5x faster on Linux and 10x faster on Windows for building the core GStreamer repository.

Upon further investigation, Tim and I found that Meson also has really clean generic cross-compilation support (including iOS and Android), runs natively (and just as quickly) on OS X and Windows, supports GNU, Clang, and MSVC toolchains, and can even (configure and) generate XCode and Visual Studio project files!

But the critical thing that convinced me was that the creator Jussi Pakkanen was genuinely interested in the use-cases of widely-used software such as Qt, GNOME, and GStreamer and had already added support for several tools and idioms that we use — pkg-config, gtk-doc, gobject-introspection, gdbus-codegen, and so on. The project places strong emphasis on both speed and ease of use and is quite friendly to contributions.

Over the past few months, Tim and I at Centricular have been working on creating Meson ports for most of the GStreamer repositories and the fundamental dependencies (libffi, glib, orc) and improving the MSVC toolchain support in Meson.

We are proud to report that you can now build GStreamer on Linux using the GNU toolchain and on Windows with either MinGW or MSVC 2015 using Meson build files that ship with the source (building upon Jussi's initial ports).
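If you want to try those Meson build files yourself, the flow is the standard Meson one. A minimal sketch, assuming a checkout that carries the Meson build files described above (options and backend names may vary with your Meson version):

# Configure an out-of-tree build directory, then compile with Ninja.
cd gstreamer
mkdir build
meson build            # add --backend=vs2015 on Windows for VS projects
ninja -C build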

Other toolchain/platform combinations haven't been tested yet, but they should work in theory (minus bugs!), and we intend to test and bugfix all the configurations supported by GStreamer (Linux, OS X, Windows, iOS, Android) before proposing it for inclusion as an alternative build system for the GStreamer project.

You can either grab the source yourself and build everything, or use our (with luck, temporary) fork of GStreamer's cross-platform build aggregator Cerbero.

Update: I wrote a new post with detailed steps on how to build using Cerbero and generate Visual Studio project files.

Personally, I really hope that Meson gains widespread adoption. Calling Autotools the Xorg of build systems is flattery. It really is just a terrible system. We really need to invest in something that works for us rather than against us.

PS: If you just want a quick look at what the build system syntax looks like, take a look at this or the basic tutorial.

by Nirbheek (noreply@blogger.com) at July 31, 2016 02:22 AM

July 27, 2016

Nirbheek Chauhan: Building and Developing GStreamer using Visual Studio

Two months ago, I talked about how we at Centricular have been working on a Meson port of GStreamer and its basic dependencies (glib, libffi, and orc) for various reasons — faster builds, better cross-platform support (particularly Windows), better toolchain support, ease of use, and for a better build system future in general.

Meson also has built-in support for things like gtk-doc, gobject-introspection, translations, etc. It can even generate Visual Studio project files at build time so projects don't have to expend resources maintaining those separately.

Today I'm here to share instructions on how to use Cerbero (our “aggregating” build system) to build all of GStreamer on Windows using MSVC 2015 (wherever possible). Note that this means you won't see any Meson invocations at all because Cerbero does all that work for you.

Note that this is still all unofficial and has not been proposed for inclusion upstream. We still have a few issues that need to be ironed out before we can do that¹.

First, you need to set up the environment on Windows by installing a bunch of external tools: Python 2, Python 3, Git, etc. You can find the instructions for that here:

https://github.com/centricular/cerbero#windows

This is very similar to the old Cerbero instructions, but some new tools are needed. Once you've done everything there (Visual Studio especially takes a while to fetch and install itself), the next step is fetching Cerbero:

$ git clone https://github.com/centricular/cerbero.git

This will clone and checkout the meson-1.8 branch that will build GStreamer 1.8.x. Next, we bootstrap it:

https://github.com/centricular/cerbero#bootstrap

Now we're (finally) ready to build GStreamer. Just invoke the package command:

python2 cerbero-uninstalled -c config/win32-mixed-msvc.cbc package gstreamer-1.0

This will build all the `recipes` that constitute GStreamer, including the core libraries and all the plugins along with their external dependencies. This comes to about 76 recipes. Out of all these recipes, only the following are ported to Meson and are built with MSVC:

bzip2.recipe
orc.recipe
libffi.recipe (only 32-bit)
glib.recipe
gstreamer-1.0.recipe
gst-plugins-base-1.0.recipe
gst-plugins-good-1.0.recipe
gst-plugins-bad-1.0.recipe
gst-plugins-ugly-1.0.recipe

The rest still mostly use Autotools, plain GNU make or cmake. Almost all of these are still built with MinGW. The only exception is libvpx, which uses its custom make-based build system but is built with MSVC.

Eventually we want to build everything including all external dependencies with MSVC by porting everything to Meson, but as you can imagine it's not an easy task. :-)

However, even with just these recipes, there is a large improvement in how quickly you can build all of GStreamer inside Cerbero on Windows. For instance, the time required for building gstreamer-1.0.recipe which builds gstreamer.git went from 10 minutes to 45 seconds. It is now easier to do GStreamer development on Windows since rebuilding doesn't take an inordinate amount of time!

As a further improvement for doing GStreamer development on Windows, for all these recipes (except libffi because of complicated reasons), you can also generate Visual Studio 2015 project files and use them from within Visual Studio for editing, building, and so on.

Go ahead, try it out and tell me if it works for you!

As an aside, I've also been working on some proper in-depth documentation of Cerbero that explains how the tool works, the recipe format, supported configurations, and so on. You can see the work-in-progress if you wish to.

1. Most importantly, the tests cannot be built yet because GStreamer bundles a very old version of libcheck. I'm currently working on fixing that.

by Nirbheek (noreply@blogger.com) at July 27, 2016 03:58 PM

July 19, 2016

Bastien Nocera: GUADEC Flatpak contest

(Bastien Nocera) I will be presenting a lightning talk during this year's GUADEC, and running a contest related to what I will be presenting.

Contest

To enter the contest, you will need to create a Flatpak for a piece of software that hasn't been flatpak'ed up to now (application, runtime or extension), hosted in a public repository.

You will have to send me an email about the location of that repository.

I will choose a winner amongst the participants, on the eve of the lightning talks, depending on, but not limited to, the difficulty of packaging, the popularity of the software packaged and its redistributability potential.

You can find plenty of examples (and a list of already packaged applications and runtimes) on this Wiki page.

Prize

A piece of hardware that you can use to replicate my presentation (or to replicate my attempts at a presentation, depending ;). You will need to be present during my presentation at GUADEC to claim your prize.

Good luck to one and all!

by Bastien Nocera (noreply@blogger.com) at July 19, 2016 03:39 PM

July 12, 2016

GStreamer: GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate 1.9.1 unstable release (binaries)

(GStreamer)

Pre-built binary images of the 1.9.1 unstable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

July 12, 2016 12:00 AM

July 06, 2016

GStreamer: GStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI, OMX 1.9.1 unstable release

(GStreamer)

The GStreamer team is pleased to announce the first release of the unstable 1.9 release series. The 1.9 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6 and 1.8 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework. The unstable 1.9 release series will lead to the stable 1.10 release series in the next weeks. Any newly added API can still change until that point.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

July 06, 2016 12:00 PM

July 01, 2016

Víctor Jáquez: VA-API and DRM/KMS in MinnowBoard

In 2012 I started to work on a video renderer for GStreamer which uses the DRM/KMS kernel subsystem directly to display images. I even blogged about it, but I didn’t finish it.

Nonetheless, in December last year, a customer asked us to finish the element, but this time focusing on i.MX6, rather than OMAP4. Consequently, early this year, kmssink got merged into upstream.

The video sink has some nice features, such as DMA-buf importing through the PRIME kernel interface. This feature makes a zero-copy path possible when the video decoder delivers dmabuf-backed frames. For this particular project, the kernel we used supports the CODA media subsystem, and through the GStreamer element v4l2videodec we could link pipelines negotiating dmabuf sharing, and thus we played videos very efficiently.
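For illustration, such a pipeline could look roughly like this on the command line; the demuxer, parser and decoder element names depend on the container format, the kernel and the board, so treat this as a sketch rather than the exact command used for the project:

# Decode H.264 via the V4L2 decoder and push dmabufs straight to KMS.
gst-launch-1.0 filesrc location=video.mp4 ! qtdemux ! h264parse ! v4l2videodec ! kmssink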

During the last GStreamer Hackfest, Nicolas Dufresne got the MFC media subsystem, for Exynos, working with v4l2videodec and kmssink, but I don’t remember if he also got the dmabuf sharing working.

All in all, kmssink had only been tested on a couple of ARM devices, so I wondered, as a gstreamer-vaapi co-maintainer, whether it also works on x86 devices.

Some colleagues at Igalia use a MinnowBoard to test WebKit4Wayland, so I borrowed it, installed a minimal Fedora 23 on it, and, surprisingly, found out that it has nice VA-API support:

   $ vainfo
   error: can't connect to X server!
   libva info: VA-API version 0.38.1
   libva info: va_getDriverName() returns 0
   libva info: Trying to open /usr/lib64/dri/i965_drv_video.so
   libva info: Found init function __vaDriverInit_0_38
   libva info: va_openDriver() returns 0
   vainfo: VA-API version: 0.38 (libva 1.6.2)
   vainfo: Driver version: Intel i965 driver for Intel(R) Bay Trail - 1.6.2
   vainfo: Supported profile and entrypoints
         VAProfileMPEG2Simple            : VAEntrypointVLD
         VAProfileMPEG2Simple            : VAEntrypointEncSlice
         VAProfileMPEG2Main              : VAEntrypointVLD
         VAProfileMPEG2Main              : VAEntrypointEncSlice
         VAProfileH264ConstrainedBaseline: VAEntrypointVLD
         VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
         VAProfileH264Main               : VAEntrypointVLD
         VAProfileH264Main               : VAEntrypointEncSlice
         VAProfileH264High               : VAEntrypointVLD
         VAProfileH264High               : VAEntrypointEncSlice
         VAProfileH264StereoHigh         : VAEntrypointVLD
         VAProfileVC1Simple              : VAEntrypointVLD
         VAProfileVC1Main                : VAEntrypointVLD
         VAProfileVC1Advanced            : VAEntrypointVLD
         VAProfileNone                   : VAEntrypointVideoProc
         VAProfileJPEGBaseline           : VAEntrypointVLD


Afterwards, I compiled GStreamer using gst-uninstalled in order to have kmssink and the master branch of gstreamer-vaapi. And I got it! I had hardware accelerated decoding with VA-API, and video rendering using KMS/DRM. But, sadly, no zero-copy, since, in order to have dmabuf sharing, we have to finish and merge bug 755072 in gstreamer-vaapi.
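A pipeline along these lines exercises that path (a sketch, not necessarily the exact command used; the file name is a placeholder):

# VA-API hardware H.264 decoding, rendered directly through KMS/DRM.
gst-launch-1.0 filesrc location=big_buck_bunny_1080p.mov ! qtdemux ! h264parse ! vaapidecode ! kmssink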

Running the typical Big Buck Bunny video (1080p, H.264), the playback consumes less than 50% CPU, compared with 100% CPU usage with software decoders (and a lot of frames dropped).

I recorded a small video with the experiment as a proof.

Update: Javier Martinez told me on Twitter that MFC can do dma-buf export, since it uses vb2, and that Exynos DRM can import, but he couldn’t get zero-copy to work due to missing format support.

by vjaquez at July 01, 2016 05:00 PM

June 29, 2016

Arun Raghavan: Beamforming in PulseAudio

In case you missed it — we got PulseAudio 9.0 out the door, with the echo cancellation improvements that I wrote about. Now is probably a good time for me to make good on my promise to expand upon the subject of beamforming.

As with the last post, I’d like to shout out to the wonderful folks at Aldebaran Robotics who made this work possible!

Beamforming

Beamforming as a concept is used in various aspects of signal processing including radio waves, but I’m going to be talking about it only as applied to audio. The basic idea is that if you have a number of microphones (a mic array) in some known arrangement, it is possible to “point” or steer the array in a particular direction, so sounds coming from that direction are made louder, while sounds from other directions are rendered softer (attenuated).

Practically speaking, it should be easy to see the value of this on a laptop, for example, where you might want to focus a mic array to point in front of the laptop, where the user probably is, and suppress sounds that might be coming from other locations. You can see an example of this in the webcam below. Notice the grilles on either side of the camera — there is a microphone behind each of these.

Webcam with 2 mics

This raises the question of how this effect is achieved. The simplest approach is called “delay-sum beamforming”. The key idea in this approach is that if we want to steer an array of microphones at a particular angle, the sound we want to steer at will reach each microphone at a different time. This is illustrated below. The image is taken from this great article describing the principles and math in a lot more detail.

Delay-sum beamforming

In this figure, you can see that the sound from the source we want to listen to reaches the top-most microphone slightly before the next one, which in turn captures the audio slightly before the bottom-most microphone. If we know the distance between the microphones and the angle to which we want to steer the array, we can calculate the additional distance the sound has to travel to each microphone.

The speed of sound in air is roughly 340 m/s, and thus we can also calculate how much of a delay occurs between the same sound reaching each microphone. The signal at the first two microphones is delayed using this information, so that we can line up the signal from all three. Then we take the sum of the signal from all three (actually the average, but that’s not too important).
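In symbols, this is the standard delay-and-sum formulation (a textbook rendering, not taken verbatim from the article): for $M$ microphones at positions $d_i$ along the array axis, steering angle $\theta$, speed of sound $c$ and sample rate $f_s$,

$$ y[n] = \frac{1}{M} \sum_{i=1}^{M} x_i\!\left[n - \tau_i f_s\right], \qquad \tau_i = \frac{d_i \sin\theta}{c} $$

where the per-microphone delay in samples, $\tau_i f_s$, is in general fractional; that is exactly the problem tackled in the implementation section below.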

The signal from the direction we’re pointing in is going to be strongly correlated, so it will turn out loud and clear. Signals from other directions will end up being attenuated because they will only occur in one of the mics at a given point in time when we’re summing the signals — look at the noise wavefront in the illustration above as an example.

Implementation

(this section is a bit more technical than the rest of the article, feel free to skim through or skip ahead to the next section if it’s not your cup of tea!)

The devil is, of course, in the details. Given the microphone geometry and steering direction, calculating the expected delays is relatively easy. We capture audio at a fixed sample rate — let’s assume this is 32000 samples per second, or 32 kHz. That translates to one sample every 31.25 µs. So if we want to delay our signal by 125µs, we can just add a buffer of 4 samples (4 × 31.25 = 125). Sound travels about 4.25 cm in that time, so this is not an unrealistic example.

Now, instead, assume the signal needs to be delayed by 80 µs. This translates to 2.56 samples. We’re working in the digital domain — the mic has already converted the analog vibrations in the air into digital samples that have been provided to the CPU. This means that our buffer delay can either be 2 samples or 3, not 2.56. We need another way to add a fractional delay (else we’ll end up with errors in the sum).

There is a fair amount of academic work describing methods to perform filtering on a sample to provide a fractional delay. One common way is to apply an FIR filter. However, to keep things simple, the method I chose was the Thiran approximation — the literature suggests that it performs the task reasonably well, and has the advantage of not having to spend a whole lot of CPU cycles first transforming to the frequency domain (which an FIR filter requires)(edit: converting to the frequency domain isn’t necessary, thanks to the folks who pointed this out).

I’ve implemented all of this as a separate module in PulseAudio as a beamformer filter module.

Now it’s time for a confession. I’m a plumber, not a DSP ninja. My delay-sum beamformer doesn’t do a very good job. I suspect part of it is the limitation of the delay-sum approach, partly the use of an IIR filter (which the Thiran approximation is), and it’s also entirely possible there is a bug in my fractional delay implementation. Reviews and suggestions are welcome!

A Better Implementation

The astute reader has, by now, realised that we are already doing a bunch of processing on incoming audio during voice calls — I’ve written in the previous article about how the webrtc-audio-processing engine provides echo cancellation, automatic gain control, voice activity detection, and a bunch of other features.

Another feature that the library provides is — you guessed it — beamforming. The engineers at Google (who clearly are DSP ninjas) have a pretty good beamformer implementation, and this is also available via module-echo-cancel. You do need to configure the microphone geometry yourself (which means you have to manually load the module at the moment). Details are on our wiki (thanks to Tanu for that!).
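For reference, manually loading it looks something like the following; the argument names are taken from the PulseAudio documentation of the time, and the geometry values (two mics 4 cm apart, given as x,y,z triplets in metres) are an example to adapt to your hardware:

# Enable webrtc echo cancellation with beamforming for a 2-mic array,
# mics at x = -2 cm and x = +2 cm.
pactl load-module module-echo-cancel aec_method=webrtc \
    aec_args="beamforming=1 mic_geometry=-0.02,0,0,0.02,0,0"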

How well does this work? Let me show you. The image below is me talking to my laptop, which has two microphones about 4cm apart, on either side of the webcam, above the screen. First I move to the right of the laptop (about 60°, assuming straight ahead is 0°). Then I move to the left by about the same amount (the second speech spike). And finally I speak from the center (a couple of times, since I get distracted by my phone).

The upper section represents the microphone input — you’ll see two channels, one corresponding to each mic. The bottom part is the processed version, with echo cancellation, gain control, noise suppression, etc. and beamforming.

WebRTC beamforming

You can also listen to the actual recordings …

https://arunraghavan.net/wp-content/uploads/aec_rec.mp3

… and the processed output.

https://arunraghavan.net/wp-content/uploads/aec_bf.mp3

Feels like black magic, doesn’t it?

Finishing thoughts

The webrtc-audio-processing-based beamforming is already available for you to use. The downside is that you need to load the module manually, rather than have this automatically plugged in when needed (because we don’t have a way to store and retrieve the mic geometry). At some point, I would really like to implement a configuration framework within PulseAudio to allow users to set configuration from some external UI and have that be picked up as needed.

Nicolas Dufresne has done some work to wrap the webrtc-audio-processing library functionality in a GStreamer element (and this is in master now). Adding support for beamforming to the element would also be good to have.

The module-beamformer bits should be a good starting point for folks who want to wrap their own beamforming library and have it used in PulseAudio. Feel free to get in touch with me if you need help with that.

by Arun at June 29, 2016 05:22 AM

June 21, 2016

Bastien Nocera: AAA game, indie game, card-board-box

(Bastien Nocera)
Early bird gets eaten by the Nyarlathotep
 
The more adventurous of you can use those (designed as embeddable) Lua scripts to transform your DRM-free GOG.com downloads into Flatpaks.

The long-term goal would obviously be for this not to be needed, and for online games stores to ship ".flatpak" files, with metadata so we know what things are in GNOME Software, which automatically picks up the right voice/subtitle language, and presents its extra music and documents in the respective GNOME applications.
 
But in the meanwhile, and for the sake of the games already out there, there's flatpak-games. Note that lua-archive is still fiddly.
 
Support for a few more Humble Bundle formats (some formats are already supported), grab-all RPMs and Debs, and those old Loki games is also planned.
 
It's late here, I'll be off to do some testing I think :)

PS: Even though I have enough programs that would fail to create bundles in my personal collection to accept "game donations", I'm still looking for original copies of Loki games. Drop me a message if you can spare one!

by Bastien Nocera (noreply@blogger.com) at June 21, 2016 08:57 PM

Christian Schaller: Fedora Workstation 24 is out and Flatpak is now officially launched!

(Christian Schaller)

This is a very exciting day for me as two major projects I am deeply involved with are having a major launch. First of all, Fedora Workstation 24 is out, which crosses a few critical milestones for us. Maybe most visible is that this is the first time you can use the new graphical update mechanism in GNOME Software to take you from Fedora Workstation 23 to Fedora Workstation 24. This means that when you open GNOME Software it will show you an option to do a system upgrade to Fedora Workstation 24. We’ve been testing and doing a lot of QA work around this feature, so my expectation is that it will provide a smooth upgrade experience for you.
Fedora System Upgrade

The second major milestone is that we feel Wayland is now in a state where the vast majority of users should be able to use it on a day to day basis. We’ve been working through the kinks and resolving many corner cases during the previous 6 months, with a lot of effort put into making sure that the interaction between applications running natively on Wayland and those running using XWayland is smooth. For instance, one item we crossed off the list early in this development cycle was adding middle-mouse-button cut and paste, as we know that was a crucial feature for many long-time Linux users looking to make the switch. So once you have updated, I ask all of you to try switching to the Wayland session by clicking on the little cogwheel in the login screen, so that we get as much testing as possible of Wayland during the Fedora Workstation 24 lifespan. Feedback provided by our users during the Fedora Workstation 24 lifecycle will be crucial information to allow us to make the final decision about Wayland as the default for Fedora Workstation 25. Of course the team will be working ardently during Fedora Workstation 24 to make sure we find and address any niggling issues left.

In addition to that there is also of course a long list of usability improvements, new features and bugfixes across the desktop, both coming in from our desktop team at Red Hat and from the GNOME community in general.

There was also the formal announcement of Flatpak today (be sure to read that press release), which is the great new technology for shipping desktop applications. For those of you who have read my previous blog entries, you have probably seen me talking about this technology under its old name, xdg-app. Flatpak is an incredible piece of engineering, designed by Alexander Larsson, that we developed alongside a lot of other components.
Because, as Matthew Garrett pointed out not long ago, unless we move away from X11 we cannot really produce a secure desktop container technology, which is why we kept such a high focus on pushing Wayland forward for the last year. It is also why we invested so much time into Pinos, which is, as I mentioned in my original announcement of the project, our video equivalent of PulseAudio (and yes, a proper Pinos website is getting close :). Wim Taymans, who created Pinos, has also been working on patches to PulseAudio to make it more suitable for use with sandboxed applications, and those patches have recently been taken over by community member Ahmed S. Darwish who is trying to get them ready for merging into the main codebase.

We are feeling very confident about Flatpak as it has a lot of critical features designed in from the start. First of all, it was built to be a cross-distribution solution from day one, meaning that making Flatpak run on any major Linux distribution out there should be simple. We already have Simon McVittie working on Debian support, we have Arch support, and there is also an Ubuntu PPA that the team put together that allows you to run Flatpaks fully featured on Ubuntu. And Endless Mobile has chosen Flatpak as their application delivery format going forward for their operating system.
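To give a feel for the end-user flow, installing and running an application from the command line looks roughly like this (the remote name, URL and application id are illustrative, patterned on the getting-started examples of the time):

# Add a remote from a .flatpakrepo file, then install and run an app.
flatpak remote-add --from gnome-apps https://sdk.gnome.org/gnome-apps.flatpakrepo
flatpak install gnome-apps org.gnome.gedit
flatpak run org.gnome.gedit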

We use the same base technologies as Docker like namespaces, bind mounts and cgroups for Flatpak, which means that any system out there wanting to support Docker images would also have the necessary components to support Flatpaks. Which means that we will also be able to take advantage of the investment and development happening around server side containers.

Flatpak also makes heavy use of another exciting technology, OSTree, which was originally developed by Colin Walters for GNOME. This technology is actually seeing a lot of investment and development these days as it became the foundation for Project Atomic, which is Red Hat’s effort to create an enterprise-ready platform for running server side containers. OSTree provides us with a lot of important features, like efficient storage of application images and a very efficient transport mechanism. For example, one core feature OSTree brings us is de-duplication of files, which means you don’t need to keep multiple copies of the same file on your disk, so if ten Flatpak images share the same file, then you only keep one copy of it on your local disk.

Another critical feature of Flatpak is its runtime separation, which basically means that you can have different runtimes for some families of usecases. So for instance you can have a GNOME runtime that allows all your GNOME applications to share a lot of libraries yet giving you a single point for security updates to those libraries. So while we don’t want a forest of runtimes it does allow us to create a few important ones to cover certain families of applications and thus reduce disk usage further and improve system security.

Going forward we are looking at a lot of exciting features for Flatpak. The most important of these is the thing I mentioned earlier, Portals.
In the current release of Flatpak you can choose between two options: either make your application completely sandboxed, or not sandboxed at all. Portals are basically the way you can sandbox your application yet still allow it to interact with your general desktop and storage. For instance, the role of Pinos and PulseAudio for containers is to provide such portals for handling audio and video. Of course more portals are needed, and during the GTK+ hackfest in Toronto last week a lot of time was spent on mapping out the roadmap for Portals. Expect more news about Portals as they get developed going forward.

I want to mention that we of course realize that a new technology like Flatpak should come with a high quality developer story, which is why Christian Hergert has been spending time working on support for Flatpak in the Builder IDE. There is some support in already, but expect to see us flesh this out significantly over the next months. We are also working on adding more documentation to the Flatpak website, covering how to integrate more build systems and the like with Flatpak.

And last, but not least, Richard Hughes has been making sure we have great Flatpak support in GNOME Software in Fedora Workstation 24, ensuring that as an end user you shouldn’t have to care about whether your application is a Flatpak or an RPM.

by uraeus at June 21, 2016 06:38 PM

June 20, 2016

Guillaume Desmottes: GStreamer leaks tracer

Here at Collabora we are pretty interested in improving QA tools in GStreamer. Thibault is, for example, doing a great job on gst-validate, ensuring that a lot of code paths are regularly tested using real life scenarios. Last year I added Valgrind support to gst-validate, allowing us to automatically detect memory leaks in test scenarios. My goal was to integrate this as part of GStreamer's automatic QA to prevent memory leak regressions. While this can sometimes be a good approach to track leaks (a bare-bones Valgrind invocation, for comparison, is sketched right after the list below), it has a few downsides:

  • Valgrind can be very CPU and/or memory consuming which can be a problem with longer scenarios or on limited hardware such as embedded devices.
  • As a result running the full tests suite with valgrind can take ages.
  • Valgrind checks for any potential memory leak, which can lead to a lot of false positives or leaks in low level system libraries over which we have little control. We usually work around this problem using suppression files, but they are generally very fragile and depend a lot on the system/distribution which has been used for testing.
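For context, a bare-bones Valgrind run over a simple pipeline looks something like this (plain Valgrind flags, not the gst-validate integration itself):

# Plain Valgrind leak check of a minimal pipeline; expect it to run
# much slower than the same pipeline without Valgrind.
valgrind --leak-check=full --num-callers=20 \
    gst-launch-1.0 videotestsrc num-buffers=10 ! fakesink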

I tried to solve these issues with a new approach using GstTracer. Tracers are a new mechanism introduced in GStreamer 1.8 allowing tools to hook into GStreamer internals and collect data. So I started by adding tracer hooks for when GstObject and GstMiniObject are created and destroyed. Then I implemented a new tracer tracking the lifetime of (mini)objects and listing those which are still alive when the application exits. This worked pretty well, but I needed a way to discard objects which are intentionally leaked (false positives). To do so I introduced a new (mini)object flag allowing us to mark such objects.

I'm pretty happy with the result: while proof-testing this tool I found and fixed dozens of leaks in GStreamer (core, plugins and tests). Some of those fixes have already reached the 1.8.2 release. It's also very easy to use and doesn't require any external tool, unlike Valgrind (which can be tricky to integrate on some platforms).

To use it you just have to load the leaks tracer with your application and enable tracer logs:

GST_TRACERS="leaks" GST_DEBUG="GST_TRACER:7" gst-launch-1.0 videotestsrc num-buffers=10 ! fakesink

You can also filter out the types of GstObject or GstMiniObject tracked to reduce memory consumption:

GST_TRACERS="leaks(GstEvent,GstMessage)" GST_DEBUG="GST_TRACER:7" gst-launch-1.0 videotestsrc num-buffers=10 ! fakesink

This tracer has recently been merged into GStreamer core and will be part of the 1.9.1 release.

As future enhancements, I implemented live tracking and checkpointing support using signals, like I already did in gobject-list a while ago. I'd also like to be able to display the creation stack trace of leaked objects to easily spot the leaked instances. Finally, I opened a bug to discuss the integration of the tracer with the QA system.

by Guillaume Desmottes at June 20, 2016 10:03 AM

June 17, 2016

GStreamer: GStreamer Core, Plugins, RTSP Server, Editing Services, Validate 1.8.2 stable release (binaries)

(GStreamer)

Pre-built binary images of the 1.8.2 stable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

June 17, 2016 02:00 PM

June 09, 2016

GStreamerGStreamer Core, Plugins, RTSP Server, Editing Services, Python, Validate, VAAPI 1.8.2 stable release

(GStreamer)

The GStreamer team is pleased to announce the second bugfix release in the stable 1.8 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.8.1. For a full list of bugfixes see Bugzilla.

See /releases/1.8/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, or gstreamer-vaapi.

June 09, 2016 10:00 AM

May 25, 2016

Olivier CreteGStreamer Spring Hackfest 2016

After missing the last few GStreamer hackfests I finally managed to attend this time. It was held in Thessaloniki, Greece's second largest city. The city is located by the seaside, and the entire hackfest and related activities were either directly by the sea or just a couple of blocks away.

Collabora was very well represented, with Nicolas, Mathieu and Lubosz also attending.

Nicolas concentrated his efforts on making kmssink and v4l2dec work together to provide zero-copy decoding and display on an Exynos 4 board without a compositor or other form of display manager. Expect a blog post soon explaining how to make this all fit together.

Lubosz showed off his VR kit. He implemented a viewer for planar point clouds acquired from a Kinect. He’s working on a set of GStreamer plugins to play back spherical videos. He’s also promised to blog about all this soon!

Mathieu started the hackfest by investigating the intricacies of Albanian customs, then arrived on the second day in Thessaloniki and hacked on hotdoc, his new fancy documentation generation tool. He’ll also be posting a blog about it, however in the meantime you can read more about it here.

As for myself, I took the opportunity to fix a couple of GStreamer bugs that really annoyed me. First, I looked into bug #766422: why glvideomixer and compositor didn't work with RTSP sources. Then I tried to add a ->set_caps() virtual function to GstAggregator, but it turned out I first needed to delay all serialized events to the output thread to get predictable outcomes, and that was trickier than expected. Finally, I got distracted by a bee and decided to start porting the contents of docs.gstreamer.com to Markdown and updating it to the GStreamer 1.0 API so we can finally retire the old GStreamer.com website.

I’d also like to thank Sebastian and Vivia for organising the hackfest and for making us all feel welcome!

GStreamer Hackfest Venue

by ocrete at May 25, 2016 08:43 PM

Bastien NoceraBlog backlog, Post 3, DisplayLink-based USB3 graphics support for Fedora

(Bastien Nocera) Last year, after DisplayLink released the first version of the supporting tools for their USB3 chipsets, I tried it out on my Dell S2340T.

As I wanted a clean way to test new versions, I took Eric Nothen's RPMs, and updated them along with newer versions, automating the creation of 32- and 64-bit x86 versions.

The RPM contains 3 parts: evdi, a GPLv2 kernel module that creates a virtual display; the LGPL library to access it; and a proprietary service which comes with "firmware" files.

Eric's initial RPMs used the precompiled libevdi.so and proprietary bits, compiling only the kernel module with dkms when needed. I changed this, compiling the library from the upstream repository and using the minimal amount of pre-compiled binaries.

This package supports quite a few OEM devices, but does not work correctly with Wayland, so you'll need to disable Wayland support in /etc/gdm/custom.conf if you want it to work at the login screen, and to avoid having to restart the displaylink.service systemd service after logging in.
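
For reference, that change is a single key in /etc/gdm/custom.conf (WaylandEnable is the stock GDM option; something like this):

[daemon]
# Use the X11-based login screen so the DisplayLink output works there
WaylandEnable=false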


 Plugged in via DisplayPort and USB (but I can only see one at a time)


The source for the RPM is on GitHub. Simply clone the repository and run make in it to create 32-bit and 64-bit RPMs. The proprietary parts are redistributable, so if somebody wants to host and maintain those RPMs, I'd be glad to pass this on.

by Bastien Nocera (noreply@blogger.com) at May 25, 2016 05:18 PM

May 20, 2016

Víctor JáquezGStreamer Hackfest 2016

Yes, it happened again: the GStreamer Spring Hackfest 2016! This time in the beautiful city of Thessaloniki. Thanks a lot, Vivia and Sebastian, for making it happen.

Thessaloniki’s promenade

My objective this time was to work on dma-buf support in gstreamer-vaapi. Though it is supported already, it needs a major clean-up, and its usage needs to be extended to downstream buffers (bugs 755072 and 765435).

Along the way I learned that we need to update our internal API (called `libgstvaapi`) to support multi-plane formats when handling dma-buf.

On the other hand, Nicolas Dufresne and I talked a bit about kmssink, libdrm and dma-buf. He managed to hack his Odroid U (Exynos 4) to enable its V4L2 mem2mem video decoder and share buffers with kmssink. It was amazing. By the way, he promised me he would write a blog post with the instructions to replicate his deed.

GStreamer HackfestPhoto by Luis de Bethencourt

Finally, we had a preview of Edward Hervey‘s decodebin3. His test of switching between the different audio streams in a media container (the different available dubbings) every second or less was fun. It was truly multi-language audio!

In the meantime, we shared beers and meals, learning and laughing.

by vjaquez at May 20, 2016 10:32 AM

May 19, 2016

Jean-François Fortin Tam“SmartEco” or “Extreme Eco” projector lamp power saving modes are a trap

Here are some findings I’ve been meaning to post for a while.

A bit over a year ago, I fulfilled a decade-long dream of owning a good projector for movies, instead of some silly monitor with a diagonal measured in “inches”. My lifestyle very rarely allows me to watch movies (or series*), so when I decide to watch something, it needs to have a rating over 90% on RottenTomatoes, it has to be watched with a bunch of friends, and it needs to be a top-notch audio-visual experience. I already had a surround sound system for over a decade, so the projector was the only missing part of the puzzle.

After about six months of research and agonizing over projector choices, I narrowed it down to the infamous BenQ W1070, which uses conventional projection lamps (Aaxa’s LED projectors were not competitively priced at that time, costing more for a lower resolution and less connectivity):

2015-02-07--15.30.52

First power-on, with David Revoy‘s beautiful artwork as my wallpaper

In the process of picking up the BenQ W1070, I compared it to the Acer H6510BD and others, and this particular question came up: how realistic are manufacturers’ claims about their dynamic “lamp life saving” features?

The answer is:

bullshit - ten points from gryffindor

For starters, I asked Acer to clarify what their “Extreme Eco” feature really did, and to their credit they answered truthfully (emphasis mine):

Acer ExtremeEco technology reduces the lamp power to 30% enabling up to 70% power savings when there is no input signal, extending the lifespan of the lamp up to 7000 hours and reducing operating costs. 4,000 Hours (Standard), 5,000 Hours (ECO), 7,000 Hours (ExtremeEco)*
*: Lamp Life of ExtremeEco mode is based on an average usage cycle of 45 minutes ECO mode plus 75 minutes ExtremeEco (30% lamp power) mode

Yeah. “When there’s no input signal”. What kind of fool leaves a home theater projector turned on while unused for extended periods of time and thinks it won’t destroy the lamp’s life?

Now you’re wondering if Acer is the only one trying to be clever about marketing and if other manufacturers actually have better technology for lamp savings. After all, BenQ is very proud to say this (direct quote from their website):

By detecting the input content to determine the amount of brightness required for optimum color and contrast performance, the SmartEco Mode is able to reduce lamp power while delivering the finest image quality. No compromise!

Bundled with this chart, no less:

benq-smarteco-chart-nonsense

Note: “LampSave” mode is a different thing, it’s either the manual screen blanking button, or when SmartEco detects no more input signal at all and turns down the lamp, like Acer’s “ExtremeEco”!

So, perhaps the widely acclaimed BenQ W1070 projector’s “SmartEco” mode is a truly smart psychovisual image analysis system that continuously varies the intensity to make darker scenes go extra-dark, darker than the “constant” Eco Mode? Glad you asked, because I have the numbers to prove that’s not the case either.

I measured the BenQ W1070’s power consumption through a 3×6 conditions experiment:

  • Power scheme:
    • Normal mode
    • Eco mode
    • SmartEco mode
  • Content type:
    • A photo of the Hong Kong skyline at night
    • A static view of the TED.com website, split-screen at 50% of the width, on top of the Hong Kong image
    • A static view of the TED.com website, maximized
    • GIMP 2.10, maximized, with a pitch-black image and a minimum amount of UI controls (menubar, rulers, scrollbars, statusbar) as bright elements
    • That same 100% pitch-black static image, fullscreened
    • Playing Sintel and measuring the total electricity consumption over 15 minutes, in kWh

Here are some of the “content type” conditions above, so you have an idea what the static images looked like:

BenQ-W1070-static-benchmark-images

Here is a visual summary of the results for the “static images” conditions:

And here are the raw numbers:

                                        Normal mode   Eco mode     SmartEco mode
Static H.K. night skyline photo         264 watts     198 watts    273 watts
Static TED.com split-screen             264 watts     198 watts    276 watts
Static TED.com maximized                264 watts     198 watts    276 watts
Static GIMP (almost pitch-black)        264 watts     198 watts    256 watts
Static pitch-black image (fullscreen)   264 watts     198 watts    198 watts
15 minutes movie (Sintel)               0.066 kWh     0.0495 kWh   0.06x kWh

Note: for the Sintel benchmark, the “0.06x” figure lacks one decimal compared to the two other conditions (Normal and Eco modes). This is due to display limitations of my measurement instrument; the other values were calculated (as they are constant) and confirmed with the measurement instrument (e.g. Eco mode was measured at 0.04x kWh). It is thus possible that SmartEco’s figure was anywhere from 0.060 to 0.069 kWh.

As you can see, BenQ’s SmartEco provided no significant advantage:

  • With normal/bright still images, it increases consumption by as much as 3% compared to the full-blast “normal mode”; it reduces consumption by a similar amount (3%) when displaying very dark images, and does not under any circumstances (even a 100% pitch-black screen) fall below the constant power consumption of the regular “Eco mode”.
  • In practice, while watching movies that combine light and dark scenes, “SmartEco” mode might (or might not) reduce energy consumption compared to “normal mode”, by a small amount.
  • SmartEco will always be outperformed by the regular “eco mode”, which yields a constant 25% power saving.

Out of curiosity, I also tested all the other manual image settings in addition to “eco mode”, including: brightness, contrast, color temperature, “Brilliant Color” (on/off); none of these settings had any effect on power consumption.

My conclusion and recommendation:

  • “Smart” eco modes are dumb and will eat your lamps.
  • With current technology, home theater projectors should be locked down to a constant lamp economy mode to maximize longevity and gain other benefits. After all, you’re supposed to be sitting in a pitch-black room to maximize contrast and quality with any projector anyway, and the lower lamp power setting also reduces heat, and thus cooling fan noise.

Nonetheless, the W1070 is a very nice projector.


*: When I tell people I don’t watch any series at all, including [insert série-du-jour here], I typically get reactions like this:

you don't watch game of thrones

by nekohayo at May 19, 2016 02:07 PM

May 17, 2016

Sebastian DrögeWriting GStreamer plugins and elements in Rust

(Sebastian Dröge)

This weekend we had the GStreamer Spring Hackfest 2016 in Thessaloniki, my new home town. As usual it was great meeting everybody in person again, having many useful and interesting discussions and seeing what everybody was working on. It seems like everybody was quite productive during these days!

Apart from the usual discussions, cleaning up some of our Bugzilla backlog and getting various patches reviewed, I was working with Luis de Bethencourt on writing a GStreamer plugin with a few elements in Rust. Our goal was to be able to read and write a file, i.e. implement something like the “cp” command around gst-launch-1.0 using just the new Rust elements, while trying to have as little code written in C as possible and ending up with a more-or-less general Rust API for writing more source and sink elements. That’s all finished, including support for seeking, and I also wrote a small HTTP source.

For the impatient, the code can be found here: https://github.com/sdroege/rsplugin

Why Rust?

Now you might wonder why you would want to go through all the trouble of creating a bridge between GStreamer in C and Rust for writing elements. Other people have written much better texts about the advantages of Rust, which you might want to refer to if you’re interested: The introduction of the Rust documentation, or this free O’Reilly book.

But for myself the main reasons are that

  1. C is a rather antique and inconvenient language compared to more modern languages, and Rust provides a lot of features from higher-level languages while not introducing all the overhead that usually comes with them, and
  2. even more important are the safety guarantees of the language, including the strong type system and the borrow checker, which make a whole category of bugs much less likely to happen. That saves you time during development, but it also saves your users from having their applications crash on them in the best case, or from having their precious data lost or stolen in the worst.

Rust is not the panacea for everything, and not even the perfect programming language for every problem, but I believe it has a lot of potential in various areas, including multimedia software where you have to handle lots of complex formats coming from untrusted sources and still need to provide high performance.

I’m not going to write a lot about the details of the language, for that just refer to the website and very well written documentation. But, although being a very young language not developed by a Fortune 500 company (it is developed by Mozilla and many volunteers), it is nowadays being used in production already at places like Dropbox or Firefox (their MP4 demuxer, and in the near future the URL parser). It is also used by Mozilla and Samsung for their experimental, next-generation browser engine Servo.

The Code

Now let’s talk a bit about how it all looks like. Apart from Rust’s standard library (for all the basics and file IO), what we also use are the url crate (Rust’s term for libraries) for parsing and constructing URLs, and the HTTP server/client crate called hyper.

On the C side we have all the boilerplate code for initializing a GStreamer plugin (plugin.c), which then directly calls into Rust code (lib.rs), which then calls back into C (plugin.c) for registering the actual GStreamer elements. The GStreamer elements themselves then have an implementation written in C (rssource.c and rssink.c), which is a normal GObject subclass of GstBaseSrc or GstBaseSink, but instead of doing the actual work in C it just calls into Rust code again. For that to work, some metadata is passed to the GObject class registration, including a function pointer to a Rust function that creates a new instance of the “machinery” of the element. This then implements the Source or Sink traits (similar to interfaces) in Rust (rssource.rs and rssink.rs):

pub trait Source: Sync + Send {
    fn set_uri(&mut self, uri_str: Option<&str>) -> bool;
    fn get_uri(&self) -> Option<String>;
    fn is_seekable(&self) -> bool;
    fn get_size(&self) -> u64;
    fn start(&mut self) -> bool;
    fn stop(&mut self) -> bool;
    fn fill(&mut self, offset: u64, data: &mut [u8]) -> Result<usize, GstFlowReturn>;
    fn do_seek(&mut self, start: u64, stop: u64) -> bool;
}

pub trait Sink: Sync + Send {
    fn set_uri(&mut self, uri_str: Option<&str>) -> bool;
    fn get_uri(&self) -> Option<String>;
    fn start(&mut self) -> bool;
    fn stop(&mut self) -> bool;
    fn render(&mut self, data: &[u8]) -> GstFlowReturn;
}

And these traits (plus a constructor) are in the end all that has to be implemented in Rust for the elements (rsfilesrc.rs, rsfilesink.rs and rshttpsrc.rs).
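
To give a feel for what that looks like in practice, here's a minimal, hypothetical Source implementation serving a fixed in-memory buffer (the real file and HTTP sources are in the repository; the GstFlowReturn variant name used below is an assumption):

struct MemSource {
    uri: Option<String>,
    data: Vec<u8>,
}

impl Source for MemSource {
    fn set_uri(&mut self, uri_str: Option<&str>) -> bool {
        self.uri = uri_str.map(String::from);
        true
    }

    fn get_uri(&self) -> Option<String> {
        self.uri.clone()
    }

    fn is_seekable(&self) -> bool {
        true
    }

    fn get_size(&self) -> u64 {
        self.data.len() as u64
    }

    fn start(&mut self) -> bool {
        true
    }

    fn stop(&mut self) -> bool {
        true
    }

    // Copy up to data.len() bytes, starting at offset, into the output slice
    fn fill(&mut self, offset: u64, data: &mut [u8]) -> Result<usize, GstFlowReturn> {
        let off = offset as usize;
        if off >= self.data.len() {
            return Err(GstFlowReturn::Eos); // assumed variant name
        }
        let n = std::cmp::min(data.len(), self.data.len() - off);
        data[..n].clone_from_slice(&self.data[off..off + n]);
        Ok(n)
    }

    fn do_seek(&mut self, _start: u64, _stop: u64) -> bool {
        // Nothing to do here: fill() is already random access
        true
    }
}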

If you look at the code, it's all still a bit rough around the edges and missing many features (like actual error reporting back to GStreamer instead of printing to stderr), but it already works, and the actual implementations of the elements in Rust are rather simple and fun. Even the interfacing with C code is quite convenient at the Rust level.

How to test it?

First of all you need to get Rust and Cargo; check the Rust website or your Linux distribution for details. This was all tested with the stable 1.8 release. You also need GStreamer plus the development files; any recent 1.x version should work.

# clone GIT repository
git clone https://github.com/sdroege/rsplugin
# build it
cd rsplugin
cargo build
# tell GStreamer that there are new plugins in this path
export GST_PLUGIN_PATH=`pwd`
# this dumps the Cargo.toml file to stdout, doing all file IO from Rust
gst-launch-1.0 rsfilesrc uri=file://`pwd`/Cargo.toml ! fakesink dump=1
# this dumps the Rust website to stdout, using the Rust HTTP library hyper
gst-launch-1.0 rshttpsrc uri=https://www.rust-lang.org ! fakesink dump=1
# this basically implements the "cp" command and copies Cargo.toml to a new file called test
gst-launch-1.0 rsfilesrc uri=file://`pwd`/Cargo.toml ! rsfilesink uri=file://`pwd`/test
# this plays Big Buck Bunny via HTTP using rshttpsrc (it has a higher rank than any
# other GStreamer HTTP source currently and is as such used for HTTP URIs)
gst-play-1.0 http://download.blender.org/peach/bigbuckbunny_movies/big_buck_bunny_480p_h264.mov

What next?

The three implemented elements are not too interesting and were mostly an experiment to see how far we could get in a weekend. But the HTTP source, for example, could become useful in the long term once more features are implemented.

Also, in my opinion, it would make sense to consider using Rust for some categories of elements like parsers, demuxers and muxers, as traditionally these elements are rather complicated and have the biggest exposure to arbitrary data coming from untrusted sources.

And maybe in the very long term, GStreamer or parts of it can be rewritten in Rust. But that’s a lot of effort, so let’s go step by step to see if it’s actually worthwhile and build some useful things on the way there already.

For myself, the next step is going to be to implement something like GStreamer’s caps system in Rust (of which I already have the start of an implementation), which will also be necessary for any elements that handle specific data and not just an arbitrary stream of bytes, and it could probably be also useful for other applications independent of GStreamer.

Issues

The main problem with the current code is that all IO is synchronous. That is, if opening the file, creating a connection, or reading data from the network takes a bit longer, it will block until a timeout happens or the operation finishes one way or another.

Rust currently has no support for non-blocking IO in its standard library, and also no support for asynchronous IO. The latter is being discussed in this RFC though, but it probably will take a while until we see actual results.

While there are libraries for all of this, having to depend on an external library for it is not great, as code using different async IO libraries won't compose well. Without this, Rust is still missing one big and important feature that will definitely be needed for many applications, and the lack of it might hinder adoption of the language.

by slomo at May 17, 2016 11:47 AM

May 13, 2016

Bastien NoceraBlutella, a Bluetooth speaker receiver

(Bastien Nocera) Quite some time ago, I was asked for a way to use the AV amplifier (which has a fair bunch of speakers connected to it) in our living-room that didn't require turning on the TV to choose a source.

I decided to try and solve this problem myself, as an exercise rather than a cost saving measure (there are good-quality Bluetooth receivers available for between 15 and 20€).

Introducing Blutella



I found this pot of Nutella in my travels (in Europe, smaller quantities are usually in a jar that looks like a mustard glass, with straight sides) and thought it would be a perfect receptacle for a CHIP, to allow streaming via Bluetooth to the amp. I wanted to make a nice how-to for you, dear reader, but best laid plans...

First, the materials:
  • a CHIP
  • jar of Nutella, and "Burnt umber" acrylic paint
  • micro-USB to USB-A and jack 3.5mm to RCA cables
  • Some white Sugru, for a nice finish around the cables
  • bit of foam, a Stanley knife, a CD marker

That's around 10€ in parts (cables always seem to be expensive), not including our salvaged Nutella jar, and the CHIP itself (9$ + shipping).

You'll start by painting the whole of the jar, on the inside, with the acrylic paint. Allow a couple of days for it to dry; it'll be quite thick.

So, the plan that went awry. It turns out that the CHIP, with the cables plugged in, doesn't fit inside this 140g jar of Nutella. I also didn't make the holes in exactly the right place. The CHIP is tiny, but not small enough to rotate inside the jar without hitting the sides, and the groove to screw the cap on also has only one position.

Anyway, I pierced two holes in the lid for the audio jack and the USB charging cable, stuffed the CHIP inside, and forced the lid on so it clipped on the jar's groove.

I had nice photos with foam I cut to hold the CHIP in place, but the finish isn't quite up to my standards. I guess that means I can attempt this again with a bigger jar ;)

The software

After flashing the CHIP with Debian, I logged in, and launched a script which I put together to avoid either long how-tos, or errors when I tried to reproduce the setup after a firmware update and reset.

The script for setting things up is in the CHIP-bluetooth-speaker repository. There are a few bugs due to drivers, and lack of integration, but this blog is the wrong place to track them, so check out the issues list.
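
If you'd rather poke at the Bluetooth side by hand before running the script, the rough shape of it is standard bluetoothctl fare (a sketch of the interactive session; the script automates the equivalent, plus the audio routing):

# inside the interactive bluetoothctl shell
power on
agent on
default-agent
pairable on
discoverable on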

Apart from those driver problems, I found the integration between PulseAudio and BlueZ pretty impressive, though I wish there was a way for the speaker, when turned on again, to reconnect to the phone it last streamed from, as Bluetooth speakers and headsets do; that would remove one step from playing back audio.

by Bastien Nocera (noreply@blogger.com) at May 13, 2016 05:30 PM

May 12, 2016

Christian SchallerH264 in Fedora Workstation

(Christian Schaller)

So after a lot of work to put the policies and pieces in place, we are now giving Fedora users access to the OpenH264 plugin from Cisco.
Dennis Gilmore posted a nice blog entry explaining how you can install OpenH264 in Fedora 24.

That said, the plugin is of limited use today for a variety of reasons. The first is that it only supports the Baseline profile. For those not intimately familiar with H264 profiles: they are basically a way to define subsets of the codec. As you might guess from the name, the Baseline profile is pretty much at the bottom of the H264 profile list, and thus any file encoded with another profile will not work with it. The profile you need for most online videos is the High profile. If you encode a file using OpenH264, though, it will work with any decoder that can do Baseline or higher, which is basically all of them.
And there are some things using H264 Baseline, like WebRTC.
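
If you're not sure which profile a given file uses, GStreamer's gst-discoverer will tell you (the exact output format varies between versions; some-video.mp4 below is a placeholder for any local file):

# the profile appears in the stream info, e.g. "video: H.264 (High Profile)"
gst-discoverer-1.0 some-video.mp4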

But we realize that to make this a truly useful addition for our users we need to improve the profile support in OpenH264. Luckily we have Wim Taymans looking at the issue, and he will work with Cisco engineers to widen the range of profiles supported.

Of course, just adding H264 doesn’t solve the codec issue, and we are looking at ways to bring even more codecs to Fedora Workstation. There is a limit to what we can do there, but I do think we will have some announcements this year that will bring us a lot closer, and long term I am confident that efforts like the Alliance for Open Media will provide us a path to a future dominated by royalty-free media formats.

But for now thanks to everyone involved from Cisco, Fedora Release Engineering and the Workstation Working Group for helping to make this happen.

by uraeus at May 12, 2016 02:30 PM

May 09, 2016

Bastien NoceraBlog backlog, Post 2, xdg-app bundles

(Bastien Nocera)
I recently worked on creating an xdg-app bundle for GNOME Videos, aka Totem, so it would be built along with other GNOME applications, every night, and made available via the GNOME xdg-app repositories.

There's some functionality that's not working yet though:
  • No support for optical discs
  • The MPRIS plugin doesn't work as we're missing dbus-python (I'm not sure that the plugin will survive anyway, it's more suited to audio players, don't worry though, it's not going to be removed until we have made changes to the sound system in GNOME)
  • No libva/VDPAU hardware acceleration (which would require plugins, and possibly device access of some sort)
However, I created a bundle that extends the freedesktop runtime and contains gst-libav. We'll need to figure out a way to distribute it that doesn't cause problems for US hosts.

As we also have a recurring problem in Fedora with rpmfusion being out of date, and I sometimes need a third-party movie player to test things out, I put together an mpv manifest; mpv is the only MPlayer-like player with a .desktop file and a GUI when launched without any command-line arguments.

Finally, I put together a RetroArch bundle for research into a future project, which uncovered the lack of joystick/joypad support in the xdg-app sandbox.

Hopefully, those few manifests will be useful to other application developers wanting to distribute their applications themselves. There are some other bundles being worked on that can be used as examples, linked to in the Wiki.

by Bastien Nocera (noreply@blogger.com) at May 09, 2016 05:15 PM

May 05, 2016

Bastien NoceraBlog backlog, Post 1, Emoji

(Bastien Nocera) Short version


dnf copr enable hadess/emoji
dnf update cairo
dnf install eosrei-emojione-fonts



Long version

A little while ago, I was reading this article, called "Emoji: how do you get from U+1F355 to 🍕?", which said, and I reluctantly quote: "[...] and I don’t know what Linux does, but it’s probably black and white and who cares [...]".

Well. I care. And you probably do as well if your pizza slice above is black and white.

So I set out to check on the status of the patches by Behdad Esfahbod (or just "Behdad" as we know him) to add colour font support to cairo, which he presented at GUADEC in Gothenburg. They add support for "bitmap in font" as Android does, and as freetype supports.

It kind of worked, and Matthias Clasen reworked the patches a few times, completing the support. This is probably not the code that will eventually land in cairo, but it's a good enough base for people interested in contributing.

After that, we needed something to display using that feature. We ended up using the same font recommended in this article, the Emoji One font.


There's still plenty to be done to support emojis, even after the cairo support is merged. We'd need a way to input emojis (maybe Lalo Martins is listening), and support in a lot of toolkits other than GNOME (Firefox only supports the SVG-in-OTF format, WebKit, Chrome, LibreOffice don't seem to know about colour fonts either).

You can find more information about design interests in GNOME around Emoji on the Wiki.

Update: Behdad's presentation was in Gothenburg, not Strasbourg. You can also see the video on YouTube.

by Bastien Nocera (noreply@blogger.com) at May 05, 2016 02:47 PM

May 04, 2016

Arun RaghavanImprovements to PulseAudio’s Echo Cancellation

As we approach the PulseAudio 9.0 release, I thought it would be a good time to talk about one of the things I had a chance to work on, that landed in this cycle.

Old-time readers will remember the work I had done in the past on echo cancellation. If you’re unfamiliar with the concept, imagine a situation where you’re making a call from your phone or laptop. You don’t have a headset, so you use your device’s speaker and microphone. Now when the person on the other end speaks, their voice is played out of your speaker and captured by your mic. This means that they can now also hear what they’re saying, with some lag — this is called echo. If this has happened to you, you know how annoying and disruptive it can be.

Using Acoustic Echo Cancellation (AEC), PulseAudio is able to detect this in the captured input and remove the audio we recently played back. While doing this, we also run some other algorithms to enhance the captured input, such as noise suppression (great at damping out background and fan noise) and automatic gain control (AGC, which adjusts the mic volume so you are clearly audible). In addition to voice call use cases, this is also handy to have in other applications such as speech recognition (where you want the device to detect what a user is saying, while possibly playing out other sounds).

We don’t implement these algorithms ourselves in PulseAudio. The echo cancellation module — cunningly named module-echo-cancel — provides the infrastructure to plug in different echo canceller implementations. One of these that we support (and recommend) is based on Google’s WebRTC.org implementation, which includes an extremely capable set of voice processing algorithms.
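
If you haven't played with it before, loading the canceller looks roughly like this (a sketch; aec_method selects the WebRTC implementation, and source_name/sink_name are just naming conveniences):

# load the echo-cancel module with the WebRTC canceller
pactl load-module module-echo-cancel aec_method=webrtc source_name=ec_source sink_name=ec_sink
# point applications at the echo-cancelled devices
pactl set-default-source ec_source
pactl set-default-sink ec_sink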

This is a large code base, intended to support a full real-time communication stack, and we didn’t want to pick up all that code to include in PulseAudio. So what I did was make a copy of the AudioProcessing module, wrap it in an easy-to-package library, and use that from PulseAudio. Quite some time passed, and I didn’t get a chance to update that code until last October.

What’s New

The update brought us a number of things since the last one (5 years ago!):

  • The AGC module has essentially been rewritten. In practice, we see that it is slower to change the volume.

  • Voice Activity Detection (VAD) has also been split off into its own module and undergone significant changes.

  • Beamforming has been added, to allow you to use a set of microphones to be able to “point” your microphone array in a specific direction (more on this in a later post).

  • There is now an intelligibility enhancer for applying processing on the stream coming in from the far end (so you can hear the other side better). This feature has not been hooked up in PulseAudio yet.

  • There is a transient suppressor for when you’re on a laptop, and your microphone is picking up keystrokes. This can be important since the sound of the keystroke introduces sharp spikes or “transients” in the audio stream, which can throw off the echo canceller that works best with the frequency range of the human voice. This one seems to be work-in-progress, and not actually used yet.

In addition to this, I’ve also extended support in module-echo-cancel for performing cancellation on multiple channels. We are now able to deal with hardware that has any number of playback and capture channels (and they don’t even need to be equal), and we no longer have the artificial restriction of having to downmix everything to mono.

These changes are in the newly released webrtc-audio-processing v0.2. Unfortunately, we do break API with regards to the previous version. I wrote about this a while back, and hopefully the impact on other users of this library will be minimal.

All this work was made possible thanks to Aldebaran Robotics. A special shout-out to Julien Massot and his excellent team!

These features are already in our master branch, and will be part of the 9.0 release. If you’re using these features, let me know how things work for you, and watch out for a follow up post about beamforming.

If you or your company are looking for help with either PulseAudio or GStreamer, do take a look at the consulting services I currently provide.

by Arun at May 04, 2016 08:04 AM