April 23, 2014

GStreamer: GNonLin and GStreamer Editing Services 1.2.1 stable release

(GStreamer)

The GStreamer team is pleased to announce a new release of the stable 1.2 release series of GStreamer Editing Services and GNonLin.

Check out the GES release notes here or download tarballs from here.

Check out the GNonLin release notes here or download tarballs from here.

April 23, 2014 02:17 PM

April 22, 2014

Zeeshan Ali: What's coming in Maps 3.14 and beyond

(Zeeshan Ali)
Jonas has written a very nice blog post about the present and future of the Maps project. I definitely recommend reading it if you are interested in this project. Since he is not on planet.gnome yet (some policy about having some posts before applying to be added), I thought I'd share it here.

April 22, 2014 12:06 PM

April 21, 2014

Michael Sheldon: Deep Vision – State of the art computer vision for Ubuntu Touch

(Michael Sheldon)

Over the Easter weekend I finally got around to implementing a first prototype of an idea I’ve had for a while, which aims to bring some state of the art computer vision techniques to mobile devices.

Deep Vision uses the implementation of convolutional neural networks provided by libccv to classify images. So it'll try to figure out the principal object in an image you provide it with.
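
For a rough idea of what that looks like in code, here is a minimal C sketch modelled on libccv's own ImageNet sample; the exact calls and the pre-trained network file are assumptions based on libccv's documentation, not Deep Vision's actual sources:

#include <ccv.h>
#include <stdio.h>

int
main (int argc, char **argv)
{
  ccv_dense_matrix_t *image = 0;
  ccv_convnet_t *convnet;
  ccv_array_t *ranks = 0;
  int i;

  /* Load the image to classify (argv[1]) */
  ccv_read (argv[1], &image, CCV_IO_ANY_FILE | CCV_IO_RGB_COLOR);

  /* Load a pre-trained ImageNet convnet (argv[2]), e.g. the
   * sample classification database mentioned below */
  convnet = ccv_convnet_read (0, argv[2]);

  /* Ask for the 5 most likely classes for this image */
  ccv_convnet_classify (convnet, &image, 1, &ranks, 5, 1);

  for (i = 0; i < ranks->rnum; i++)
    {
      ccv_classification_t *c = (ccv_classification_t *)
          ccv_array_get (ranks, i);
      printf ("class %d, confidence %f\n", c->id, c->confidence);
    }

  ccv_array_free (ranks);
  ccv_matrix_free (image);
  ccv_convnet_free (convnet);
  return 0;
}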

At the moment it just has a sample classification database from the ImageNet project, containing 1000 assorted items, however in the future I’d like to see specific classifiers for different tasks (e.g. a classifier trained purely on different plants, so when you’re out for a hike and you want to know what something is you can just point your phone at it and find out.)

Unlike something like Google Goggles, it does all the classification on the phone itself, without needing to upload the image to any external services.

The video below provides a quick demo of it in action and you can also grab a click package here to play with it yourself: http://mikeasoft.com/~mike/com.mikeasoft.deepvision_0.1.3_armhf.click

Source code can be found at: https://launchpad.net/deepvision

It was just hacked together over the weekend, so it’s still a little rough in places but all the core functionality should work reasonably well :).

Video of Deep Vision

by Mike at April 21, 2014 05:58 PM

Thibault Saunier: gst-validate: A suite of tools to run integration tests for GStreamer

(Thibault Saunier)

Collabora Ltd. has been developing a tool that allows GStreamer developers to check that the GstElements they write behave the way they are supposed to: GstValidate. The tool was first created to give plugin developers a way to check that they use the framework properly. Since the beginning, it has been available in gst-devtools, a gst module where we collect a set of tools to facilitate GStreamer development and usage.

Well, what is it about?

The GstValidateMonitor

Basically gst-validate allows us to monitor everything that is happening inside a GstPipeline. For example, if you want to check that every component of a pipeline is behaving properly, you can create a GstValidatePipelineMonitor that will track that pipeline. Then each time a GstElement is added to the pipeline, a GstValidateElementMonitor will be instantiated and start tracking that element, and when a GstPad is added to that GstElement, a GstValidatePadMonitor will start monitoring that pad.

This monitoring logic allows us to check that what those elements do respects the rules GStreamer components have to follow so that all the elements of a pipeline can interact together properly. For example, a GstValidatePadMonitor will make sure that if we receive a GstSegment from upstream, an equivalent segment is sent downstream before any buffer gets out. You can find the whole list of currently implemented tests here.

The GstValidateRunner

Then there is an issue reporting system, so that each issue found during the execution of the pipeline is reported with as much detail as possible, letting users understand what the detected misbehaviour is about and fix it efficiently.

In terms of code, the only thing to do in order to get a pipeline monitored is:

#include <gst/gst.h>
#include <gst/validate/validate.h>

int
main (int argc, gchar ** argv)
{
  GstElement *pipeline;
  GstValidateRunner *runner;
  GstValidateMonitor *monitor;

  int ret = 0;

  /* Initialize GStreamer and GstValidate */
  gst_init (&argc, &argv);
  gst_validate_init ();

  /* Create the pipeline and make sure it is
   * monitored */
  pipeline = gst_pipeline_new ("monitored-pipeline");
  runner = gst_validate_runner_new ();
  monitor = gst_validate_monitor_factory_create (
      GST_OBJECT (pipeline), runner, NULL);

  /* HERE you can do anything you want with that
   * monitored pipeline */

  /* Now print the errors on stdout. The return
   * value of that function is != 0 if critical
   * errors occurred during the execution of the
   * pipeline */
  ret = gst_validate_runner_printf (runner);

  /* Cleanup */
  gst_object_unref (pipeline);
  gst_object_unref (monitor);
  gst_object_unref (runner);

  return ret;
}

The result of gst_validate_runner_printf will look something like:

issue : buffer is out of the segment range Detected on theoradec0.srcpad at 0:00:00.096556426

Details : buffer is out of segment and shouldn't be pushed. Timestamp: 0:00:25.000 - duration: 0:00:00.040 Range: 0:00:00.000 - 0:00:04.520
Description : buffer being pushed is out of the current segment's start-stop range. Meaning it is going to be discarded downstream without any use

Here we can see that an issue occurred on the src pad of theoradec, as it output a buffer that was not inside the last segment it pushed. This is an interesting piece of information and would clearly show an error in the element. (Note: this issue was made up for the example; it does not actually exist in theoradec.)

How should it be used?

GstValidate command line tools

In order to make gst-validate simple to use, we created command line tools that let plugin developers test their elements in many use cases from a high-level perspective.

The gst-validate pipeline launcher

This is a command line pipeline launcher similar to gst-launch. The tool uses the gst-launch pipeline description syntax and makes sure that the pipeline is monitored, so that users get all the information reported by the GstValidate infrastructure. As you would expect, you can monitor the playback of a media file using playbin as follows:

gst-validate-1.0 playbin uri=file:///.../file.ogg

You will then be able to see all the issues GstValidate found.

The gst-validate-transcoding tool

A command line tool for testing media file transcoding with a straightforward syntax. For example, you can transcode any media file to Vorbis and VP8 in a WebM container by doing:

gst-validate-transcoding-1.0 \
    file:///./file.ogg \
    file:///.../transcoded.webm \
    -o 'video/webm:video/x-vp8:audio/x-vorbis'

It will report what issues happened during the execution of that pipeline.

The gst-validate-media-check tool

A command line tool that checks that media file discovery works properly with gst-discoverer. Basically, it needs a reference text file containing valid information about a media file (which can be generated with the same tool), and it then checks that this information matches what gst-discoverer reports over new runs. For example, given that we have a valid reference.media_info file, we can run:

gst-validate-media-check-1.0 \
  file:///./file.ogv \
  --expected-results reference.media_info

It will then output any errors it found, and return an exit code different from 0 if an error was detected.

GstValidateScenarios

As you may have noticed, those tools only let us test the execution of static pipelines; they do not check that the pipeline reacts properly to end-user actions such as seeking, changing the pipeline state, etc. To make that possible and easy to use, we introduced the concept of scenarios.

A scenario is a set of actions to be executed on the monitored pipeline, serialized in a text file. An action (GstValidateAction) is just a function call executed at a precise time (usually based on the playback position of the pipeline).

An example of scenario:

# Some metadata describing the scenario.
# The format is the GstStructure serialization
# syntax.
description, seek=true, duration=3.0

# When the pipeline reaches 1.0 second of
# playback it launches an accurate flushing
# seek to 10.0 seconds
seek, playback_time=1.0, start=10.0, flags=accurate+flush

# Send EOS to the pipeline
# so it stops and the application
# knows about it.
eos, playback_time=12.0

You can find more examples of scenarios here.

gst-validate-launcher

This all looks fine, but shouldn't those tests be executed automatically, on a large number of samples and with the various existing scenarios? This is where gst-validate-launcher comes into play. It is basically a Python application that launches the tools described above with the proper parameters, monitors them, checks their results, and serializes those results into a JUnit XML file (more formatters could obviously be implemented). The tool is pretty simple: it is only a matter of setting the media samples to run the tests on and choosing which scenarios you want to run.

Where is it used?

GStreamer Editing Services

As part of the GStreamer Editing Services project, I have made sure that ges-launch (a command line tool that launches timelines, video editing projects, etc.) works with GstValidate when compiled against it. This means that we can launch scenarios and test GES sharing the same infrastructure as the rest of GStreamer. It is also very interesting to be able to monitor dynamic pipelines (within GES the GstPipeline changes very dynamically) to discover element misbehaviour in that stressful use case. For the time being we do not have many GES-specific GstValidateActions implemented, but we will implement more as needed (mostly timeline editing actions, i.e. moving clips around the timeline, changing effect properties, etc.). As part of the Pitivi fundraiser, we are also investigating how to serialize user actions in Pitivi so that we can easily reproduce user issues outside of the application (and thus outside the Python interpreter).

The GStreamer jenkins integration server

On the GStreamer continuous integration server, we are running gst-validate-launcher on a set of media samples in major formats after (almost) each commit on any of the components of the GStreamer stack. This is just the beginning; we are slowly adding more tests, making sure they pass, and tracking regressions.

A lot of work has been done around that tool. We still need to clean up some parts, review the few APIs we have, and a particular effort has to be made around the documentation. But now that a good basis is there, we should just keep adding more tests to detect regressions in GStreamer as soon as possible. If you are interested in using that tool, please come talk to us in #gstreamer on freenode!

by thiblahute at April 21, 2014 10:02 AM

April 20, 2014

GStreamer: GStreamer Core and Plugins 1.2.4 stable release

(GStreamer)

The GStreamer team is pleased to announce a new release of the stable 1.2 release series. The 1.2 series adds new features on top of the 1.0 series and is part of the API- and ABI-stable 1.x release series of the GStreamer multimedia framework.

Binaries for Android, iOS, OS X and Windows are also provided.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, or gst-libav, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, or gst-libav.

Check the release announcement mail for details and the release notes above for a list of changes.

April 20, 2014 01:00 AM

April 18, 2014

GStreamer: Orc 0.4.19 bug-fix release

(GStreamer)

The GStreamer team announces another maintenance bug-fix release of liborc, the Optimized Inner Loop Runtime Compiler. It contains:

  • Fix out-of-tree builds
  • Fix many memory leaks, compiler warnings and coverity warnings
  • Documentation fix for mulhsw, mulhuw

Direct tarball download: orc-0.4.19.

April 18, 2014 11:00 AM

April 17, 2014

Bastien Nocera: What is GOM¹

(Bastien Nocera) Under that name is a simple idea: making it easier to save, load, update and query objects in an object store.

I'm not the main developer for this piece of code, but contributed a large number of fixes to it, while porting a piece of code to it as a test of the API. Much of the credit for the design of this very useful library goes to Christian Hergert.

The problem

It's possible that you've already implemented a data store inside your application, hiding your complicated SQL queries in a separate file because they contain injection security issues. Or you've used the filesystem as the store and thrown away the ability to search particular fields without loading everything in memory first.

Given that SQLite pretty much matches our use case - it offers good search performance, it's a popular and thus well-documented project, and its files can be manipulated through a number of first-party and third-party tools - wrapping its API to make it easier to use is probably the right solution.

The GOM solution

GOM is a GObject based wrapper around SQLite. It will hide SQL from you, but still allow you to call to it if you have a specific query you want to run. It will also make sure that SQLite queries don't block your main thread, which is pretty useful indeed for UI applications.

For each table, you would have a GObject, a subclass of GomResource, representing a row in that table. Each column is a property on the object. To add a new item to the table, you would simply do:

item = g_object_new (ITEM_TYPE_RESOURCE,
                     "column1", value1,
                     "column2", value2,
                     NULL);
gom_resource_save_sync (item, NULL);
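
Reading items back goes through GOM's query API. Here is a minimal sketch of a lookup, assuming the same hypothetical ITEM_TYPE_RESOURCE and a GomRepository that has already been opened and migrated elsewhere (check the GOM documentation for the exact calls):

GError *error = NULL;
GValue value = G_VALUE_INIT;
GomFilter *filter;
GomResourceGroup *group;

/* Find every item whose column1 equals "some-value" */
g_value_init (&value, G_TYPE_STRING);
g_value_set_string (&value, "some-value");
filter = gom_filter_new_eq (ITEM_TYPE_RESOURCE, "column1", &value);
g_value_unset (&value);

group = gom_repository_find_sync (repository, ITEM_TYPE_RESOURCE,
                                  filter, &error);

/* Results are fetched lazily; pull in the ones you need */
gom_resource_group_fetch_sync (group, 0,
                               gom_resource_group_get_count (group),
                               &error);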

We have a number of features which try to make it as easy as possible for application developers to use gom, such as:
  • Automatic table creation for string, string arrays, and number types as well as GDateTime, and transformation support for complex types (say, colours or images).
  • Automatic database version migration, using annotations on the properties ("new in version")
  • Programmatic API for queries, including deferred fetches for results
Currently, the main cost in terms of lines of code, when porting from raw SQLite, is the verbosity of declaring properties with GObject, as the sketch below illustrates. That will hopefully be fixed by the GProperty work planned for the next GLib release.
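
To give an idea of that verbosity, a single string column drags in the usual GObject property boilerplate in the resource subclass. This is a generic GObject sketch (the ItemResource type and its priv structure are hypothetical), not GOM-specific API:

enum { PROP_0, PROP_COLUMN1, LAST_PROP };

static void
item_resource_set_property (GObject *object, guint prop_id,
                            const GValue *value, GParamSpec *pspec)
{
  ItemResource *self = ITEM_RESOURCE (object);

  switch (prop_id) {
    case PROP_COLUMN1:
      g_free (self->priv->column1);
      self->priv->column1 = g_value_dup_string (value);
      break;
    default:
      G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
  }
}

static void
item_resource_class_init (ItemResourceClass *klass)
{
  GObjectClass *object_class = G_OBJECT_CLASS (klass);

  object_class->set_property = item_resource_set_property;
  /* ... plus a matching get_property implementation ... */

  g_object_class_install_property (object_class, PROP_COLUMN1,
      g_param_spec_string ("column1", "Column1",
                           "The column1 field of the table",
                           NULL, G_PARAM_READWRITE));
}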

The future

I'm currently working on some missing features to support a port of the grilo bookmarks plugin (support for column REFERENCES).

I will also be making (small) changes to the API to allow changing the backend from SQLite to another one, such as XML or a binary format. Obviously the SQL "escape hatches" wouldn't be available with those backends.

Don't hesitate to file bugs if there are any problems with the API or its documentation, especially with respect to porting from applications already using SQLite directly. Or if there are bugs (surely not).

Note that JavaScript support isn't ready yet, due to limitations in gjs.

¹: « SQLite don't hurt me, don't hurt me, no more »

by Bastien Nocera (noreply@blogger.com) at April 17, 2014 09:36 AM

April 16, 2014

Christian Schaller: Preparing the ground for the Fedora Workstation

(Christian Schaller)

Things are moving forward for the Fedora Workstation project. For those of you who don't know about it, it is part of a broader plan to refocus Fedora around 3 core products, with a clear and distinctive use case for each. The goal here is to have a clear definition of what Fedora is, and to have something that, for instance, ISVs can clearly identify and target with their products. At the same time, it is trying to move away from the traditional distribution model, a model where you primarily take whatever comes your way from upstream, apply a little duct tape to try to keep things together, and ship it. That model was good in the early years of Linux's existence, but it does not seem a great fit for what people want from an operating system today.

If we look at successful products like MacOS X, Playstation 4, Android and ChromeOS, the common thread between them is that while they were all built on top of existing open source efforts, they didn't just indiscriminately shovel in any open source code and project they could find. Instead they decided upon the product they wanted to make and then cherry-picked the pieces out there that could help them with that, developing themselves what they couldn't find perfect fits for. The same is to some degree true for things like Red Hat Enterprise Linux and Ubuntu. Both products, while based almost solely on existing open source components, have cherry-picked what they wanted and then developed the pieces they needed on top of them. For instance, for Red Hat Enterprise Linux its custom kernel has always been part of the value add offered: a Linux kernel with a core set of dependable APIs.

Fedora on the other hand has historically followed a path more akin to Debian, with a 'more the merrier' attitude, trying to welcome anything into the group. A metaphor often used in the Fedora community to describe this state was that Fedora was like a collection of Lego blocks: if you had the time and the interest, you could build almost anything with it. The problem with this state was that the products you built also ended up feeling like the creations you make with a random box of Lego blocks: a lot of pointy edges and some weird-looking sections, caused by having to work around the pieces you had available, as opposed to using the pieces most suited.

With the 3 products we are switching to a model where, although we start with that big box of Lego blocks, we add some engineering capacity on top of it, make some clear and hard decisions on direction, and actually start creating something that looks and feels like it was made to be a whole, instead of just assembled from a random set of pieces. So when we are planning the Fedora Workstation, we are not just looking at what features we can develop for individual libraries or applications like GTK+, Firefox or LibreOffice, but at what we want the system as a whole to look like. And maybe most importantly, we try our hardest to look at things from a feature/use-case viewpoint first, as opposed to a specific technology viewpoint. So instead of asking 'what features are there in systemd that we can expose/use in the desktop?', the question becomes 'what new features do we want to offer our users in future versions of the product, and what do we need from systemd, the kernel and others to be able to do that?'.

So while technologies such as systemd, Wayland, Docker and btrfs are on our roadmap, they are not there because they are 'cool technologies'; they are there because they provide us with the infrastructure we need to achieve our feature goals. And what's more, we make sure to work closely with the core developers to make the technologies what we need them to be. This means, for example, that between myself and other members of the team we are having regular conversations with people such as Kristian Høgsberg and Lennart Poettering, and of course contributing code where possible.

To explain our mindset with the Fedora Workstation effort, let me quickly summarize some old history. In 2001 Jim Gettys, one of the original creators of the X Window System, gave a talk at GUADEC in Seville called 'Draining the Swamp'. I don't think the talk can be found online anywhere, but he outlined some of the same thoughts in this email reply to Richard Stallman some time later. I think that presentation has shaped the thinking of the people who saw it ever since; I know it has shaped mine. Jim's core message was that the idea that we can create a great desktop system by trying to work around the shortcomings or weirdness in the rest of the operating system is a total fallacy. If we look at the operating system as a collection of 100% independent parts, all developing at their own pace and with their own agendas, we will never be able to create a truly great user experience on the desktop. Instead we need to work across the stack, fixing the issues we see where they should be fixed, and through that 'drain the swamp'. Because if we continued to try to solve the problems by adding layers upon layers of workarounds and abstraction layers, we would instead be growing the swamp, making it even more unmanageable. We are trying to bring that 'draining the swamp' mindset with us into creating the Fedora Workstation product.

With that in mind, what are the driving ideas behind the Fedora Workstation? The Fedora Workstation effort is meant to provide a first-class desktop for your laptop or workstation computer, combining a polished user interface with access to new technologies. We are putting a special emphasis on developers with our first releases, both looking at how we improve the desktop experience for developers, and looking at what tools we can offer developers to let them be productive as quickly as possible. And to be clear, when we say developers we are not only thinking about developers who want to develop for the desktop or the desktop itself, but any kind of software developer or DevOps person out there.

The full description of the Fedora Workstation can be found here, but the essence of our plan is to create a desktop system that not only provides some incremental improvements over how things are done today, but which truly tries to take a fresh look at how a Linux desktop operating system should operate. The traditional distribution model, built up around software packages like RPM or Deb, has both its pluses and minuses. Its biggest challenge is probably that it creates a series of fiefdoms where 3rd-party developers can't easily target the system, or a family of systems, except by spending time very specifically supporting each one. And even once a developer decides to commit to trying to support a given system, it is not clear what system services they can depend on always being available, or what human interface design they should aim for. Solving these kinds of issues is part of our agenda for the new workstation.

So to achieve this we have decided on a set of core technologies to build this solution upon. The central piece of the puzzle is the so-called LinuxApps proposal from Lennart Poettering. LinuxApps is currently a combination of high-level ideas and some concrete building blocks. The building blocks are technologies such as Wayland, kdbus, overlayfs and software containers. The ideas side includes developing a permission system, similar to what you see Android applications employ, to decide what rights a given application has, and developing defined, versioned library bundles that 3rd-party applications can depend on regardless of the version of the operating system. On the container side we plan on expanding on the work Red Hat is doing with Docker and Project Atomic.

In terms of some of the other building blocks, I think most of you already know of the big push we are doing to get the new Wayland display server ready. This includes work on developing core infrastructure like libinput, a new library for handling input devices being developed by Jonas Ådahl and our own Peter Hutterer. There is also a lot of work happening on the GNOME 3 side of things to make GNOME 3 Wayland-ready. Jasper St. Pierre wrote up a great blog entry outlining his work to make GDM and the GNOME Shell work better with Wayland. It is an ongoing effort, but there is a big community around it, as most recently seen at the West Coast Hackfest at the Endless Mobile office.

As I mentioned, there is a special emphasis on developers for the initial releases. This includes both small and big changes. For instance, we decided to put some time into improving the GNOME terminal application, as we know it is a crucial piece of technology for a lot of developers and system administrators alike. Some of the terminal improvements can be seen in GNOME 3.12, but we have more features lined up for the terminal, including the return of translucency. But we are also looking at the tools provided in general, and the great thing here is that we are able to build upon a lot of efforts that Red Hat is developing for the Red Hat product portfolio, like Software Collections, which gives easy access to a wide range of development tools and environments. Together with Developers Assistant, this should greatly enhance the developer experience in the Fedora Workstation. The inclusion of Software Collections also means that Fedora becomes an even better tool than before for developing software that you expect to deploy on RHEL: an identical software collection will be available on RHEL to the one you have been developing against on Fedora, since software collections ensure that you can have the exact same toolchain, and toolchain versions, available on both systems.

Of course creating a great operating system isn't just about the applications and shell, but also about supporting the kind of hardware people want to use. A good example here is that we put a lot of effort into HiDPI support. HiDPI screens are not very common yet, but a lot of the new high-end laptops coming out are already using them. Anyone who has used something like a Google Pixel or a Samsung Ativ Book 9 Plus has quickly come to appreciate the improved sharpness and image quality these displays bring. Thanks to the effort we put in there, I have been very pleased to see many GNOME 3.12 reviews recently mentioning this work and saying that GNOME 3.12 is currently the best Linux desktop for use with HiDPI systems.

Another part of the puzzle for creating a better operating system is software installation. The traditional distribution model often tended to bundle as many applications as possible, as there was no good way for users to discover new software for their system. This is a brute-force approach that assumes that if you checked the 'scientific researcher' checkbox, you want to install a random collection of 100 applications useful for 'scientific researchers'. To me this is a symptom of a system that does not provide a good way of finding and installing new applications. Thanks to the ardent efforts of Richard Hughes, we have a new Software Installer that keeps going from strength to strength. It was originally launched in Fedora 19, but as we move forward towards the first Fedora Workstation release, we are enabling new features and adding polish to it. One area where we need the wider Fedora community to work with us is to increase the coverage of appdata files. Appdata files essentially contain the necessary metadata for the installer to describe and advertise the application in question, including descriptive text and screenshots. Ideally upstreams should ship their own appdata file, but where they do not, we should add one to the Fedora package directly. Currently applications from the GTK+ and GNOME sphere have relatively decent appdata coverage, but we need more effort into getting applications using other toolkits covered too.
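
To give an idea, an appdata file is a small XML document roughly along these lines (an illustrative sketch only: the element names follow the appdata spec of the time and have evolved with AppStream since, so check the current documentation before writing one; the id, URLs and contact address are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<application>
  <id type="desktop">myapp.desktop</id>
  <licence>CC0</licence>
  <description>
    <p>A paragraph or two describing and advertising the application.</p>
  </description>
  <url type="homepage">http://example.org/myapp</url>
  <screenshots>
    <screenshot type="default">http://example.org/myapp/shot.png</screenshot>
  </screenshots>
  <updatecontact>someone@example.org</updatecontact>
</application>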

Which brings me to another item of importance to the workstation. The Linux community has, for natural reasons, been very technical in nature, which has meant that some things that on other operating systems are not even a question have become defining traits on Linux. The choice of GUI development toolkits is one of these; it has been a great tool used by the open source community to shoot ourselves in the foot for many years now. So while users of Windows or MacOS X probably never ask themselves what toolkit was used to implement a given application, it seems to be a frequently asked question for Linux applications. We want to move away from that with the Workstation. While we do ship the GNOME Shell as our interface, and use GTK+ for developing tools ourselves, including spending time evolving the toolkit itself, that does not mean we think applications written using, for instance, Qt, EFL or Java are evil and should be exorcised from the system. In fact, if an application developer wants to write an application for the Linux desktop at all, we greatly appreciate that effort, regardless of what tools they decide to use to do so. The choice of development toolkits is a choice meant to empower developers, not to create meaningless distinctions for the end user. So one effort we have underway is to work on the necessary theming and other glue code to make sure that if you run a Qt application under the GNOME Shell, it feels like it belongs there, which also extends to accessibility-related setups like the high contrast theme. We hope to expand upon that effort both in width and in depth going forward.

And maybe on a somewhat related note, we are also trying to address the elephant in the room when it comes to the desktop: the fact that the importance of the traditional desktop is decreasing in favor of the web. A lot of things that you used to do locally on your computer you are probably doing online these days, and a lot of the new things you have started doing on your computer or other internet-capable device are actually web services, as opposed to local applications. The old Sun slogan 'The Network is the Computer' is more true today than it has ever been. So we don't believe the desktop is dead in any way or form, as some of the hipsters in the media like to claim; in fact we expect it to stay around for a long time. What we do envision, though, is that the amount of time you spend on web apps will continue to grow, and that more and more of your computing tasks will be done using web services as opposed to local applications. Which is why we are continuing to deeply integrate the web into your desktop, be that through things like GNOME Online Accounts or the new web apps introduced in the Software installer. And as I have mentioned before on this blog, we are also still working on trying to improve the integration of Chrome and Firefox apps into the desktop along the same lines. So while we want the desktop to help you use the applications you want to run locally as efficiently as possible, we also realize that you, like us, are living in a connected world, and thus we need to give you easy access to your online life to stay relevant.

So there are of course a lot of other parts to the Fedora Workstation effort, but this has already turned into a very long blog post as it is, so I will leave the rest for later. Please feel free to post any questions or comments and I will try to respond.

by uraeus at April 16, 2014 01:57 PM

April 14, 2014

Bastien Nocera: JDLL 2014 report

(Bastien Nocera) The 2014 "Journées du Logiciel Libre" took place in Lyon, like (almost) every year, this past week-end. It's a two-day francophone free software event with talks and plenty of exhibitors from local Free Software organisations. I made the 600-metre trip to the venue, and helped man the GNOME booth with Frédéric Peters and Alexandre Franke's moustache.



Our demo computer was running GNOME 3.12, using Fedora 20 plus the GNOME 3.12 COPR repository which was working pretty well, bar some teething problems.

We kept the great GNOME 3.12 video running in Videos, showcasing the video websites integration, and regularly demo'd new applications to passers-by.

The majority of people we talked to were pretty impressed by the path GNOME has taken since GNOME 3.0 was released: the common design patterns across applications, the iterative nature of the various UI elements, the hardware integration or even the online services integration.

The stand-out changes for users were the Maps application, which, though still a bit bare-bones, impressed people, and the redesigned Videos.

We also spent time with a couple of users dispelling myths about the "lightness" of certain desktop environments or the "heaviness" of GNOME. We're constantly working on reducing resource usage in GNOME, be it sluggishness due to the way certain components work (with the applications binary cache), memory usage (cf. the recent gjs improvements), or battery usage (cf. my wake-up reduction posts). The use of gnome-shell on tablet-grade hardware for desktop machines shows that we can offer a good user experience on hardware that's not top-of-the-line.

Our booth was opposite the ones from our good friends from Ubuntu and Fedora, and we routinely pointed to either of those booths for people that were interested in running the latest GNOME 3.12, whether using the Fedora COPR repository or Ubuntu GNOME.

We found a couple of bugs during demos, and promptly filed them in Bugzilla, or fixed them directly. In the future, we might want to run a stable branch version of GNOME Continuous to get fixes for embarrassing bugs quickly (such as a crash when enabling Zoom in gnome-shell which made an accessibility enthusiast tut at us).


GNOME and Rhône

Until next year in sunny Lyon.

(and thanks Alexandre for the photos in this article!)

by Bastien Nocera (noreply@blogger.com) at April 14, 2014 06:47 PM

April 13, 2014

Zeeshan Ali: Location hackfest

(Zeeshan Ali)
I'm organising a hackfest in London from May 23 to 25, 2014. The plan is to improve our location-related components and to make them useful to other OSes: KDE, Jolla and hopefully also Ubuntu phone. If you are doing (or want to do) anything related to location and want to attend, please add yourself to the wiki page as soon as possible, so I can notify our hosts if we need a bigger room.

Oh and if you need a place to stay, do contact me!

I'm thankful to the awesome Mozilla folks for hosting this event and for providing an awesome open geolocation service to everyone.

April 13, 2014 04:06 PM

Sebastian Dröge: OpenGL support in GStreamer

(Sebastian Dröge)

Over the last few months Matthew Waters, Julien Isorce and, to a lesser degree, myself worked on integrating proper OpenGL support into GStreamer.

Previously there were a few sinks based on OpenGL (osxvideosink for Mac OS X and eglglessink for Android and iOS), but they all only allowed rendering to a window; they did not allow rendering a video into a custom texture that is then composited by the application into an OpenGL scene. And then there was gst-plugins-gl, which allowed more flexible handling of OpenGL inside GStreamer pipelines: uploading and downloading of video frames to the GPU, various filters and base classes to easily implement shader-based filters, infrastructure for sharing OpenGL contexts between different elements (even if they run in different threads), and also a video sink. The latter was now improved a lot, ported to all the new features for hardware integration, and finally merged into gst-plugins-bad. Starting with GStreamer 1.4 in a few weeks, OpenGL will be a first-class citizen in GStreamer pipelines.

After yesterday’s addition of EAGL support for iOS (EAGL is Apple’s iOS API for handling GLES contexts), nothing is missing anymore to use this new set of libraries and plugins on all platforms supported by GStreamer. And finally we can get rid of eglglessink, which was only ever meant as an intermediate solution until we had all the infrastructure for real OpenGL support.

by slomo at April 13, 2014 08:28 AM

April 12, 2014

David Schleef: Cryptographically Random Password Generator Bookmarklet

(David Schleef)

I looked at several password generator bookmarklets and extensions and decided they were all fundamentally broken (lack of real randomness, trivial for a web site to steal your master password, etc.), so I wrote my own. This bookmarklet, when activated, will use the browser crypto extensions to generate a cryptographically random string of 12 characters (letters, numbers, _ and -) and show it to you. Use this on a web page that needs a new password.

The bookmarklet. (Save the link as a bookmark; its javascript: URL is simply the URL-encoded form of the code below.)

Code for the curious:

javascript:(function () {
  var pw = '';
  var array = new Uint8Array(12);
  window.crypto.getRandomValues(array);
  for (var i = 0; i < 12; i++) {
    pw += ('0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ-_')[array[i]&63];
  }
  prompt('Random password:', pw);
})()

This bookmarklet should work on most recent browsers; if your browser is older, you clearly don't care about security anyway.

The recent bug related to lack of input verification (i.e., Heartbleed) has led me to look around for better password management, since I'm going to be changing passwords on almost all services. My system up to this point was to use Firefox for storing passwords, and to typically use the same or similar passwords for multiple services. Reusing passwords has always been a bad idea, but it was the zero-effort route. I like zero effort. A bookmarklet that generates a secure password is almost zero effort, and I'm going to continue to use Firefox to store passwords.

If you use Firefox to store passwords, remember to set a master password; otherwise your passwords are trivially usable and recoverable by anyone with access to your computer. You can get away with not doing that if you encrypt your hard drive and your computer auto-locks when idle or asleep, as I do.

by ds at April 12, 2014 11:41 PM

April 08, 2014

Jean-François Fortin Tam: The fundamental problem with our electoral system

Allow me to summarize the fundamental problem with our electoral system (first-past-the-post voting) with a simple diagram I created from the results of the 2014 Quebec election:

2014-03-07

The top bar shows what the people voted for. The bottom bar shows how those votes get turned into seats in the National Assembly. The result: we are once again stuck for four years with an "all-powerful" government that the majority of the electorate did not want.

As if that were not problematic enough in itself, this system inevitably leads to tactical voting (commonly known through the adage "a vote for anyone other than the top 2 candidates is really a vote for candidate #1").

Besides Canada, the same problem affects the United States, the United Kingdom and other underdeveloped countries.

And that is why, as long as we do not have a proportional voting system, Quebec will never be a country.

by nekohayo at April 08, 2014 04:22 AM

Jean-François Fortin Tam: How do you visually represent a project’s timeline?

Here is a fun example to illustrate why software development in general is a complex endeavour:

  1. You think you’re going to fix a tiny problem: “hey, maybe we could make Pitivi’s welcome dialog look a bit nicer”.
  2. Eventually, someone proposes a design or idea that looks interesting, and you realize that to truly achieve it, you should also implement an audacious new feature: a way to visually represent an entire timeline as a thumbnail (that one is an open question, by the way; if you have some clever ideas, feel free to share them)
  3. …and to display new feature B properly, you should also consider—ideally—being a good citizen and implementing feature C upstream, in the toolkit you use instead of doing your own thing in your corner.

This kind of serendipity and interdependence happens regularly in FLOSS applications like Pitivi, where we prioritize quality over “meeting shareholders’ deadlines and objectives”. This is why we sometimes take more time to flesh out a solution to a problem: we aim for the best user experience possible, all while negotiating and working with the greater software ecosystem we live in, instead of silently piling up hacks in our application… and we depend on the involvement of everyone for things to progress.

by nekohayo at April 08, 2014 01:47 AM

April 03, 2014

Bastien Nocera: XDG Summit: Day #4

(Bastien Nocera) During the wee hours of the morning, David Faure posted a new mime applications specification which will allow setting up per-desktop default applications, for example watching films in GNOME Videos under GNOME, but in DragonPlayer under KDE. Up until now, this was implemented differently in at least KDE and GNOME, even to the point that GTK+ applications would use the GNOME default when running on a KDE desktop, and vice-versa.

This is made possible using XDG_CURRENT_DESKTOP, as implemented in gdm by Lars. This environment variable will also allow implementing more flexible OnlyShowIn and NotShowIn desktop entry fields (especially for desktops like Unity implemented on top of GNOME, or GNOME Classic implemented on top of GNOME) and desktop-specific GSettings/dconf configurations (again, very useful for GNOME Classic). The environment variable supports applying custom configuration in sequence (first GNOME Classic, then GNOME, in that example).
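
For illustration, the value is meant to be a colon-separated list of desktop names checked in order, so a GNOME Classic session would export something along these lines (a sketch of the convention being discussed, not a quote from the final spec):

XDG_CURRENT_DESKTOP=GNOME-Classic:GNOME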

Today, Ryan and David discussed the desktop file cache, making it faster to access desktop file data without hitting scattered files. The partial implementation used a custom structure, but, after many kdbus discussions earlier in the week, Ryan came up with a format based on serialised GVariant, the same format as kdbus messages (but implementable without implementing a full GVariant parser).

We also spent quite a bit of time writing out requirements for a filesystem notification API to support some of the unloved desktop use cases. Those use cases are currently not supported by either inotify or fanotify.

That ends our face-to-face meeting. Ryan and David led a Lunch'n'Learn in the SUSE offices for engineers excited about better application integration in the desktops, irrespective of toolkits.

Many thanks to SUSE for the accommodation as well as hosting the meeting in sunny Nürnberg. Special thanks to Ludwig Nussel for the morning biscuits :)

by Bastien Nocera (noreply@blogger.com) at April 03, 2014 09:30 PM

Bastien Nocera: Freedesktop Hackfest: Day #3

(Bastien Nocera) Wednesday, Mittwoch. Half of the hackfest has now passed, and we've started to move onto other discussion items that were on our to-do list.

We discussed icon theme-related simplifications, especially for application developers and system integrators. As those changes would extend into the bundle implementation, being pretty close to an exploded-tree bundle, we chose to postpone this discussion so that the full solution includes things like .service/.desktop merges and Intents/Implements desktop keys.

David Herrmann helped me out with testing some Bluetooth hardware (which might have involved me trying to make Mario Strikers Charged work in a Wii emulator on my laptop ;)

We also discussed a full-fledged shared inhibition API, and we agreed that the best thing to do would be to come up with an API to implement at the desktop level. The desktop could then proxy that information to other session- and/or system-level implementations.

David Faure spent quite a bit of time cleaning up after my bad copy/pasted build system for the idle inhibit spec (I copied a Makefile with "-novalidate" as an option, and the XML file was full of typos and errors). He also fixed the KDE implementation of the idle inhibit to match the spec.

Finally, I spent a little bit of time getting kdbus working on my machine, as this seemed to trigger the infamous "hidden cursor bug" without fail on every boot. Currently wondering why gnome-shell isn't sending any events at all before doing a VT switch and back.

Due to the Lufthansa strike and the long journey times, tomorrow is going to be the last day of the hackfest for most of us.

by Bastien Nocera (noreply@blogger.com) at April 03, 2014 12:28 AM

April 02, 2014

Christian Schaller: GNOME 3.12 release comments

(Christian Schaller)

So the recent GNOME 3.12 release has gotten a very positive reception. Since I know that many members of my team have worked very hard on GNOME 3.12, I am delighted to see all the positive feedback the release is getting. And of course it doesn’t hurt that it gives us a flying start to the Fedora Workstation effort. Anyway, for the fun of it I tried putting together a set of press quotes, kinda like they tend to do for computer game advertisements.

  • “GNOME 3.12: Pixel perfect” “GNOME 3 has finally arrived” – The Register
  • “It is the GNOME release I have been waiting for” – Linux Action Show
  • “The Very Exciting GNOME 3.12 Has Been Released” – Phoronix.com
  • “…. a milestone feature update for users …” – eweek.com
  • “The design team has refined everything …” – omgubuntu.co.uk
  • “One of the big Linux desktops is updated” – TheInquirer
  • “High Resolution screens are best managed under Gnome 3.12″ – laptopspirit.fr
  • “One of the most striking innovations..” – Heise.de
  • “has resurrected what was once the darling of the Linux desktop” – TechRepublic.com

Some of the quotes might feel a little out of context, but as I said I did it for fun and if you end up spending time reading GNOME 3.12 articles to verify the quotes, then all the better ;)

Also you should really check out the nice GNOME 3.12 release video that can be found on the GNOME 3.12 release page.

Anyway, I plan on doing a blog post about the Fedora Workstation effort this week and will talk a bit about how GNOME 3.12 and later fits into that.

by uraeus at April 02, 2014 08:59 AM

April 01, 2014

Christian Schaller: Transmageddon 1.0 released!

(Christian Schaller)

It has been a long time in the making, but I have finally cut a new release of the Transmageddon transcoder application. The code inside Transmageddon has seen a major overhaul, as I have updated it to take advantage of new GStreamer APIs and features. New features in this release include:

  • Support files with multiple audio streams, allowing you to transcode them to different codecs or drop them from the new file
  • DVD ripping support. So now you can use your movie DVDs as input in Transmageddon. Be aware though that you need to install things like lsdvd and the GStreamer dvdread plugin from gst-plugins-ugly for it to become available, and you probably also want libdvdcss installed to be able to transcode most movie DVDs.
  • Another small feature of the release is that you can now set language information on files with one audio stream inside. I hope to extend this to also work with files that have multiple audio streams. If you rip a DVD with multiple audio streams Transmageddon will preserve the existing audio information, so in that case you shouldn’t need to set the language metadata manually.
  • Enabled VP9 support in the code.

There are some other smaller niceties too, like the use of blue default action buttons to match the GNOME 3 style better, and I also switched to a new icon designed by Jakub Steiner. There is also an appdata file now, which should make Transmageddon available in a nice way inside the new Fedora Software Installer.

Also there is now an Advanced section on the Transmageddon website explaining how you can create custom presets that allow you to do things like resize the video or change the bitrate of the audio.

And last, but not least here is a screenshot of the new version.
transmageddon-1.0-blue-button

You can download the new version from the Transmageddon website, I will update the version in Fedora soon.

by uraeus at April 01, 2014 07:43 PM

Bastien Nocera: Freedesktop Summit: Day #2

(Bastien Nocera) Today, Ryan carried on with writing the updated specification for startup notification.

David Faure managed to get Freedesktop.org specs updated on the website (thanks to Vincent Untz for some chmod'ing), and removed a number of unneeded items in the desktop file specification, with help from Jérôme.

I fixed a number of small bugs in shared-mime-info, as well as preparing for an 8-hour train ride.

Lars experimented with techniques to achieve a high score at 2048, as well as discussing various specifications, such as the possible addition of an XDG_CURRENT_DESKTOP envvar. That last suggestion descended into a full-room eye-rolling session, usually when xdg-open code was shown.

by Bastien Nocera (noreply@blogger.com) at April 01, 2014 05:22 PM

Bastien Nocera: XDG Hackfest: Day #1

(Bastien Nocera) I'm in Nürnberg this week for the Freedesktop Hackfest, aka the XDG Summit, aka the XDG Hackfest aka... :)

We started today with discussions about desktop actions and how to implement them, such as whether to show specific "Edit" or "Share" sub-menus. We decided that this could be implemented through specific desktop keys which a file manager could use. This wasn't thought to be generally useful enough to require a specification for now.

The morning stretched into a discussion of "splash screens". A desktop implementor running on low-end hardware is interested in having a placeholder window show up as soon as possible, in some cases even before the application has linked and the toolkit is available. The discussion descended into slightly edge-case territory, such as text editors launching either new windows or new tabs depending on a number of variables.

Specific implementation options were discussed after a nice burrito lunch. We've decided that the existing X11 startup notification would be ported to D-Bus, using signals instead of X messages. Most desktop shells would support both versions for a while. Wayland clients that want startup notification would be required to use the D-Bus version of the specification. In parallel, we would start passing workspace information along with the DESKTOP_STARTUP_ID envvar/platform data.

Jérôme, David and I cleared up a few bugs in shared-mime-info towards the end of the day.

Many thanks to SUSE for the organisation, and accommodation sponsorship.

Update: Fixed a typo

by Bastien Nocera (noreply@blogger.com) at April 01, 2014 09:26 AM

March 26, 2014

Bastien Nocera: My GNOME 3.12 in numbers

(Bastien Nocera) 1 new GNOME Videos, 1 updated Bluetooth panel, 2 new thumbnailers, 9 grilo sources, and 1 major UPower rework.

I'm obviously very attached to the GNOME Videos UI changes, the first major UI rework in its 12-year existence.


GNOME Videos watching itself

by Bastien Nocera (noreply@blogger.com) at March 26, 2014 09:55 PM

March 21, 2014

Jean-François Fortin Tam: Pitivi 0.93 released

Last week, a flash snowstorm brought me around 2ft of snow overnight. I thought, “If I’m going to clear that much snow, might as well have some fun and make a timelapse out of it”, and so I did. While watching it, I realized, “Hmm… that’s an interesting metaphor for the huge amount of preparatory and cleanup work we’ve been doing in the past few years”:

Today, we’re very happy to announce another great Pitivi release. It brings a truckload of bug fixes and refinements, which you can read about in the 0.93 release notes (prepared by yours truly). This release now brings us to a quality level where we can let go of the “alpha” status and call this a “beta”. Many nasty bugs are gone and people are increasingly relying on it for their own projects. Besides the video above, the 2014 fundraiser‘s video and the Pitivi showcase, I was quite pleased to see someone using Pitivi to easily make a nice video for a commercial booth at a technology tradeshow!

0.93 is the result of continued efforts in our spare time—occasional hacking during vacations, nights and week-ends. Just imagine what could be achieved if Mathieu and Thibault could be funded to work full-time towards bringing us to 1.0!

Update: you may also want to take a look at this blog post.

by nekohayo at March 21, 2014 04:48 AM

March 20, 2014

Christian Schaller: Update from GStreamer Hackfest at Google Office in Munich

(Christian Schaller)

To give the wider community a chance to see what happened during the GStreamer hackfest last weekend, I put together this blog post based on a summary written by Wim Taymans (so a big thanks to Wim for letting me reuse parts of his summary).

So last weekend 21 GStreamer hackers got together at the Google office in Munich to spend the weekend hacking on their favourite GStreamer bits. At this point in time we didn’t have any major basic plumbing tasks that needed tackling so the time was spent hacking on a lot of different projects and using the opportunity to discuss design and challenges with each other.

There were 3 of us attending from Red Hat and Fedora: Wim Taymans, Alberto Ruiz and myself.

With the release of GStreamer 1.0 in September 2012, we drastically changed the way memory is handled in the multimedia pipeline, and a large body of work still lies in exploring, improving and porting elements to this new memory model. We are also mostly working on improving the existing elements, with comparatively little new infrastructure work.

We’re also seeing a lot of people from different companies that contribute significant amounts of code to the official GStreamer repositories. This has traditionally been a much more closed effort with various pieces of code living in multiple repositories, especially for the hardware acceleration bits. It is good to see that the 1.0 series brings all these efforts together again with more coordination and a more coherent story.

HW acceleration

One of the large ongoing tasks is to improve our support for hardware accelerated decoding, effects and display. With 1.0 we can finally get this done cleanly and efficiently in very many use cases.

Matthew Waters flew in from Australia to move the gst-plugins-gl set of plugins to the core GStreamer plugins packages. He has been working on these plugins for a while now. Their goal is to use OpenGL to apply operations to the video, like rotating it on a cube or applying a shader. With the 1.0 memory management, it becomes possible to do this efficiently, with a minimal amount of texture uploads/downloads. More work is needed here; we can optimize things some more by delaying the work and running the shaders as part of the rendering operation.

Andoni Morales (Fluendo) has also been working on improving hardware acceleration on Android. He used some of the new features of 1.0 to make the Android codecs use zero-copy, by implementing the texture-upload metadata on buffers. This allows the video sink to efficiently create a texture from the decoded data for display. Andoni also ported winks, a video capture source for Windows, to GStreamer 1.0.

Nicolas Dufresne (Collabora) has been working on adding a new set of decoders based on the mem2mem API in v4l2. Not many drivers provide this API yet, but it is implemented in some Samsung Exynos SoCs. We would also like to support other m2m operations later, such as color conversion, but for that we need to make some of our base classes support the required asynchronous behaviour of mem2mem. The memory management in our v4l2 elements has gone through several iterations of improvements during the 1.0 cycle, but it still is not entirely what it should be; we agreed on what we should do to fix this in the near future. We also briefly discussed the need for a new event that can be used to reclaim memory from a pipeline; many elements that use hardware buffers need to free those before they can negotiate a new format with the hardware, so we need a way to make that possible.

Mathieu Bourron (Collabora) has been working on libva, the library for GPU-based video decoding and encoding on Intel hardware, and spent his time at the hackfest fixing up the SPU overlay element to enable hardware-accelerated subpicture overlays in the video sink. Traditionally GStreamer would use the CPU to overlay the subpictures (of a DVD, for example) on top of the video images. With new GL-based sinks and hardware-accelerated decoders, this is very undesirable, and it can be done much more efficiently as part of the final rendering. In 1.0 we have the infrastructure to delay this overlay operation by attaching extra metadata (with the subpicture) to the video images, for when the video sink knows how to overlay them. We have been doing this with subtitles in cluttersink and other sinks for a while now, and soon we can also do this with subpictures.

Plugin Hacking

Arun Raghavan, GStreamer hacker and PulseAudio maintainer, worked on disabling the audio and video filters in playbin when passthrough mode is selected. In passthrough mode, a video or audio sink can directly handle the encoded media (think a Bluetooth headset that can handle mp3 directly, or a hardware sink that takes encoded data). He expanded on that work in a blog entry.
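A rough sketch of what this could look like from application code, assuming the audio-filter property on playbin that this work adds (the property name comes from the patches discussed here; the media path is a placeholder):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

playbin = Gst.ElementFactory.make('playbin', None)
playbin.set_property('uri', 'file:///path/to/movie.mkv')  # hypothetical path

# Hand scaletempo to playbin instead of wiring it in ourselves; playbin
# can then leave the filter out when the sink accepts encoded data directly.
scaletempo = Gst.ElementFactory.make('scaletempo', None)
playbin.set_property('audio-filter', scaletempo)

playbin.set_state(Gst.State.PLAYING)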

As a cool hack, Arun also made a source element to read from torrent files, so you can watch a movie while you torrent it. He provides more information on that element in his blog; it is actually really cool.

Thiago Santos (Collabora) continued his work to improve the DASH demuxer, reworking the buffering code to make it buffer less and more smoothly. DASH is one of the new formats (with HLS and MSS) for streaming media over HTTP while adapting to bandwidth changes. On the server side, it makes media available in various bitrates, while a client switches between bitrates depending on its measured network conditions. Andoni Morales also worked on a new dashsink element that implements the server side of the DASH format.

Mathieu Duponchelle, a former GSoC student, was trying to improve support for seeking in MPEG transport streams in order to use them in PiTiVi. Seeking in MPEG TS is not an easy thing, because the format is really optimized for streaming only. He got help from Thibault Saunier (Collabora), who was also hacking on PiTiVi and who was preparing a new 1.2 release of gnonlin, GES and gst-python (which he released on Sunday). Mathieu is one of the developers able to work full time on PiTiVi now thanks to the PiTiVi fundraiser, so be sure to contribute to that!

Jan Schmidt (Centricular), a long-time GStreamer core hacker, was working on debugging some DVB issues and also ended up taking part in a lot of the general design and troubleshooting discussions happening during the hackfest, helping other people move forward with their projects.

Long-time GStreamer hacker Edward Hervey (Collabora) was planning to do a lot of DVB hacking, but had to give up on that effort when it became clear that Google had signal-isolated the office for security reasons, so there was no DVB signal in the Google office. Instead he worked on merging some pending DVB patches and implemented GAP support in the MPEG transport stream plugin. GAP support deals with streams that have long periods of no media (like missing audio for some time on a DVD); it makes sure that downstream elements keep processing the silence instead of waiting for more data.
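For reference, a GAP event is just a regular event carrying a start timestamp and a duration; a simplified Python sketch of how an element might announce a stretch of missing media (the timestamps are made up):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# A GAP event tells downstream "there is no data for this interval", so
# sinks keep the clock and processing going instead of stalling.
gap = Gst.Event.new_gap(5 * Gst.SECOND,  # where the gap starts
                        2 * Gst.SECOND)  # how long it lasts
# A real element would push this on its source pad in place of buffers:
# srcpad.push_event(gap)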

Applications

Meg Ford, a GSoC student mentored by Sebastian Dröge (Centricular), was working on GNOME Sound Recorder, fixing up the last bugs and preparing it for a new release.

I, Christian Schaller (Red Hat), was on a bug-fixing spree in Transmageddon (a transcoding application written in Python and GStreamer) and managed to reduce the number of known bugs to only one. I fixed that last bug once I got home, so now I just need to hammer at Transmageddon for a bit to make sure I caught all the corner cases, and then I can do a major new release with new features such as handling files with multiple audio streams, DVD ripping, VP9 encoding, setting audio stream language information, reduced decoding overhead for streams that we are going to throw away, and more. I also had help from Alberto Ruiz reviewing and cleaning up the Transmageddon code, freeing it from some ugly code that had survived many library updates and rewrites.

Alessandro Decina (Spotify) kept working on his patches to update the Firefox GStreamer backend to GStreamer 1.0. We hope to deploy this work in Fedora in the not too distant future. As a hack for the hackfest, he provided patches to implement audio and video capture.

Wim Taymans (Red Hat) was hacking on a new library that can parse and generate MIKEY messages (RFC 3830). He wants to use this in the GStreamer RTSP server to negotiate SRTP (Secure RTP) encryption parameters.

We had two people from the Swedish company AXIS, whose network cameras all run GStreamer and who contribute on a regular basis to the RTP and RTSP elements and libraries. Ognyan Tonchev was mostly writing unit tests for RTSP and multicast handling in the RTSP server. Sebastian Rasmussen had been hacking on our watchdog element and the payloaders.

Infrastructure

Long-time GStreamer hacker Stefan Sauer (Google) gave a demo of his idea for a tracing infrastructure in GStreamer. The idea is to place trace macros at strategic places that would send structured data to pluggable tracer modules. Some of the tracer modules could, for example, measure the CPU usage of a plugin or measure latency. The idea is to gradually replace our extensive (but unstructured) logging with this new trace infrastructure. This would allow us to do interesting new things, like sending the debug log to a remote machine or producing STF (Structured Trace Format) to analyse with standard tools. No immediate plans were made to merge this, but there seems to be very little resistance to getting it merged soon.

Core hacker Sebastian Dröge (Centricular) has been going over the current stream selection ideas. One of the long-outstanding issues is that of switching streams between different languages: you have a movie in different languages and you want to switch between them. To achieve low latency, old data should be kept around for the streams that are not currently selected, so that it can be quickly decoded and sent to the audio device. The idea is a combination of events to select a stream and having the demuxer seek back in the stream on switches. No final conclusion or plan that can solve all requirements has been reached yet.

Investigations have also begun into making decodebin deal with renegotiation. For example, when a new stream is selected, we might need to use a different decoder for it, but also when new input is received decodebin should be able to reconfigure itself. The decodebin code is a complicated beast, so any change to it should be done carefully.

GStreamer maintainer Tim-Philipp Müller (Centricular) spent his time merging the new device probing and monitoring API (written by Olivier Crête from Collabora) that had been sitting in Bugzilla for a while now. The purpose is to be able to probe devices and their capabilities, such as v4l2 and ALSA devices. It is also possible to be notified when devices appear and disappear in the system. An implementation for PulseAudio devices and another for v4l2 devices using gudev has been committed as well. This reimplements a feature that was in 0.10 but was cut from 1.0 because we were not happy with the old design. One of the complications there was that we had run out of bits in one of our enums, so we needed to find a good solution for that.
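As a rough illustration of the new API from Python (a minimal sketch; binding details were still settling when this was merged):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Watch only video capture devices (e.g. v4l2 webcams).
monitor = Gst.DeviceMonitor.new()
monitor.add_filter('Video/Source', None)
monitor.start()

# Devices that are present right now.
for device in monitor.get_devices():
    print(device.get_display_name(), '--', device.get_device_class())

# Hotplug notifications arrive as messages on the monitor's bus; an
# application would watch it for DEVICE_ADDED / DEVICE_REMOVED messages.
bus = monitor.get_bus()

monitor.stop()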

We briefly discussed how to implement the SKIP seek flag. This extra flag can be used when doing fast forward or reverse playback, and instructs the decoders that they are allowed to throw away data in order to perform the trick mode more efficiently (at reduced accuracy). We also discussed a prototype for AVI playback that I implemented once. We'll see if someone takes up the task to finalize this work and implement SKIP mode in more demuxers.
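The flag itself already exists in the seeking API; here is a sketch of a 2x fast-forward seek that grants decoders permission to skip (whether a given demuxer or decoder honours it is exactly the open work described above; the URI is a placeholder):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch('playbin uri=file:///path/to/movie.avi')
pipeline.set_state(Gst.State.PLAYING)
pipeline.get_state(Gst.CLOCK_TIME_NONE)  # wait for preroll before seeking

# Rate 2.0 with SKIP: decoders may drop data (e.g. decode only keyframes)
# to do the trick mode more cheaply, at reduced accuracy.
pipeline.seek(2.0, Gst.Format.TIME,
              Gst.SeekFlags.FLUSH | Gst.SeekFlags.SKIP,
              Gst.SeekType.SET, 0,
              Gst.SeekType.NONE, -1)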

I took some photos during the event to capture the spirit and put them on Google Plus for your viewing pleasure.

A big thank you to Google for hosting us and providing us with free lunch and free drinks through the weekend.

by uraeus at March 20, 2014 04:34 PM

March 19, 2014

Arun RaghavanIntroducing peerflixsrc

Some of you might have been following all the brouhaha over Popcorn Time. I won’t get into the arguments that can be made for and against at the moment.

While poking around at what it was that Popcorn Time was doing, I stumbled upon peerflix, a Node.js-based application that takes a .torrent file that points to one big video file, and presents that as an HTTP stream. It has its own BitTorrent implementation where it prioritises early chunks of the file so that it is possible to start watching the video before the entire file has been downloaded. It also seeds the file while the video is being watched locally.

Seeing as I was at the GStreamer Hackfest in Munich when this came up in discussions, it seemed topical to have a GStreamer element to wrap this neat bit of functionality. Thus was peerflixsrc born. This is a simple source element that takes a URI to a torrent file (something like torrent+http://archive.org/some/video.torrent), fires up peerflix in the background, and provides the data from the corresponding HTTP stream. Conveniently enough, this can be launched using playbin or Totem (hinting at the possibilities of what can come next!). Here’s what it looks like…

Screenshot of Totem playing a torrent file directly using peerflixsrc

The code is available now. To use it, build this copy of gst-plugins-bad in your favourite way, make sure you have peerflix installed (sudo npm install -g peerflix), and you're good to go.
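Once the plugin is installed, playback is just a playbin URI away; a minimal Python sketch using the placeholder torrent URL from above:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# playbin resolves the torrent+http URI through peerflixsrc, which fires
# up peerflix in the background and streams the resulting HTTP data.
player = Gst.ElementFactory.make('playbin', None)
player.set_property('uri', 'torrent+http://archive.org/some/video.torrent')
player.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()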

This is not quite mature enough to go into upstream GStreamer. The ugliest part is firing up a Node.js server to make it all work, not least because managing child processes on Linux is not the prettiest code you can write. Maybe someone wants to look at rewriting the torrent bits from peerflix in C? There don't seem to be any decent C-based libraries for this out there, though.

In the mean time, enjoy this, and comments / patches welcome!

by Arun at March 19, 2014 04:26 PM

Zeeshan AliBoxes 3.12

(Zeeshan Ali)
I just rolled out Boxes 3.11.92, which is going to become 3.12 in a week. Apart from lots of fixes and minor improvements, like the addition of keyboard shortcuts for improved accessibility, there are some noteworthy changes compared to 3.10:
  • Dropped use of clutter and clutter-gtk: While it was a good idea to mix GTK+ and Clutter at the beginning of the project to make most of the animations and transparency controls possible, GTK+ has gained new API over the last few years that makes most of what Boxes needs possible. So I decided to attempt to remove clutter* from the picture, and I'm glad to report that my attempt was a success. This means:

    • Fewer animations: Some of the animations we had are still not possible with GTK+ (at least not in any easy/nice way), so they had to be dropped, but they are nothing essential to how Boxes works and were only good for impressing first-time users. I'm talking about the box thumbnail flying around the window for transitions between different UI states.

    • More animations: Making use of new GTK+ API, we gained some nice animations for UI transitions that nicely make up for the dropped ones. Here is a video of Boxes 3.12, where you can see all these animations.



    • Simplified code: Removal of Clutter actors from the widget hierarchy also made it possible to simplify the hierarchy quite a bit. I also took the liberty of moving most of the UI setup to UI description files, separate from the rest of the code. So overall the code is a lot cleaner and therefore much easier to maintain; we removed clutter quite literally.

  • Ability to easily import existing VMs from system libvirt: Many people have been using virt-manager for years, and while they would like to use Boxes, not being able to easily reuse their existing VMs wasn't encouraging them to switch. Now we've fixed that.



  • NAT networking: Since the very beginning of the project, one complaint we kept getting was that the default network Boxes set up in VMs was slow and the VM was unreachable even from the host machine. This has finally been fixed: Boxes now sets up NAT networking in new VMs using the special bridge network provided by libvirt. This means that all VMs are on the same (host-private) network, so VMs and host can communicate with each other directly. It's also much faster than the 'user-mode' networking we have been using until now.
That's basically it for this release! Now some features I'd like to add in 3.14:

  • Import and export of VMs. Doing this properly will involve creating a new library that deals with OVF. I'd like it to be a library because there is at least one project (apart from Boxes) that can make use of it: gnome-continuous. Boxes already allows you to import the qcow2 images gnome-continuous produces, but since such an image does not provide information about the VM itself, you can very easily find yourself creating a broken VM from it. QXL breaking every other release and gnome-continuous tracking git master of that does not help at all here /rant. So if continuous provided OVF files instead of raw disk images, it could tell Boxes to use 'vga' rather than 'qxl' whenever QXL is known to be broken.

  • Support for express installation of many other OSes/distros, especially Debian/Ubuntu. The idea has been proposed for SoC and there is already one student who has applied for it.

  • Support multiple monitors in VMs.

  • Snapshots: You go for an OS update and it completely destroys your VM; what do you do? Snapshots will make it possible to save the VM state before you go for that OS update, so that if things go south you have a way to recover easily. What if you installed multiple updates at different times and you don't know which update caused the problem? Snapshots will also make it possible to save multiple checkpoints of your VM, so you can go back to any of them and use the one that was not broken. Pretty cool if you think about it, and it makes you wish you could do the same with life. :)

    This idea has also been proposed for SoC and there are two students who have already applied for it.

  • Downloading of ISOs and images (and also VMs, once we have the VM import feature in place). Currently you can't give Boxes the URL of a remote ISO or image and expect it to handle that. We need to fix that by automatically downloading the ISO/image for you. To make it even better, it would be nice to:

    • Autocomplete URLs as you type them in the wizard, using the list of known URLs in the libosinfo database. E.g. you type "Fed" and the URLs of all Fedora releases get proposed to you; you keep typing until "Fedora 19" and you already have a URL to hit enter on.

    • Provide a way to add entries to the ready-made menu you get in the wizard. This would allow us to provide a 'few clicks'™ way for users to try the latest GNOME unstable releases, and for distros to do something similar for bleeding-edge/development snapshots of their distro.

    This has also been proposed as the second part of the "Automated installation" SoC project I mentioned above.

March 19, 2014 04:07 PM

Arun RaghavanGStreamer Hackfest 2014

Last weekend, I was at the GStreamer Hackfest in Munich. As usual, it was a blast — we got much done, and it was a pleasure to meet the fine folks who bring you your favourite multimedia framework again. Thanks to the conference for providing funding to make this possible!

My plan was to work on making Totem's support for passthrough audio work flawlessly (think letting your A/V receiver decode AC3/DTS if it supports it, with more complex things coming in the future as we support them). We've had the pieces in place in GStreamer for a while now, and not having that just work with Totem has been a bit of a bummer for me.

The immediate blocker so far has been that Totem needs to add a filter (scaletempo) before the audio sink, which forces negotiation to always pick a software decoder. We solved this by adding the ability for applications to specify audio/video filters for playbin to plug in if it can. There’s a now-closed bug about it, for the curious. Hopefully, I’ll get the rest of the work to make Totem use this done soon, so things just work.

Now the reason that didn’t happen at the hackfest is that I got a bit … distracted … at the hackfest by another problem. More details in an upcoming post!

by Arun at March 19, 2014 03:46 PM

March 18, 2014

David SchleefMoved blog to Mezzanine

(David Schleef)

Wow! So python. Much django.

I moved my blog from Wordpress to Mezzanine, running on Django. Why? Because I've never really been happy with Wordpress. Because I've been working with Django a lot recently (Rdio runs on Django), and I've grown used to deploying a web site from a git repository.

by ds at March 18, 2014 05:40 AM

March 17, 2014

Andy Wingostack overflow

(Andy Wingo)

Good morning, gentle hackers. Today's article is about stack representation, how stack representations affect programs, what it means to run out of stack, and that kind of thing. I've been struggling with the issue for a while now in Guile and finally came to a nice solution. But I'm getting ahead of myself; read on for some background on the issue, and details on what Guile 2.2 will do.

stack limits

Every time a program makes a call that is not a tail call, it pushes a new frame onto the stack. Returning a value from a function pops the top frame off the stack. Stack frames take up memory, and as nobody has an infinite amount of memory, deep recursion could cause your program to run out of memory. Running out of stack memory is called stack overflow.

Most languages have a terrible stack overflow story. For example, in C, if you use too much stack, your program will exhibit "undefined behavior". If you are lucky, it will crash your program; if you are unlucky, it could crash your car. It's especially bad in C, as you know neither how much stack your functions use ahead of time, nor the stack limit imposed by the user's system, and the stack limit is often quite small relative to the total memory size.

Things are better, but not much better, in managed languages like Python. Stack overflow is usually assumed to throw an exception (though I couldn't find the specification for this), but actually making that happen is tricky enough that simple programs can cause Python to abort and dump core. And, like C, Python and most dynamic languages still have a fixed stack size limit that is usually much smaller than the heap.

Arbitrary stack limits would have an unfortunate effect on Guile programs. For example, the following implementation of the inner loop of map is clean and elegant:

(define (map f l)
  (if (pair? l)
      (cons (f (car l))
            (map f (cdr l)))
      '()))

However, if there were a stack limit, that would limit the size of lists that can be processed with this map. Eventually, you would have to rewrite it to use iteration with an accumulator:

(define (map f l)
  (let lp ((l l) (out '()))
    (if (pair? l)
        (lp (cdr l) (cons (f (car l)) out))
        (reverse out))))

This second version is sadly not as clear, and it also allocates twice as much heap memory (once to build the list in reverse, and then again to reverse the list). You would be tempted to use the destructive linear-update reverse! to save memory and time, but then your code would not be continuation-safe -- if f returned again after the map had finished, it would see an out list that had already been reversed. (If you're interested, you might like this little Scheme quiz.) The recursive map has none of these problems.

a solution?

Guile 2.2 will have no stack limit for Scheme code.

When a thread makes its first Guile call, a small stack is allocated -- just one page of memory. Whenever that memory limit would be reached, Guile arranges to grow the stack by a factor of two.

Ideally, stack growth happens via mremap, and ideally at the same address in memory, but it might happen via mmap or even malloc of another memory block. If the stack moves to a different address, we fix up the frame pointers. Recall that right now Guile runs on a virtual machine, so this is a stack just for Scheme programs; we'll talk about the OS stack later on.

Being able to relocate the stack was not an issue for Guile, as we already needed that ability to implement delimited continuations. However, relocation on stack overflow did cause some tricky bugs in the VM, as relocation could now happen at more places. In the end it was OK. Each stack frame in Guile has a fixed size and includes space to make any nested calls; check my earlier article on the Guile 2.2 VM for more. The entry point of a function handles allocation of space for the function's local variables, and that's basically the only point at which the stack can overflow. The few things that did need to point into the stack were changed to be an offset from the stack base instead of a raw pointer.

Even when you grow a stack by a factor of 2, that doesn't mean you immediately take up twice as much memory. Operating systems usually commit memory to a process on a page-by-page granularity, which is usually around 4 kilobytes. Once accessed, this memory is always a part of your process's memory footprint. However, Guile mitigates this memory usage here; because it has to check for stack overflow anyway, it records a "high-water mark" stack usage since the last garbage collection. When garbage collection happens, Guile arranges to return the unused part of the stack to the operating system (using MADV_DONTNEED), but without causing the stack to shrink. In this way, the stack can grow to consume up to all memory available to the Guile process, and when the recursive computation eventually finishes, that stack memory is returned to the system.

You might wonder, why not just allocate enormous stacks, relying on the kernel to page them in lazily as needed? The biggest part of the answer is that we need to still be able to target 32-bit platforms, and this isn't a viable strategy there. Even on 64-bit, whatever limit you choose is still a limit. If you choose 4 GB, what if you want to map over a larger list? It's admittedly extreme, given Guile's current GC, but not unthinkable. Basically, your stack should be able to grow as big as your heap could grow. The failure mode for the huge-stack case is also pretty bad; instead of getting a failure to grow your stack, which you can handle with an exception, you get a segfault as the system can't page in enough memory.

The other common strategy is "segmented stacks", but the above link covers the downsides of that in Go and Rust. It would also complicate the multiple-value return convention in Guile, where currently multiple values might temporarily overrun the receiver's stack frame.

exceptional situations

Of course, it's still possible to run out of stack memory. Usually this happens because of a program bug that results in unbounded recursion, as in:

(define (faulty-map f l)
  (if (pair? l)
      (cons (f (car l)) (faulty-map f l))
      '()))

Did you spot the bug? The recursive call to faulty-map recursed on l, not (cdr l). Running this program would cause Guile to use up all memory in your system, and eventually Guile would fail to grow the stack. At that point you have a problem: Guile needs to raise an exception to unwind the stack and return memory to the system, but the user might have throw handlers in place that want to run before the stack is unwound, and we don't have any stack in which to run them.

Therefore in this case, Guile throws an unwind-only exception that does not run pre-unwind handlers. Because this is such an odd case, Guile prints out a message on the console, in case the user was expecting to be able to get a backtrace from any pre-unwind handler.

runaway recursion

Still, this failure mode is not so nice. If you are running an environment in which you are interactively building a program while it is running, such as at a REPL, you might want to impose an artificial stack limit on the part of your program that you are building to detect accidental runaway recursion. For that purpose, there is call-with-stack-overflow-handler. You run it like this:

(call-with-stack-overflow-handler 10000
  (lambda ()              ; body
    (faulty-map (lambda (x) x) '(1 2 3)))
  (lambda ()              ; handler
    (error "Stack overflow!")))

→ ERROR: Stack overflow

The body procedure is called in an environment in which the stack limit has been reduced to some number of words (10000, in the above example). If the limit is reached, the handler procedure will be invoked in the dynamic environment of the error. For the extent of the call to the handler, the stack limit and handler are restored to the values that were in place when call-with-stack-overflow-handler was called.

Unlike the unwind-only exception that is thrown if Guile is unable to grow its stack, any exception thrown by a stack overflow handler might invoke pre-unwind handlers. Indeed, the stack overflow handler is itself a pre-unwind handler of sorts. If the code imposing the stack limit wants to protect itself against malicious pre-unwind handlers from the inner thunk, it should abort to a prompt of its own making instead of throwing an exception that might be caught by the inner thunk. (Overflow on unwind via inner dynamic-wind is not a problem, as the unwind handlers are run with the inner stack limit.)

Usually, the handler should raise an exception or abort to an outer prompt. However, if the handler does return, it should return a number of additional words of stack space to grant to the inner environment. A stack overflow handler may only ever "credit" the inner thunk with stack space that was available when the handler was instated. When Guile first starts, there is no stack limit in place, so the outer handler may allow the inner thunk an arbitrary amount of space, but any nested stack overflow handler will not be able to consume more than its limit.

I really, really like Racket's notes on iteration and recursion, but treating stack memory just like any other kind of memory isn't always what you want. It doesn't make sense to throw an exception on an out-of-memory error, but it does make sense to do so on stack overflow -- and you might want to do some debugging in the context of the exception to figure out what exactly ran away. It's easy to attribute blame for stack memory use, but it's not so easy for heap memory. And throwing an exception will solve the problem of too much stack usage, but it might not solve runaway memory usage. I prefer the additional complexity of having stack overflow handlers, as it better reflects the essential complexity of resource use.

os stack usage

It is also possible for Guile to run out of space on the "C stack" -- the stack that is allocated to your program by the operating system. If you call a primitive procedure which then calls a Scheme procedure in a loop, you will consume C stack space. Guile tries to detect excessive consumption of C stack space, throwing an error when you have hit 80% of the process' available stack (as allocated by the operating system), or 160 kilowords in the absence of a strict limit.

For example, looping through call-with-vm, a primitive that calls a thunk, gives us the following:

(use-modules (system vm vm))

(let lp () (call-with-vm lp))

→ ERROR: Stack overflow

Unfortunately, that's all the information we get. Overrunning the C stack will throw an unwind-only exception, because it's not safe to do very much when you are close to the C stack limit.

If you get an error like this, you can either try rewriting your code to use less stack space, or you can increase Guile's internal C stack limit. Unfortunately this is a case in which the existence of a limit affects how you would write your programs. The best thing is to have your code operate without consuming so much OS stack, by avoiding loops through C trampolines.

I don't know what will happen when Guile starts to do native compilation. Obviously we can't relocate the C stack, so lazy stack growth and relocation isn't a viable strategy if we want to share the C and Scheme stacks. Still, we need to be able to relocate stack segments for delimited continuations, so perhaps there will still be two stacks, even with native C compilation. We will see.

Well, that's all the things about stacks. Until next time, happy recursing!

by Andy Wingo at March 17, 2014 11:40 AM

March 16, 2014

GStreamerGst Python, GNonLin and GStreamer Editing Services 1.2.0 stable release

(GStreamer)

The GStreamer project is pleased to announce the very first release of the new API- and ABI-stable 1.x series of gst-python, GNonLin, and GStreamer Editing Services.

Check out the GES release notes here or download tarballs from here.

Check out the GNonLin release notes here or download tarballs from here.

Check out the gst-python release notes here or download tarballs from here.

March 16, 2014 01:10 AM

March 07, 2014

Andy Wingoes6 generator and array comprehensions in spidermonkey

(Andy Wingo)

Good news, everyone: ES6 generator and array comprehensions just landed in SpiderMonkey!

Let's take a quick look at what comprehensions are, then talk about what just landed in SpiderMonkey and when you'll see it in a Firefox release. Thanks to Bloomberg for sponsoring this work.

comprendes, mendes

Comprehensions are syntactic sugar for iteration. Unlike for-of, which processes its body for side effects, an array comprehension processes its body for its values, collecting them into a new array. Like this:

// Before (by hand)
var foo = (function(){
             var result = [];
             for (var x of y)
               result.push(x*x);
             return result;
           })();

// Before (assuming y has a map() method)
var foo = y.map(function(x) { return x*x });

// After
var foo = [for (x of y) x*x];

As you can see, array comprehensions are quite handy. They're expressions, not statements, and so their result can be passed directly to whatever code needs it. This can make your program more clear, because you aren't forced to give names to intermediate values, like result. At the same time, they work on any iterable, so you can use them on more kinds of data than just arrays. Because array comprehensions don't make a new closure, you can access arguments and this and even yield from within the comprehension tail.

Generator comprehensions are also nifty, but for a different reason. Let's look at an example first:

// Before
var bar = (function*(){ for (var x of y) yield y })();

// After
var bar = (for (x of y) y);

As you can see the syntactic win here isn't that much, compared to just writing out the function* and invoking it. The real advantage of generator comprehensions is their similarity to array comprehensions, and that often you can replace an array comprehension by a generator comprehension. That way you never need to build the complete list of values in memory -- you get laziness for free, just by swapping out those square brackets for the comforting warmth of parentheses.

Both kinds of comprehension can contain multiple levels of iteration, with embedded conditionals as you like. You can do [for (x of y) for (z of x) if (z % 2) z + 1] and all kinds of related niftiness. Comprehensions are almost always more concise than map and filter, with the added advantage that they are usually more efficient.

what happened

SpiderMonkey has had comprehensions for a while now, but only as a non-standard language extension you have to opt into. Now that comprehensions are in the draft ES6 specification, we can expose them to the web as a whole, by default.

Of course, the comprehensions that ES6 specified aren't quite the same as the ones that were in SM. The obvious difference is that SM's legacy comprehensions were written the other way around: [x for (x of y)] instead of the new [for (x of y) x]. There were also a number of other minor differences, which I'll list here for posterity:

  • ES6 comprehensions create one scope per "for" node -- not one for the comprehension as a whole.

  • ES6 comprehensions can have multiple "if" components, which may be followed by other "for" or "if" components.

  • ES6 comprehensions should make a fresh binding on each iteration of a "for", although Firefox currently doesn't do this (bug 449811). Incidentally, for-of in Firefox has this same problem.

  • ES6 comprehensions only do for-of iteration, not for-in iteration.

  • ES6 generator comprehensions always need parentheses around them. (The parentheses were optional in some cases for SM's old generator comprehensions.)

  • ES6 generator comprehensions are ES6 generators (returning {value, done} objects), not legacy generators (StopIteration).

I should note in particular that the harmony wiki is out of date, as the feature has moved into the spec proper: array comprehensions, generator comprehensions.

For another fine article on ES6 comprehensions, check out Ariya Hidayat's piece from about a year ago.

So, ES6 comprehensions just landed in SpiderMonkey today, which means it should be part of Firefox 30, which should reach "beta" in April and become a stable release in June. You can try it out tomorrow if you use a nightly build, provided it doesn't cause some crazy breakage tonight. As of this writing, Firefox will be the first browser to ship ES6 array and generator comprehensions.

colophon

I had a Monday of despair: hacking at random on something that didn't solve my problem. But I had a Tuesday morning of pleasure, when I realized that my Monday's flounderings could be cooked into a delicious mid-week bisque; the hack was obvious and small and would benefit the web as a whole. (Wednesday was for polish and Thursday on another bug, and Friday on a wild parser-to-OSR-to-assembly-and-back nailbiter; but in the end all is swell.)

Thanks again to Bloomberg for this opportunity to build out the web platform, and to Mozilla for their quality browser wares (and even better community of hackers).

This has been an Igalia joint. Until next time!

by Andy Wingo at March 07, 2014 09:11 PM