December 20, 2014

GStreamer: GStreamer Core, Plugins and RTSP server 1.4.5 stable release

(GStreamer)

The GStreamer team is pleased to announce a bugfix release of the stable 1.4 release series. The 1.4 series adds new features on top of the 1.2 series and is part of the API- and ABI-stable 1.x release series of the GStreamer multimedia framework. The 1.4.x bugfix releases only contain important bugfixes compared to 1.4.0.

Binaries for Android, iOS, Mac OS X and Windows are provided by the GStreamer project for this release. The Android binaries are now built with the r10c NDK and are as such binary-compatible again with all NDK and Android releases. Additionally, binaries are now provided for both Android ARMv7 and Android x86. This binary release features the first 1.4 releases of GNonLin and the GStreamer Editing Services.

The 1.x series is a stable series targeted at end users. It is not API or ABI compatible with the 0.10.x series. It can, however, be installed in parallel with the 0.10.x series and will not affect an existing 0.10.x installation.

The stable 1.4.x release series is API and ABI compatible with 1.0.x and any other 1.x release series in the future. Compared to 1.0.x it contains some new features and more intrusive changes that were considered too risky to include in a bugfix release.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, or gst-libav, or gst-rtsp-server.

Check the release announcement mail for details and the release notes above for a list of changes.

Also available are binaries for Android, iOS, Mac OS X and Windows.

December 20, 2014 03:17 PM

December 15, 2014

Phil Normand: Web Engines Hackfest 2014

(Phil Normand)

Last week I attended the Web Engines Hackfest. The event was sponsored by Igalia (who also hosted it), Adobe and Collabora.

As usual I spent most of the time working on the WebKitGTK+ GStreamer backend. Sebastian Dröge kindly joined and helped out quite a bit; make sure to read his post about the event!

We first worked on the WebAudio GStreamer backend: Sebastian cleaned up various parts of the code, including the playback pipeline and the source element we use to bridge the WebCore AudioBus with the playback pipeline. On my side I finished the AudioSourceProvider patch that had been sitting abandoned in Bugzilla for a few months (years). It’s an interesting feature to have, so that web apps can use the WebAudio API with raw audio coming from Media elements.

I also hacked on GstGL support for video rendering. It’s quite interesting to be able to share the GL context of WebKit with GStreamer! The patch is not ready for landing yet, but thanks to the reviews from Sebastian, Matthew Waters and Julien Isorce I’ll improve it and hopefully commit it soon to WebKit ToT.

Sebastian also worked on Media Source Extensions support. We had a very basic, non-working, backend that required… a rewrite, basically :) I hope we will have this reworked backend soon in trunk. Sebastian already has it working on Youtube!

The event was interesting in general, with discussions about rendering engines, rendering and JavaScript.

by Philippe Normand at December 15, 2014 04:51 PM

December 11, 2014

Sebastian Dröge: Web Engines Hackfest 2014

(Sebastian Dröge)

Over the last few days I attended the Web Engines Hackfest 2014 in A Coruña, which was kindly hosted by Igalia in their office. This gave me some time to work on WebKit again, and especially to improve and clean up its GStreamer media backend, and, even more importantly of course, an opportunity to meet all the great people working on it again.

Apart from various smaller and bigger cleanups, and getting 12 patches merged (thanks to Philippe Normand and Gustavo Noronha for reviewing everything immediately), I was working on improving the WebAudio AudioDestination implementation, which allows websites to programmatically generate audio via JavaScript and output it. This should be in a much better state now.

But the biggest chunk of work was my attempt at a cleaner and actually working reimplementation of the Media Source Extensions. The Media Source Extensions basically allow a website to provide a container or elementary stream for e.g. the video tag via JavaScript, and can for example be used to implement custom streaming protocols. This is still far from finished, but at least it already works on YouTube with their JavaScript DASH player and should give a good starting point for finishing the implementation. I hope to have some more time to continue this work, but any help would be welcome.

Next to the Media Source Extensions, I also spent some time on reviewing a few GStreamer related patches, e.g. the WebAudio AudioSourceProvider implementation by Philippe, or his patch to use GStreamer’s OpenGL library directly in WebKit instead of reimplementing many pieces of it.

I also took a closer look at Servo, Mozilla’s research browser engine written in Rust. It looks like a very promising and well designed project (both Servo and Rust, actually!). I’m sure I’ll play around with Rust in the near future, and will also try to make some time available to work a bit on Servo. Thanks also to Martin Robinson for answering all my questions about Servo and Rust.

And then there were of course lots of discussions about everything!

Good things are going to happen in the future, and WebKit still seems to be a very active project with enthusiastic people :)
I hope I’ll be able to visit next year’s Web Engines Hackfest again, it was a lot of fun.

by slomo at December 11, 2014 06:54 PM

December 09, 2014

Andy Wingo: state of js implementations, 2014 edition

(Andy Wingo)

I gave a short talk about the state of JavaScript implementations this year at the Web Engines Hackfest.


29 minutes, vorbis or mp3; slides (PDF)

The talk goes over a bit of the history of JS implementations, with a focus on performance and architecture. It then moves on to talk about what happened in 2014 and some ideas about where 2015 might be going. Have a look if that's a thing you are into. Thanks to Adobe, Collabora, and Igalia for sponsoring the event.

by Andy Wingo at December 09, 2014 10:29 AM

December 08, 2014

Bastien Nocera: A look at new developer features

(Bastien Nocera)

As the development window for GNOME 3.16 advances, I've been adding a few new developer features, selfishly, so I could use them in my own programs.

Connectivity support for applications

Picking up from where Dan Winship left off, we've merged support for applications to detect network availability, especially the "connected to a network but not to the Internet" case.

In glib/gio now, watch the value of the "connectivity" property in GNetworkMonitor.
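
For example, from Python via PyGObject; a minimal sketch, assuming the GLib that GNOME 3.16 will ship, where the property and the Gio.NetworkConnectivity enum are available:

import gi
gi.require_version('Gio', '2.0')
from gi.repository import Gio, GLib

monitor = Gio.NetworkMonitor.get_default()

def on_connectivity_changed(mon, _pspec):
    # LIMITED is the "connected to a network but not to the Internet"
    # case; FULL means we are really online.
    print('connectivity:', mon.get_connectivity())

monitor.connect('notify::connectivity', on_connectivity_changed)
GLib.MainLoop().run()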

Grilo automatic network awareness

This glib/gio feature allows us to show or hide Grilo sources in applications' source lists depending on whether the Internet or LAN access they require is available. This should be landing very soon, once we've made the new feature optional based on the presence of the new GLib.

Totem

And finally, this means we'll soon be able to show a nice placeholder when no network connection is available, and there are no channels left.

Grilo Lua resources support

A long-standing request, GResources support has landed for Grilo Lua plugins. When a script is loaded, we'll look for a separate GResource file with ".gresource" as the suffix, and automatically load it. This means you can use a local icon for sources with the URL "resource:///org/gnome/grilo/foo.png". Your favourite Lua sources will soon have icons!

Grilo Opensubtitles plugin

The developers affected by this new feature may be a group of one, but if the group is ever to expand, it's the right place to do it. This new Grilo plugin will fetch the list of available text subtitles for specific videos, given their "hashes", which are now exported by Tracker.

GDK-Pixbuf enhancements

I can point you to the NEWS file for the latest version, but the main gains are that GIF animations won't eat all your memory any more, that DPI metadata is now supported in the JPEG, PNG and TIFF formats, and that image viewers can now tell whether a TIFF file is multi-page, so they can open it in a more capable viewer.

Batched inserts, and better filters in GOM

Does what it says on the tin. This is useful for populating the database more quickly than through piecemeal inserts; it also means you don't need to chain inserts when inserting multiple items.

Mathieu also worked on fixing the priority of filters when building complex queries, as well as supporting more than 2 items in a filter ("foo OR bar OR baz" for example).

by Bastien Nocera (noreply@blogger.com) at December 08, 2014 10:55 PM

December 02, 2014

Andy Wingo: there are no good constant-time data structures

(Andy Wingo)

Imagine you have a web site that people can access via a password. No user name, just a password. There are a number of valid passwords for your service. Determining whether a password is in that set is security-sensitive: if a user has a valid password then they get access to some secret information; otherwise the site emits a 404. How do you determine whether a password is valid?

The go-to solution for this kind of problem for most programmers is a hash table. A hash table is a set of key-value associations, and its nice property is that looking up a value for a key is quick, because it doesn't have to check against each mapping in the set.

Hash tables are commonly implemented as an array of buckets, where each bucket holds a chain. If the bucket array is 32 elements long, for example, then keys whose hash is H are looked for in bucket H mod 32. The chain contains the key-value pairs in a linked list. Looking up a key traverses the list to find the first pair whose key equals the given key; if no pair matches, then the lookup fails.
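
In code, that lookup has roughly this shape (a sketch; the names are illustrative, not from any particular library):

# Bucket-and-chain lookup: buckets is a list of lists of (key, value)
# pairs, as described above.
def lookup(buckets, key):
    chain = buckets[hash(key) % len(buckets)]  # bucket H mod 32, say
    for k, v in chain:                         # walk the chain
        if k == key:                           # the equality test at issue below
            return v
    return None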

Unfortunately, storing passwords in a normal hash table is not a great idea. The problem isn't so much in the hash function (the hash in H = hash(K)) as in the equality function; usually the equality function doesn't run in constant time. Attackers can detect differences in response times according to when the "not-equal" decision is made, and use that to break your passwords.
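
The fix for the equality half is a comparator that does the same amount of work no matter where the mismatch is. A sketch of the usual XOR-accumulate shape (Python ships a vetted one as hmac.compare_digest):

# Constant-time equality for equal-length secrets: never returns early
# on the first differing byte.
def constant_time_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    acc = 0
    for x, y in zip(a, b):
        acc |= x ^ y  # accumulate differences without branching
    return acc == 0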

Edit: Some people are getting confused by my use of the term "password". Really I meant something more like "secret token", for example a session identifier in a cookie. I thought using the word "password" would be a useful simplification but it also adds historical baggage of password quality, key derivation functions, value of passwords as an attack target for reuse on other sites, etc. Mea culpa.

So let's say you ensure that your hash table uses a constant-time string comparator, to protect against the hackers. You're safe! Or not! Because not all chains have the same length, "interested parties" can use lookup timings to distinguish chain lookups that take 2 comparisons from those that take 1, for example. In general they will be able to determine the percentage of buckets for each chain length, and given the granularity will probably be able to determine the number of buckets as well (if that's not a secret).

Well, as we all know, small timing differences still leak sensitive information and can lead to complete compromise. So we look for a data structure that takes the same number of algorithmic steps to look up a value. For example, bisection over a sorted array of size SIZE will take ceil(log2(SIZE)) steps to find the value, independent of what the key is and also independent of what is in the set. At each step, we compare the key and a "mid-point" value to see which is bigger, and recurse on one of the halves.
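
A sketch of that bisection; the comparison count depends only on SIZE, though as the next paragraphs argue, the memory accesses still betray you:

def bisect_contains(sorted_keys, key):
    lo, hi = 0, len(sorted_keys)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_keys[mid] < key:  # the < itself must also be
            lo = mid + 1            # constant-time; see below
        else:
            hi = mid
    return lo < len(sorted_keys) and sorted_keys[lo] == key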

One problem is, I don't know of a nice constant-time comparison algorithm for (say) 160-bit values. (The "passwords" I am thinking of are randomly generated by the server, and can be as long as I want them to be.) I would appreciate any pointers to such a constant-time less-than algorithm. However a bigger problem is that the time it takes to access memory is not constant; accessing element 0 of the sorted array might take more or less time than accessing element 10. In algorithms we typically model access on a more abstract level, but in hardware there's a complicated parallel and concurrent protocol of low-level memory that takes a non-deterministic time for any given access. "Hot" (more recently accessed) memory is faster to read than "cold" memory.

Non-deterministic memory access leaks timing information, and in the case of binary search the result is disaster: the attacker can literally bisect the actual values of all of the passwords in your set, by observing timing differences. The worst!

You could get around this by ordering "passwords" not by their actual values but by their cryptographic hashes (e.g. by their SHA256 values). This would force the attacker to bisect not over the space of password values but of the space of hash values, which would protect actual password values from the attacker. You still leak some timing information about which paths are "hot" and which are "cold", but you don't expose actual passwords.

It turns out that, as far as I am aware, it is impossible to design a key-value map on common hardware that runs in constant time and is sublinear in the number of entries in the map. As Zooko put it, running in constant time means that the best case and the worst case run in the same amount of time. Of course this is false for bucket-and-chain hash tables, but it's false for binary search as well, as "hot" memory access is faster than "cold" access. The only plausible constant-time operation on a data structure would visit each element of the set in the same order each time. All constant-time operations on data structures are linear in the size of the data structure. Thems the breaks! All you can do is account for the leak in your models, as we did above when ordering values by their hash and not their normal sort order.

Once you have resigned yourself to leaking some bits of the password via timing, you would be fine using normal hash tables as well -- just use a cryptographic hashing function and a constant-time equality function and you're good. No constant-time less-than operator need be invented. You leak something on the order of log2(COUNT) bits via timing, where COUNT is the number of passwords, but since that's behind a hash you can't use it to bisect on actual key values. Of course, you have to ensure that the hash table isn't storing values in sorted order and short-cutting early. This sort of detail isn't usually part of the contract of stock hash table implementations, so you probably still need to build your own.

Edit: People keep mentioning Cuckoo hashing for some reason, despite the fact that it's not a good open-hashing technique in general (Robin Hood hashes with linear probing are better). Thing is, any operation on a data structure that does not touch all of the memory in the data structure in exactly the same order regardless of input leaks cache timing information. That's the whole point of this article!

An alternative is to encode your data structure differently, for example for the "key" to itself contain the value, signed by some private key only known to the server. But this approach is limited by network capacity and the appropriateness of copying for the data in question. It's not appropriate for photos, for example, as they are just too big.

Edit: Forcing constant-time on the data structure via sleep() or similar calls is not a good mitigation. This either massively slows down your throughput, or leaks information via side channels. Remote attackers can measure throughput instead of latency to determine how long an operation takes.

Corrections appreciated from my knowledgeable readers :) I was quite disappointed when I realized that there were no good constant-time data structures and would be happy to be proven wrong. Thanks to Darius Bacon, Zooko Wilcox-O'Hearn, Jan Lehnardt, and Paul Khuong on Twitter for their insights; all mistakes are mine.

by Andy Wingo at December 02, 2014 10:01 PM

December 01, 2014

Jan Schmidt: Network clock examples

(Jan Schmidt)

Way back in 2006, Andy Wingo wrote some small scripts for GStreamer 0.10 to demonstrate what was (back then) a fairly new feature in GStreamer – the ability to share a clock across the network and use it to synchronise playback of content across different machines.

Since GStreamer 1.x has been out for over 2 years, and we get a lot of questions about how to use the network clock functionality, it’s a good time for an update. I’ve ported the simple examples for API changes and to use the gobject-introspection based Python bindings and put them up on my server.

To give it a try, fetch play-master.py and play-slave.py onto 2 or more computers with GStreamer 1 installed. You need a media file accessible via some URI to all machines, so they have something to play.

Then, on one machine run play-master.py, passing a URI for it to play and a port to publish the clock on:

./play-master.py http://server/path/to/file 8554

The script will print out a command line like so:

Start slave as: python ./play-slave.py http://server/path/to/file [IP] 8554 1071152650838999

On another machine(s), run the printed command, substituting the IP address of the machine running the master script.

After a moment or two, the slaved machine should start playing the file in synch with the master:

Network Synchronised Playback

If they’re not in sync, check that you have the port you chose open for UDP traffic so the clock synchronisation packets can be transferred.
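
Under the hood there is not much to it: the master publishes its clock with a GstNet.NetTimeProvider, and each slave builds a GstNet.NetClientClock against it and reuses the master's base time. A condensed sketch of what the scripts do (not the scripts themselves; error handling and argument parsing omitted):

import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstNet', '1.0')
from gi.repository import Gst, GstNet

Gst.init(None)

# Master: pick a clock, publish it on UDP port 8554, and pin the
# pipeline's base time so it can be passed to the slaves.
pipeline = Gst.ElementFactory.make('playbin', None)
pipeline.set_property('uri', 'http://server/path/to/file')
clock = Gst.SystemClock.obtain()
pipeline.use_clock(clock)
provider = GstNet.NetTimeProvider.new(clock, None, 8554)
base_time = clock.get_time()
pipeline.set_base_time(base_time)
pipeline.set_start_time(Gst.CLOCK_TIME_NONE)  # keep the base time fixed
print('Start slave with base time', base_time)

# Slave: same pipeline setup, but slaved to the master's clock:
#   clock = GstNet.NetClientClock.new('net', master_ip, 8554, base_time)
#   pipeline.use_clock(clock)
#   pipeline.set_base_time(base_time)
#   pipeline.set_start_time(Gst.CLOCK_TIME_NONE)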

This basic technique is the core of my Aurena home media player system, which builds on top of the network clock mechanism to provide file serving and a simple shuffle playlist.

For anyone still interested in GStreamer 0.10 – Andy’s old scripts can be found on his server: play-master.py and play-slave.py

by thaytan at December 01, 2014 02:13 PM

Thiago Santos: AdaptiveDemux merged (finally)

The adaptivedemux base class has just been merged into gst-plugins-bad and, with it, the first port of our 3 adaptive demuxer elements has also landed. Dashdemux is now only a thin layer of code that handles the specifics of parsing and keeping an MPD representation, while the base class is the real adaptive client that does the fragment download and pushing, among other generic adaptive features. (More on the base class can be found here: http://blog.thiagoss.com/2014/09/01/adaptive-demuxer-baseclass/)

Various playback tests have been done this month but some corner cases may have escaped. So, if you test the new dashdemux (please do!) and happen to find a regression or any other issue, report it at http://bugzilla.gnome.org/enter_bug.cgi?product=GStreamer. Bugs reported for DASH will have a very high priority on my TODO list for the next weeks. There are already some bugs open with patches for the old dashdemux version, and I'm going to go over them and check whether the patches can be ported to the adaptivedemux base class so those get fixed for all 3 formats at once (base classes are great!).

As for the other 2 demuxers (SmoothStreaming and HLS), we should be merging the port of one of them to the base class by the end of this week if nothing seriously wrong is reported by then. The other one will be merged the following week. The rationale is that this allows some time for the code to sink in and for obvious bugs to be found and fixed before having to test the next one.

Again, please test with git master so we can have a solid support for adaptive formats by the upcoming 1.6 release.

EDIT: the mssdemux port was merged on 04/12/2014.

by Thiago Santos at December 01, 2014 03:29 AM

November 30, 2014

Thomas Vander Stichele: 23 years

(Thomas Vander Stichele)

(This post is only about music – for people not from Belgium, Luc de Vos, singer of Gorki, passed away yesterday at 52)

I am 15. I hear a song on the radio, and I don’t understand the lyrics. Why would you ask a piranha to devour you? Still, I’m intrigued. I’d only really gotten into music little by little. My earliest musical memory is hearing my parents’ record player playing ‘I want you’ by Bob Dylan. After that, it was my inexplicable arousal at seeing the ‘Hey You the Rock Steady Crew’ video in 1983 when I was 7, getting the Top Gun soundtrack on cassette (my first ever music purchase) in 1986, and watching the video for ‘I want your sex’ by George Michael in 1987 over and over on my recording of Veronica’s “Countdown”. At my confirmation (12 years old), when kids typically get some kind of bigger gift they’ve been dreaming of for a long time, I still chose a computer instead of a stereo.

I am 16, I just had my birthday. I am doing a summer job at my family’s company (which processes animal fat) and I am staying with my grandparents in Bavegem. With the money from my birthday I bought a portable stereo CD/cassette player for the incredible amount of 6000 BEF (or 150 euro as the kids would call it these days). I listen to nothing else for weeks on end. I can still hum the amazingly beautiful piano part that closes Mia from memory. It’s been my favorite song ever since.

I am 17, and learning the guitar. It turns out that Mia is quite complicated to get right, because of that perfect 3/4-5/4 tempo, or whatever you’d call it if you knew anything about music. It doesn’t help that I’m left-handed playing on a right-handed guitar, but I make the song my own. To this day though, I can still not play and sing it at the same time. There is something about the timing of how that third line starts before the music starts, where he sings ‘Mensen als ik’, that I just can’t figure out. It’s magic – it makes this song all the better.

I am 17, and Gorky is now Gorki, with completely new band members. I see them live for the first time, at ‘De Kring’ in Merelbeke, with my best friend Jeremy. I wish I had bought all the t-shirts that night – they had a different one for each of the new songs. The album sounds so different – parts of it recorded in Africa. I don’t listen to that album enough, but I still love playing Berejager on guitar, such a beautiful intro.

I am 17, and it’s my last year of boy scout before becoming a leader. I have a mini-JIN camp called JINTRO during the year, that ends with a party. I dance with a girl to Mia, and one minute into the dance she says, ‘no no, we’re not going to do a one-tile-dance for the rest of the night. Here’s how you do it’ and she teaches me two basic moves to make a slow dance more interesting. Thank you, Karlien, for changing my life.

I am 18, and we travel through Catalunya with the boy and girl scouts group I’m in, and a local Catalan group. This is one of the CDs we brought with us as a sample of our own culture. The Catalans love it – they say it sounds like Bruce Springsteen. I can see where they’re coming from. At the end of the two weeks, the guitar player of their group nails down a really good version of Mia (without the words of course).

I am 18, and have my first serious girlfriend. Mia is a song that runs through our history together – we must have danced to it at every party that played it (she messaged me yesterday that she immediately thought of me when she heard the news… just like I did of her). Back then, parties still had blocks of 3 slow songs every one or two hours. I miss that tradition… The moves that Karlien taught me put me well ahead of the pack of my fellow young adult males, and that paid off generously in the young adult females agreeing to dance with me at every party. (The theory of compounded interest clearly put in practice, now that I think of it)

I am 19, and one of my fellow boy scout leaders gives me an old demo cassette of Gorky. Among other things, it contains a cover of the Pixies’ “Monkey Gone to Heaven”, some of their songs that didn’t make their debut (but appeared on Boterhammen, like ‘Ik word oud’, or were turned into a b-side). It also contains the original version of Mia, as a fast-paced slurred-sung rocker. They made the right call slowing it down.

I am 21, and I have a radio show at a student radio I helped start up. I am too young to know how the world really works and just send out interview requests to managers and record labels for bands that I like. In those days, I got to interview my favorite band, The Afghan Whigs, as well as other bands like Everclear and The Sheila Divine. But we also managed to get Luc De Vos as a guest on our radio show, and Jeremy and I interviewed him in between songs for an hour. (That tape is at my parents’ place. I have an Excel sheet that tells me exactly which box it’s in, and I hope I can recover it next time I go to Belgium.) I tell him about that demo tape that I have, and he asks for a copy. A little after that, I bring him a copy of that practice tape, I put ‘Congregation’ by The Afghan Whigs on the other side (because I want one of my favorite bands to know another one of my favorite bands), and I go past his house to drop it off. (From the news report this weekend I hear he still lived in the same street, so I can only assume he was still living in the same house he’s lived in for the last 17 years).

I am 22, and Luc De Vos plays solo at the university somewhere, in an auditorium. I think it was one of the first times he ever did that. He probably already read out a column he wrote. But I remember how amazing he was by himself, what beautiful versions of these songs that I knew so well he played, songs that usually they didn’t play live because they were the slower ones. ‘Arme Jongen’, I remember him playing it there like it was yesterday.

I am 26, and I see him at various festivals, always there to either play or enjoy the music. I see him backstage with his son, recently born. He is walking around with some kind of elastic band tied around his waist that keeps his kid from running away more than ten meters from him, and it is hilarious to see in the backstage area.

Time starts moving quicker as I grow up, become an adult, and graduate from college. More and more albums. Every album still contained at least one killer song. ‘Leve de Lente’ still gives me goosebumps when those guitars crash in. ‘Vaarwel Lieveling’ is possibly his most underrated song – I don’t think I’ve ever heard that one played live. ‘Ode an die freude’, ‘We zijn zo jong’, ‘Duitsland wint altijd’ – I love the sound of resignation he has in his voice, like a deep sigh put to music. That album came with a floppy disk (!) with the lyrics. ‘Het voorspel was moordend’, ‘Tijdbom’ – while the music came back to being a bit more conventional, the lyrics got more hermetically sealed. I must admit that I slowly lost track – having moved to Barcelona at some point, it was much harder to catch them live of course. I know their first five albums the best, and while I still bought the others (having missed only one), none of them had the luxury of not having any other album in my collection to compete with like their debut album had. But there is no denying that when they were great, they were still amazing. A song like ‘Veronica komt naar je toe’ managed to pull together so many different things. The title was a recurring slogan of a Dutch channel that was popular among young people in Belgium, for lack of a Belgian alternative. Here’s a great song, with a great chorus, and his ability to sample just this one sentence to evoke a memory of youth every one of my generation remembers (while it evoked at the same time my personal memory of seeing ‘I want your sex’ on Veronica). And then he manages to evoke such a common feeling everyone has, where you are trying to grab that fleeting thing you were thinking just a second ago, straddling typically complicated-to-phrase words in Dutch with effortless ease – ‘Wat was het nu ook alweer/dat ik wou doen/het was iets belangrijks’ (or ‘What was it again/I wanted to do/it was something important’). In the beginning, his lyrics were quirky in ideas, but fairly straightforward in their phrasing. Further on in their career, they experimented quite a bit musically, but especially the lyrics could get complicated, and with exceptional and inventive phrasing.

I’m 31, and I live in Barcelona, but I travel back to Belgium because Gorki is playing their debut album, Gorky. I wrote about that concert back then, but that memory is still strong. I can’t believe that was 7 years ago…

I always enjoyed reading his columns in Zone 09 whenever I was in my hometown, I thought he had a great gift for writing. I noticed just now he left behind quite a few more books than I had, so I started tracking those down. So many of my memories have his music attached to it. His was the first band that opened me up to a wider range of music, away from the mainstream (not everybody would agree I guess, but I never considered them mainstream. Their debut album certainly was different enough from whatever was considered mainstream at the time, and as often happens this debut was only widely recognized several albums into their career later, while at the same time those later albums never really got the same kind of traction.)

I loved his way of looking at the world, the way he described it in music, lyrics, writing, and interviews. Always with that cheeky look. Like so many of my generation, it now surprisingly turns out, his music was intertwined with my growing up. Here’s a man I was hoping would live long and make much more music, and grow old playing hundreds of songs in bars and clubs, but it wasn’t to be. He set out to be a successful rock singer, whether that was tongue-in-cheek or not, and by all accounts he achieved what he set out to do. And everything he did, he did it for the best of reasons. He did it for ‘a fistful of bonnekes’.


by Thomas at November 30, 2014 07:52 PM

November 27, 2014

Andy Wingo: scheme workshop 2014

(Andy Wingo)

I just got back from the US, and after sleeping for 14 hours straight I'm in a position to type about stuff again. So welcome back to the solipsism, France and internet! It is good to see you on a properly-sized monitor again.

I had the enormously pleasurable and flattering experience of being invited to keynote this year's Scheme Workshop last week in DC. Thanks to John Clements, Jason Hemann, and the rest of the committee for making it a lovely experience.

My talk was on what Scheme can learn from JavaScript, informed by my work in JS implementations over the past few years; you can download the slides as a PDF. I managed to record audio, so here goes nothing:


55 minutes, vorbis or mp3

It helps to follow along with the slides. Some day I'll augment my slide-rendering stuff to synchronize a sequence of SVGs with audio, but not today :)

The invitation to speak meant a lot to me, for complicated reasons. See, Scheme was born out of academic research labs, and to a large extent that's been its spiritual core for the last 40 years. My way to the temple was as a money-changer, though. While working as a teacher in northern Namibia in the early 2000s, fleeing my degree in nuclear engineering, trying to figure out some future life for myself, for some reason I was recording all of my expenses in Gnucash. Like, all of them, petty cash and all. 50 cents for a fat-cake, that kind of thing.

I got to thinking "you know, I bet I don't spend any money on Tuesdays." See, there was nothing really to spend money on in the village besides fat cakes and boiled eggs, and I didn't go into town to buy things except on weekends or later in the week. So I thought that it would be neat to represent that as a chart. Gnucash didn't have such a chart but I knew that its reports were implemented in Guile, as part of this wave of Scheme consciousness that swept the GNU project in the nineties, and that I should in theory be able to write it myself.

Problem was, I also didn't have internet in the village, at least then, and I didn't know Scheme and I didn't really know Gnucash. I think what I ended up doing was just monkey-typing out something that looked like the rest of the code, getting terrible errors but hey, it eventually worked. I submitted the code, many years ago now, some of the worst code you'll read today, but they did end up incorporating it into Gnucash and to my knowledge that report is still there.

I got more into programming, but still through the back door, so to speak. I had done some free software work before going to Namibia, on GStreamer, and wanted to build a programmable modular synthesizer with it. I read about Supercollider, and decided I wanted to do something like that but with the "unit generators" defined in GStreamer and orchestrated with Scheme. If I knew then that Scheme could be fast, I probably would have started on an entirely different course of things, but that did at least result in gainful employment doing unrelated GStreamer things, if not a synthesizer.

Scheme became my dominant language for writing programs. It was fun, and the need to re-implement a bunch of things wasn't a barrier at all -- rather a fun challenge. After a while, though, speed was becoming a problem. It became apparent that the only way to speed up Guile would be to replace its AST interpreter with a compiler. Thing is, I didn't know how to write one! Fortunately there was previous work by Keisuke Nishida, jetsam from the nineties wave of Scheme consciousness. I read and read that code, mechanically monkey-typed it into compilation, and slowly reworked it into Guile itself. In the end, around 2009, Guile was faster and I found myself its co-maintainer to boot.

Scheme has been a back door for me for work, too. I randomly met Kwindla Hultman-Kramer in Namibia, and we found Scheme to be a common interest. Some four or five years later I ended up working for him with the great folks at Oblong. As my interest in compilers grew, and it grew as I learned more about Scheme, I wanted something closer there, and that's what I've been doing in Igalia for the last few years. My first contact there was a former Common Lisp person, and since then many contacts I've had in the JS implementation world have been former Schemers.

So it was a delight when the invitation came to speak (keynote, no less!) the Scheme Workshop, behind the altar instead of in the foyer.

I think it's clear by now that Scheme as a language and a community isn't moving as fast now as it was in 2000 or even 2005. That's good because it reflects a certain maturity, and makes the lore of the tribe easier to digest, but bad in that people tend to ossify and focus on past achievements rather than future possibility. Ehud Lamm quoted Nietzsche earlier today on Twitter:

By searching out origins, one becomes a crab. The historian looks backward; eventually he also believes backward.

So it is with Scheme and Schemers, to an extent. I hope my talk at the conference inspires some young Schemer to make an adaptively optimized Scheme, or to solve the self-hosted adaptive optimization problem. Anyway, as users I think we should end the era of contorting our code to please compilers. Of course some discretion in this area is always necessary but there's little excuse for actively bad code.

Happy hacking with Scheme, and au revoir!

by Andy Wingo at November 27, 2014 05:48 PM

November 14, 2014

Andy Wingo: on yakshave, on color, on cosines, on glitchen

(Andy Wingo)

Hold on to your butts, kids, because this is epic.

on yaks

As in all great epics, our prideful, stubborn hero starts in a perfectly acceptable state of things, decides on a lark to make a small excursion, and comes back much much later to inflict upon you pictures from his journey.

So. I have a web photo gallery but I don't take many pictures these days. Dealing with photos is a bit of a drag, and the ways that are easier like Instagram or what-not give me the (peer, corporate, government: choose 3) surveillance hives. So, I had vague thoughts that I should update my web gallery. Yakpoint 1.

At the same time, my web gallery was written for mod_python on the server, and I don't like hacking in Python any more and kinda wanted to switch away from Apache. Yakpoint 2.

So I rewrote the server-side part in Scheme. (Yakpoint 3.) It worked fine but I found I needed the ability to get the dimensions of files on the server, so I wrote a quick-and-dirty JPEG parser. Yakpoint 4.

I needed EXIF data as well, as the original version displayed EXIF data, and for that I used a binding to libexif that I had written a few years ago when I thought about starting this project (Yakpoint -1). However I found some crashers in the library, because it had never really been tested in production, and instead of fixing them I said "what the hell, I'll just write an EXIF parser". (Yakpoint 5.) So I did and adapted the web gallery to use it (Yakpoint 6, for the adaptation.)

At this point, I looked back, and looked forward, and looked all around, and all was good, but what was with this uneasiness I was feeling? And indeed, I hadn't actually made anything better, and I wasn't taking more photos, and the workflow was the same.

I was also concerned about the client side of things, which was still in Python and using some breakage-prone legacy libraries to do the photo scaling and transformations and what-not, and relied on a desktop application (f-spot) of dubious future. So I started to look at what it would take to port that script to Scheme (Yakpoint 7). Well it used some legacy libraries to copy files over SSH (gnome-vfs; switching away from that would be Yakpoint 8) and I didn't want to make a Scheme GIO binding (Yakpoint 9, narrowly avoided), and I then -- and then, dear reader -- so then I said "well WTF my caching story on the server is crap anyway, I never know when the sqlite database has changed or not so I never know what responses I can cache, what I really want is a functional datastore" (Yakpoint 10), which is what I have with Git and Tekuti (Yakpoint of yore), and so why not just store my photos in Git like I do in Tekuti for blog posts and serve them from there, indexing as needed? Of course I'd need some other server software (Yakpoint of fore, by which I meantersay the future), but then I could just git push to update my photo gallery, and I wouldn't have to endure the horror that is GVFS shelling out to ssh in a FUSE daemon (Yakpoint of ne'er).

So. After mulling over these thoughts for a while I decided, during an autumnal walk on the Salève in which we had the greatest views of Mont Blanc everrrrr and yet where are the photos?, that really what I needed was new photo management software, not just a web gallery. I should be able to share photos from my phone or from my desktop, fix them up either place, tag and such, and OK woo hoo! Such is the future! And the present for many people? Thing is, I also needed good permissions management (Yakpoint what, 10 I guess?), because you know a dude just out of college is not the same as that dude many years later. Which means serving things over HTTPS (Yakpoints 11-47) in such a way that the app has some good control over who gets what.

Well. Anyway. My mind ran ahead, and runs ahead, and yet we haven't actually tasted the awesome sauce yet. So! The photo management software, wherever it lives, needs to rotate photos at least, and scale them down to a few resolutions. I smell a yak! I looked at jpegtran which can do some lossless rotations but it's not available as a library, which is odd; and really I don't like shelling out for core program functionality, because every time I deal with the file system it's the wild west of concurrent mutation. If naming things is one of the two hardest problems in computer science, the file system is the worst because you have to give a global name to every intermediate value.

At the same time to scale images, what was I to do? Make a binding to libjpeg? Well I started (Yakpoint 48) but for reals kids, libjpeg is not fun. It works great and is really clever but

  1. it's approximately impossible to use from a dynamic ffi; you want a compiler to verify that you are using the right structure definitions

  2. there has been an inane ABI and format break imposed by the official IJG libjpeg which other implementations have not followed; but how could you know which one you are using?

  3. the error handling facility encourages longjmp in C programs; somewhat terrifying

  4. off-heap image manipulation libraries always interact poorly with GC, because the GC only sees the small pointer to the off-heap image, and so doesn't GC often enough

  5. I have zero guarantee that libjpeg won't change ABI in weird ways, and I don't want to touch this software for the next 10 years

  6. I want to do jpegtran-like lossless transformations, but that's not available as a library, and it's totes ridics that binding libjpeg does not help you out here

  7. it's still an unsafe C library, battle-tested yes, but terrifyingly unsafe, and I'd be putting it on my server and who knows?

Friends, I arrived at the pasture, and I, I chose the yak less shaven. I took my lame JPEG parser and turned it into a full decoder (Yakpoint 49), realized it wasn't much more work to do an encoder (Yakpoint 50), and implemented the lossless transformations (Yakpoint 51).

on haters

Before we go on, I know some people would think "what is this kid about". I mean, custom gallery software, a custom JPEG library of all things, all bespoke, why don't you just use off-the-shelf solutions? Why aren't you normal and use a normal language and what about the best practices and where's your business case and I can't go on about this because there's a technical term for people that say this kind of thing and it's "hater".

Thing is, when did a hater ever make anything cool? Come to think of it, when did a hater make anything at all? In my experience the most vocal haters have nothing behind their names except a long series of pseudonymous rants in other people's comment boxes. So friends, in the joyful spirit of earning-anew, let's talk about JPEG!

on color

JPEG is a funny thing. Photos are our lives and our memories, our first steps and our friends, and yet I for one didn't know very much about them. My mental model that "a JPEG is a rectangle of pixels" doesn't turn out to be quite right.

If you actually look in a normal JPEG, you see three planes of information. If I take this image, for example:

If I decode it, actually I get three images. Here's the first one:

This is just the greyscale version of the image. So, storytime! Remember black and white television? We had an old one that got moved around the house sometimes, like if Mom was working at something in the kitchen. We also had a color one in the living room, and you could watch one or the other and they showed the same stuff. Strange when you think about it though -- one being in color and the other not. Well it turns out that color was literally just added on, both historically and technically. The main broadcast was still in black and white, and then in one part of the frequency band there were separate color signals, which color TVs would pick up, mix with the black and white signal, and come out with color. Wikipedia notes that "color TV" was really just "colored TV", which is a phrase whose cleverness I respect. Big ups to the W P.

In the context of JPEG, this black-and-white signal is sometimes called "luma", but is more precisely called Y', where the "prime" (the apostrophe) indicates that the signal has gamma correction applied.

In the image above, I replaced the color planes (sometimes collectively called the "chroma") with zeroes, while losslessly keeping the luma. Below is the first color plane, with the Y' plane replaced with a uniform 50% luma, and the other color plane replaced with zeros.

This color signal is technically known as CB, which may be very imperfectly understood as the bluish component of the color. Well the original image wasn't very blue, so we don't see very much here.

Indeed, our eyes have a harder time seeing differences in color than differences in intensity. Apparently this goes all the way down to biology -- we have more receptors in our eyes for "black and white" and fewer for color.

Early broadcasters took advantage of this difference in perception by actually devoting more bandwidth in their broadcasts to luma than to chroma; if you check the Wikipedia page you will see that the area in the spectrum allocation devoted to color is much smaller than the area devoted to intensity. So it is in JPEG: the above image being half-width indicates that actually we're just encoding one CB sample for every two Y' samples.

Finally, here we have the CR color plane, which can loosely be thought of as the "redness" of the image.

These test images and crops preserve the actual encoding of this photo as it came from my camera, without re-encoding. That's partly why there's not much interesting going on; with the megapixels these days, it's hard to fit much of anything in a few hundred pixels square. This particular camera is sub-sampling in the horizontal direction, but it's also common to subsample vertically as well, producing color planes that are half-width and half-height. In my limited investigations I have found that cameras tend to sub-sample just in the X direction, producing what they call 4:2:2 images, and that standard software encoders subsample in both, producing 4:2:0.

Incidentally, properly scaling up the color planes is quite an irritating endeavor -- the standard indicates that the color is sampled between the locations of the Y' samples ("centered" chroma), but these images originally have EXIF data that indicates that the color samples are taken at the position of the first Y' sample ("co-sited" chroma). I'm pretty sure libjpeg doesn't delve into the EXIF to check this though, so it would seem that all renderings I have seen of these photos are subtly off.

But how do you get proper color out of these strange luma and chroma things? Well, the Y'CBCR colorspace is really just the same color cube as RGB, except rotated: the Y' axis traverses the diagonal from (0, 0, 0) (black) to (255, 255, 255) (white). CB and CR are perpendicular to that diagonal, pointing towards blue or red respectively. So to go back to RGB, you multiply by a matrix to rotate the cube.
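
Written out with the usual JFIF (full-range BT.601) constants, that rotation back to RGB looks like this; a reference sketch, ignoring the clamping to [0, 255] that a real decoder also does:

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return r, g, b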

It's not a very intuitive color system, as you can see from the images above. For one thing, at zero or full luma, the chroma axes have no meaning; black and white can have no hue. Indeed if you imagine trying to fit a cube corner-down into a similar-sized box, you end up either having empty space in the box, or you have to cut off corners from the cube, or both. Cut corners means that bits of the Y'CBCR signal are wasted; empty space means there are RGB colors that are not representable in Y'CBCR. I'm not sure, but I think both are true for the particular formulation of Y'CBCR used in JPEG.

There's more to say about color here but frankly I don't know enough to do so, even though I worked in digital video for many years. If this is something you are mildly interested in, I highly, highly recommend watching Wim Taymans' presentation at this year's GStreamer conference. He takes a look at color in video that is constructive, building up from biology through math to engineering. His is a principled approach rather than a list of rules. It really clarified a number of things for me (and opened doors to unknown unknowns beyond).

on cosines

Where were we? Right, JPEG. So the proper way to understand what JPEG is is to understand the encoding process. We've covered colorspace conversion from RGB to Y'CBCR and sub-sampling. Next, the image canvas is divided into equal-sized "macroblocks". (These are called "minimum coded units" (MCUs) in the JPEG context, but in video they are usually called macroblocks, and it's a better name.) Without sub-sampling, each macro-block will contain one 8-sample-by-8-sample block for each component (Y', CB, CR) of the image. In my images above, the canvas space corresponding to one chroma block is the space of two luma blocks, so the macroblocks will be 16 samples wide and 8 samples tall, and contain two Y' blocks and one each of CB and CR. If the image canvas can't be evenly divided into macroblocks, it is padded to fit, usually by duplicating the last column or row of samples.

Then to make a JPEG, each block is encoded separately, then the whole thing is just written out to a file, and you're done!

This description glosses over a couple of important points, but it's a good big-picture view to have in mind. The pipeline goes from RGB pixels, to a padded RGB canvas, to separate Y'CBCR planes, to a possibly subsampled set of those planes, to macroblocks, to encoded macroblocks, to the file. Decoding is the reverse. It's a totally doable, comprehensible thing, and that was one of the big takeaways for me from this project. I took photography classes in high school and it was really cool to see how to shoot, develop, and print film, and this is similar in many ways. The real "film" is raw-format data, which some cameras produce, but understanding JPEG is like understanding enlargers and prints and fixer baths and such things. It's smelly and dark but pretty cool stuff.

So, how do you encode a block? Well peoples, this is a kinda cool thing. Maybe you remember from some math class that, given n uniformly spaced samples, you can always represent that series as a sum of n cosine functions of equally spaced frequencies. In each little 8-by-8 block, that's what we do: a "forward discrete cosine transformation" (FDCT), which is just multiplying together some matrices for every point in the block. The FDCT is completely separable in the X and Y directions, so the space of 8 horizontal coefficients multiplies by the space of 8 vertical coefficients at each column to yield 64 total coefficients, which is not coincidentally the number of samples in a block.
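
Spelled out, the 8-by-8 FDCT is a direct transcription of the standard formula. Real encoders use a fast factorization, and JPEG also level-shifts the samples by -128 first; this sketch is just the math:

import math

def fdct_8x8(samples):
    # samples: 64 values, row-major; returns 64 coefficients, row-major
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    coeffs = []
    for v in range(8):
        for u in range(8):
            s = sum(samples[y * 8 + x]
                    * math.cos((2 * x + 1) * u * math.pi / 16)
                    * math.cos((2 * y + 1) * v * math.pi / 16)
                    for y in range(8) for x in range(8))
            coeffs.append(0.25 * c(u) * c(v) * s)
    return coeffs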

Funny thing about those coefficients: each one corresponds to a particular horizontal and vertical frequency. We can map these out as a space of functions; for example giving a non-zero coefficient to (0, 0) in the upper-left block of a 8-block-by-8-block grid, and so on, yielding a 64-by-64 pixel representation of the meanings of the individual coefficients. That's what I did in the test strip above. Here is the luma example, scaled up without smoothing:

The upper-left corner corresponds to a frequency of 0 in both X and Y. The lower-right is a frequency of 4 "hertz", oscillating from highest to lowest value in both directions four times over the 8-by-8 block. I'm actually not sure why there are some greyish pixels around the right and bottom borders; it's not a compression artifact, as I constructed these DCT arrays programmatically. Anyway. Point is, your lover's smile, your sunny days, your raw urban graffiti, your child's first steps, all of these are reified in your photos as a sum of cosine coefficients.

The odd thing is that what is reified into your pictures isn't actually all of the coefficients there are! Firstly, because the coefficients are rounded to integers. Mathematically, the FDCT is a lossless operation, but in the context of JPEG it is not because the resulting coefficients are rounded. And they're not just rounded to the nearest integer; they are probably quantized further, for example to the nearest multiple of 17 or even 50. (These numbers seem exaggerated, but keep in mind that the range of coefficients is about 8 times the range of the original samples.)

The choice of what quantization factors to use is a key part of JPEG, and it's subjective: low quantization results in near-indistinguishable images, but in middle compression levels you want to choose factors that trade off subjective perception with file size. A higher quantization factor leads to coefficients with fewer bits of information that can be encoded into less space, but results in a worse image in general.
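
In code, quantization is the one genuinely lossy step, and it is tiny (a sketch; tables are laid out row-major like the coefficients):

# Divide each frequency coefficient by its table entry and round;
# decoding multiplies back, so the rounding error is what JPEG loses.
def quantize(coefficients, qtable):
    return [round(c / q) for c, q in zip(coefficients, qtable)]

def dequantize(quantized, qtable):
    return [c * q for c, q in zip(quantized, qtable)]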

JPEG proposes a standard quantization matrix, with one number for each frequency (coefficient). Here it is for luma:

(define *standard-luma-q-table*
  #(16 11 10 16 24 40 51 61
    12 12 14 19 26 58 60 55
    14 13 16 24 40 57 69 56
    14 17 22 29 51 87 80 62
    18 22 37 56 68 109 103 77
    24 35 55 64 81 104 113 92
    49 64 78 87 103 121 120 101
    72 92 95 98 112 100 103 99))

This matrix is used for "quality 50" when you encode an 8-bit-per-sample JPEG. You can see that lower frequencies (the upper-left part) are quantized less harshly, and vice versa for higher frequencies (the bottom right).

(define *standard-chroma-q-table*
  #(17 18 24 47 99 99 99 99
    18 21 26 66 99 99 99 99
    24 26 56 99 99 99 99 99
    47 66 99 99 99 99 99 99
    99 99 99 99 99 99 99 99
    99 99 99 99 99 99 99 99
    99 99 99 99 99 99 99 99
    99 99 99 99 99 99 99 99))

For chroma (CB and CR) we see that quantization is much more harsh in general. So not only will we sub-sample color, we will also throw away more high-frequency color variation. It's interesting to think about, but also makes sense in some way; again in photography class we did an exercise where we shaded our prints with colored pencils, and the results were remarkable. My poor, lazy coloring skills somehow rendered leaves lifelike in different hues of green; really though, they were shades of grey, colored in imprecisely. "Colored TV" indeed.

With this knowledge under our chapeaux, we can now say what the "JPEG quality" setting actually is: it's simply that pair of standard quantization matrices scaled up or down. Towards "quality 100", the matrix approaches all-ones, for no quantization, and thus minimal loss (though you still have some rounding, often subsampling as well, and RGB-to-Y'CBCR gamut loss). Towards "quality 0" they scale to a matrix full of large values, for harsh quantization.
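
As a sketch, the usual IJG recipe for that scaling (what libjpeg's jpeg_set_quality boils down to, as far as I know) is:

def scale_q_table(table, quality):
    quality = min(max(quality, 1), 100)
    # quality 50 leaves the table as-is; 100 drives it towards all-ones
    scale = 5000 // quality if quality < 50 else 200 - 2 * quality
    return [min(max((q * scale + 50) // 100, 1), 255) for q in table]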

This understanding also explains those wavey JPEG artifacts you get on low-quality images. Those artifacts look like waves because they are waves. They usually occur at sharp intensity transitions, which like a cymbal crash cause lots of high frequencies that then get harshly quantized. Incidentally I suspect (but don't know) that this is the same reason that cymbals often sound bad in poorly-encoded MP3s, because of harsh quantization in the frequency domain.

Finally, the coefficients are written out to a file as a stream of bits. Each file gets a huffman code allocated to it, which ideally is built from the distribution of quantized coefficient sizes seen in all of the blocks of an image. There are usually different encodings for luma and chroma, to reflect their different quantizations. Reading and writing this bitstream is a bit of a headache but the algorithm is specified in the JPEG standard, and all you have to do is implement it. Notably, though, there is special support for encoding a run of zero-valued coefficients, which happens often after quantization. There are rarely wavey bits in a blue blue sky.

on transforms

It's terribly common for photos to be wrongly oriented. Unfortunately, the way that many editors fix photo rotation is by setting a bit in the EXIF information of the JPEG. This is ineffectual, as web browsers don't look in the EXIF information, and silly, because it turns out you can losslessly rotate most JPEG images anyway.

Consider that the body of a JPEG is an array of macroblocks. To rotate an image, you just have to rearrange those macroblocks, then rearrange the blocks inside the macroblocks (e.g. swap the two Y' blocks in my above example), then transform the blocks themselves.

The lossless transformations that you can do on a block are transposition, vertical flipping, and horizontal flipping.

Transposition flips a block along its downward-sloping diagonal. To do so, you just swap the coefficients at (u, v) with the coefficients at (v, u). Easy peasey.

Flipping is trickier. Consider the enlarged DCT image from above. What would it take to horizontally flip the function at (0, 1)? Instead of going from light to dark, you want it to go from dark to light. Simple: you just negate the coefficients! But you only want to negate those coefficients that are "odd" in the X direction, which are those coefficients whose column is odd. And actually that's all there is to it. Flipping vertically is the same, but for coefficients whose row is odd.

I said "most images" above because those whose size is not evenly divided by the macroblock size can't be losslessly rotated -- you will end up seeing some of the hidden data that falls off the edge of the canvas. Oh well. Most raw images are properly dimensioned, and if you're downscaling, you already have to re-encode anyway.

But that's just flipping and transposition, you say! What about rotation? Well it turns out that you can express rotation in terms of these operations: rotating 90 degrees clockwise is just a transpose and a horizontal flip (in that order). Together, flipping horizontally, flipping vertically, and transposing form a group, in the same way that flipping and flopping form a group for mattresses. Yeah!
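
Here is what the per-block part looks like in code, as a Python sketch on an 8x8 block stored as a list of rows (the rearranging of blocks and macroblocks described above is not shown):

def transpose(block):
    # Swap the coefficient at (u, v) with the one at (v, u).
    return [[block[v][u] for v in range(8)] for u in range(8)]

def hflip(block):
    # Negate coefficients in odd columns: the horizontally-odd basis
    # functions flip left-to-right when negated.
    return [[-c if v % 2 else c for v, c in enumerate(row)]
            for row in block]

def vflip(block):
    # Same trick for odd rows.
    return [[-c for c in row] if u % 2 else row[:]
            for u, row in enumerate(block)]

def rotate90(block):
    # 90 degrees clockwise: transpose, then flip horizontally.
    return hflip(transpose(block))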

on scheme

I wrote this library in Scheme because that's my language of choice these days. I didn't run into any serious impedance mismatches; Guile has a generic multi-dimensional array facility that made it possible to express many of these operations as generic folds, unfolds, or maps over arrays. The huffman coding part was a bit irritating, but all in all things were pretty good. The speed is pretty bad, but I haven't optimized it at all, and it gives me a nice test case for the compiler. Anyway, it's been fun and it suits my needs. Check out the project page if you're interested. Yes, to shave a yak you have to get a bit bovine and smelly, but yaks live in awesome places!

Finally I will leave you with a glitch, one of many that I have produced over the last couple weeks. Comments and corrections welcome below. Happy hacking!

by Andy Wingo at November 14, 2014 04:49 PM

Andy Wingo: generators in firefox now twenty-two times faster

(Andy Wingo)

It's with great pleasure that I can announce that, thanks to Mozilla's Jan de Mooij, the new ES6 generator functions are twenty-two times faster in Firefox!

Some back-story, for the unawares. There's a new version of JavaScript coming, ECMAScript 6 (ES6). Among the new features that ES6 brings are generator functions: functions that can suspend. Firefox's JavaScript engine, SpiderMonkey, has had support for generators for many years, long before other engines. This support was upgraded to the new ES6 standard last year, thanks to sponsorship from Bloomberg, and was shipped out to users in Firefox 26.

The generators implementation in Firefox 26 was quite basic. As you probably know, modern JavaScript implementations have a number of tiered engines. In the case of SpiderMonkey there are three tiers: the interpreter, the baseline compiler, and the optimizing compiler. Code begins execution in the interpreter, which is the quickest engine to start. If a piece of code is hot -- meaning that lots of time is being spent there -- then it will "tier up" to the next level, where it is analyzed, possibly optimized, and then compiled to machine code.

Unfortunately, generators in SpiderMonkey have always been stuck at the lowest tier, the interpreter. This is because of SpiderMonkey's choice of implementation strategy for generators. Generators were implemented as "floating interpreter stack frames": heap-allocated objects whose shape was exactly the same as a stack frame in the interpreter. This had the advantage of being fairly cheap to implement in the beginning, but ultimately it made them unable to take advantage of JIT compilation, as JIT code runs on its own stack which has a different layout. The previous strategy also relied on trampolining through a helper written in C++ to resume generators, which killed optimization opportunities.

The solution was to represent suspended generator objects as snapshots of the state of a stack frame, instead of as stack frames themselves. In order for this to be efficient, last year we did a number of block scope optimizations to try and reduce the amount of state that a generator frame would have to restore. Finally, around March of this year we were at the point where we could refactor the interpreter to implement generators on the normal interpreter stack, with normal interpreter bytecodes, with the vision of being able to JIT-compile those bytecodes.

I ran out of time before I could land that patchset; although the patches were where we wanted to go, they actually caused generators to be even slower and so they languished in Bugzilla for a few more months. Sad monkey. It was with delight, then, that a month or so ago I saw that SpiderMonkey JIT maintainer Jan de Mooij was interested in picking up the patches. Since then he has been hacking off and on at getting my old patches into shape, and ended up applying them all.

He went further, optimizing stack frames to not reserve space for "aliased" locals (locals allocated on the scope chain), speeding up object literal creation in the baseline compiler, and finally implementing baseline JIT compilation for generators.

So, after all of that perf nargery, what's the upshot? Twenty-two times faster! In this microbenchmark:

function *g(n) {
    for (var i=0; i<n; i++)
        yield i;
}
function f() {
    var t = new Date();
    var it = g(1000000);
    for (var i=0; i<1000000; i++)
        it.next();
    print(new Date() - t);
}
f();

Before, it took SpiderMonkey 980 milliseconds to complete on Jan's machine. After? Only 43! It's actually marginally faster than V8 at this point, which has (temporarily, I think) regressed to 45 milliseconds on this test. Anyway. Competition is great and as a committer to both projects I find it very satisfactory to have good implementations on both sides.

As in V8, in SpiderMonkey generators cannot yet reach the highest tier of optimization. I'm somewhat skeptical that it's necessary, too, as you expect generators to suspend fairly frequently. That said, a yield point in a generator is, from the perspective of the optimizing compiler, not much different from a call site, in that it causes all locals to be saved. The difference is that locals may have unboxed representations, so we would have to box those values when saving the generator state, and unbox on restore.

Thanks to Bloomberg for funding the initial work, and big, big thanks to Mozilla's Jan de Mooij for picking up where we left off. Happy hacking with generators!

by Andy Wingo at November 14, 2014 08:41 AM

November 10, 2014

GStreamer: GStreamer Core, Plugins and RTSP server 1.4.4 stable release

(GStreamer)

The GStreamer team is pleased to announce a bugfix release of the stable 1.4 release series. The 1.4 release series is adding new features on top of the 1.2 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework that contains new features. The 1.4.x bugfix releases only contain important bugfixes compared to 1.4.0.

Binaries for Android, iOS, Mac OS X and Windows are provided by the GStreamer project for this release. The Android binaries are now built with the r10c NDK and as such binary compatible again with all NDK and Android releases. Additionally now binaries for Android ARMv7 and Android X86 are provided. This binary release features the first 1.4 releases of GNonLin and the GStreamer Editing Services.

The 1.x series is a stable series targeted at end users. It is not API or ABI compatible with the 0.10.x series. It can, however, be installed in parallel with the 0.10.x series and will not affect an existing 0.10.x installation.

The stable 1.4.x release series is API and ABI compatible with 1.0.x and any other 1.x release series in the future. Compared to 1.0.x it contains some new features and more intrusive changes that were considered too risky as a bugfix.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, or gst-libav, or gst-rtsp-server.

Check the release announcement mail for details and the release notes above for a list of changes.

Also available are binaries for Android, iOS, Mac OS X and Windows.

November 10, 2014 09:00 AM

November 09, 2014

Andy Wingo: ffconf 2014

(Andy Wingo)

Last week I had the great privilege of speaking at ffconf in Brighton, UK. It was lovely. The town put on a full demonstration of its range of November weather patterns, from blue skies to driving rain to hail (!) to sea-spray to drizzle and back again. Good times.

The conference itself was quite pleasant as well, and from the speaker perspective it was amazing. A million thanks to Remy and Julie for making it such a pleasure. ffconf is mostly a front-end development conference, so it's not directly related with the practice of my work on JS implementations -- perhaps you're unaware, but there aren't so many browser implementors that actually develop content for their browsers, and indeed fewer JS implementors that actually write JS. Me, I sling C++ all day long and the most JavaScript I write is for tests. When in the weeds, sometimes we forget we're building an amazing runtime and that people do inspiring things with it, so it's nice to check in with front-end folks at their conferences to see what people are excited about.

My talk was about the part of JavaScript implementations that are written in JavaScript itself. This is an area that isn't so well known, and it has its amusing quirks. I think it can be interesting to a curious JS hacker who wants to spelunk down a bit to see what's going on in their browsers. Intrepid explorers might even find opportunities to contribute patches. Anyway, nerdy stuff, but that's basically how I roll.

The slides are here: without images (350kB PDF) or with images (3MB PDF).

I haven't been to the UK in years, and being in a foreign country where everyone speaks my native language was quite refreshing. At the same time there was an awkward incident in which I was reminded that though close, American and English just aren't the same. I made this silly joke that if you get a polyfill into a JS implementation, then shucks, you have a "gollyfill", 'cause golly it made it in! In the US I think "golly" is just one of those milquetoast profanities, "golly" instead of "god" like saying "shucks" instead of "shit". Well in the UK that's a thing too I think, but there is also another less fortunate connotation, in which "golly" as a noun can be a racial slur. Check the Wikipedia if you're as ignorant as I was. I think everyone present understood that wasn't my intention, but if that is not the case I apologize. With slides though it's less clear, so I've gone ahead and removed the joke from the slides. It's probably not a ball to take and run with.

However, I do have a ball that you can run with! And actually, this was another terrible joke that wasn't bad enough to inflict upon my audience, but that fate now gives me the opportunity to use. So never fear, there are still bad puns in the slides. But you'll have to click through to the PDF for optimal groaning.

Happy hacking, JavaScripters, and until next time.

by Andy Wingo at November 09, 2014 05:36 PM

November 04, 2014

Christian Schaller: Transmageddon Video Transcoder version 1.5 released

(Christian Schaller)

So just a quick update. I pushed out the 1.5 release of Transmageddon today. No major new features, just a fix for a regression in dealing with files where you only have a video track, or where you want to drop the audio track as part of the transcoding process. I am also having some issues with Intel hardware encoding at the moment, but I think those are somewhere lower in the stack, so I hope to file a bug against either GStreamer or the libva project for that issue; for now I recommend not having the Intel VA plugins for GStreamer installed.

As always you find the latest release on linuxrising.org.

I also submitted a Transmageddon update to Fedora 21, so if you are a Fedora user please test the build there and give it some karma.

by uraeus at November 04, 2014 06:18 PM

Arun Raghavan: Notes from the PulseAudio Mini Summit 2014

The third week of October was quite action-packed, with a whole bunch of conferences happening in Düsseldorf. The Linux audio developer community as well as the PulseAudio developers each had a whole day of discussions related to a wide range of topics. I’ll be summarising the events of the PulseAudio mini summit day here. The discussion was split into two parts, the first half of the day with just the current core developers and the latter half with members of the community participating as well.

I’d like to thank the Linux Foundation for sparing us a room to carry out these discussions — it’s fantastic that we are able to colocate such meetings with a bunch of other conferences, making it much easier than it would otherwise be for all of us to converge to a single place, hash out ideas, and generally have a good time in real life as well!

Happy faces — incontrovertible proof that everyone loves PulseAudio!

With a whole day of discussions, this is clearly going to be a long post, so you might want to grab a coffee now. :)

Release plan

We have a few blockers for 6.0, and some pending patches to merge (mainly HSP support). Once this is done, we can proceed to our standard freeze → release candidate → stable process.

Build simplification for BlueZ HFP/HSP backends

For simplifying packaging, it would be nice to be able to build all the available BlueZ module backends in one shot. There wasn’t much opposition to this idea, and David (Henningsson) said he might look at this. (as I update this before posting, he already has)

srbchannel plans

We briefly discussed plans around the recently introduced shared ringbuffer channel code for communication between PulseAudio clients and the server. We talked about the performance benefits, and future plans such as direct communication between the client and server-side I/O threads.

Routing framework patches

Tanu (Kaskinen) has a long-standing set of patches to add a generic routing framework to PulseAudio, developed notably by Jaska Uimonen, Janos Kovacs, and other members of the Tizen IVI team. This work adds a set of new concepts that we’ve not been entirely comfortable merging into the core. To unblock these patches, it was agreed that doing this work in a module and using a protocol extension API would be more beneficial. (Tanu later did a demo of the CLI extensions that have been made for the new routing concepts)

module-device-manager

As a consequence of the discussion around the routing framework, David mentioned that he’d like to take forward Colin’s priority list work in the mean time. Based on our discussions, it looked like it would be possible to extend module-device-manager to make it port aware and get the kind of functionality we want (the ability to have a priority-order list of devices). David was to look into this.

Module writing infrastructure

Relatedly, we discussed the need to export the PA internal headers to allow externally built modules. We agreed that this would be okay to have if it was made abundantly clear that this API would have absolutely no stability guarantees, and is mostly meant to simplify packaging for specialised distributions.

Which led us to the other bit of infrastructure required to write modules more easily — making our protocol extension mechanism more generic. Currently, we have a static list of protocol extensions in our core. Changing this requires exposing our pa_tagstruct structure as public API, which we haven’t done. If we don’t want to do that, then we would expose a generic “throw this blob across the protocol” mechanism and leave it to the module/library to take care of marshalling/unmarshalling.

Resampler quality evaluation

Alexander shared a number of his findings about resampler quality on PulseAudio, vs. those found on Windows and Mac OS. Some questions were asked about other parameters, such as relative CPU consumption, etc. There was also some discussion on how to try to carry this work to a conclusion, but no clear answer emerged.

It was also agreed on the basis of this work that support for libsamplerate and ffmpeg could be phased out after deprecation.

Addition of a “hi-fi” mode

The discussion came around to the possibility of having a mode where (if the hardware supports it), PulseAudio just plays out samples without resampling, conversion, etc. This has been brought up in the past for “audiophile” use cases where the card supports 88.2/96 kHz and higher sample rates.

No objections were raised to having such a mode — I’d like to take this up at some point of time.

LFE channel module

Alexander has some code for filtering low frequencies for the LFE channel, currently as a virtual sink, that could eventually be integrated into the core.

rtkit

David raised a question about the current status of rtkit and whether it needs to exist, and if so, where. Lennart brought up the fact that rtkit currently does not work on systemd+cgroups based setups (I don’t seem to have the reason in my notes, and I don’t recall it off the top of my head).

The conclusion of the discussion was that some alternate policy method for deciding RT privileges, possibly within systemd, would be needed, but for now rtkit should be used (and fixed!).

kdbus/memfd

Discussions came up about the possibility of using kdbus and/or memfd for the PulseAudio transport. This is interesting to me, but there doesn’t seem to be an immediately clear benefit over our SHM mechanism in terms of performance, so some work is needed to evaluate how this could be used and what the benefit would be.

ALSA controls spanning multiple outputs

David has now submitted patches for controls that affect multiple outputs (such as “Headphone+LO”). These are currently being discussed.

Audio groups

Tanu would like to add code to support collecting audio streams into “audio groups” to apply collective policy to them. I am supposed to help review this, and Colin mentioned that module-stream-restore already uses similar concepts.

Stream and device objects

Tanu proposed the addition of new objects to represent streams and devices. There didn’t seem to be consensus on adding these, but there was agreement on a clear need to consolidate common code from sink-input/source-output and sink/source implementations. The idea was that having a common parent object for each pair might be one way to do this. I volunteered to help with this if someone’s taking it up.

Filter sinks

Alexander brought up the need for a filter API in PulseAudio, and this is something I really would like to have. I am supposed to sketch out an API (though implementing this is non-trivial and will likely take time).

Dynamic PCM for HDMI

David plans to see if we can use profile availability to help determine when an HDMI device is actually available.

Browser volumes

The usability of flat-volumes for browser use cases (where the volume of streams can be controlled programmatically) was discussed, and my patch to allow optional opt-out by a stream from participating in flat volumes came up. Tanu and I are to continue the discussion already on the mailing list to come up with a solution for this.

Handling bad rewinding code

Alexander raised concerns about the quality of rewinding code in some of our filter modules. The agreement was that we needed better documentation on handling rewinds, including how to explicitly not allow rewinds in a sink. The example virtual sink/source code also needs to be adjusted accordingly.

BlueZ native backend

Wim Taymans’ work on adding back HSP support to PulseAudio came up. Since the meeting, I’ve reviewed and merged this code with the change we want. Speaking to Luiz Augusto von Dentz from the BlueZ side, something we should also be able to add back is for PulseAudio to act as an HSP headset (using the same approach as for HSP gateway support).

Containers and PA

Takashi Iwai raised a question about what a good way to run PA in a container was. The suggestion was that a tunnel sink would likely be the best approach.

Common ALSA configuration

Based on discussion from the previous day at the Linux Audio mini-summit, I’m supposed to look at the possibility of consolidating the various mixer configuration formats we currently have to deal with (primarily UCM and its implementations, and Android’s XML format).

(thanks to Tanu, David and Peter for reviewing this)

by Arun at November 04, 2014 04:49 PM

November 03, 2014

Jean-François Fortin Tam: Tricks or Tracebacks? Pitivi 0.94 is here

Dear werepenguins, we’re thrilled to announce the immediate availability of Pitivi 0.94! This is the fourth release for the new version of our video editor based on GES, the GStreamer Editing Services library. Take a look at my previous blog post to understand in what context 0.94 has been brewing. This is mainly a maintenance release, but it does pack a few interesting improvements & features in addition to the bug fixes.

The first thing you will notice is that the main toolbar, menubar and titlebar have been replaced by a unified GTK HeaderBar, saving a ton of precious vertical space and making better use of the horizontal space. Once you try it, you can’t go back. There is beauty in the equilibrium it has now, compared to the previously clunky and unbalanced layout:

pitivi 0.94 headerbar comparison 1

Pitivi 0.93 on the left, 0.94 on the right

The combined screenshot above allows you to get the “complete picture” of how this change affects the main window, but it’s hard to get a sense of scale and it does not really do justice to the awesomeness of client-side decorations like the GTK HeaderBar. So here’s a simplified version where all the “wasted space” is highlighted in red:

pitivi 0.94 headerbar comparison 2

Pitivi 0.93 above, 0.94 below

Pretty rad, huh?

Beyond that eye-popping novelty, many distro/setup-dependent startup crashes have been investigated and fixed:

  • Various Linux distributions have started shipping a broken version of CoGL in recent months, which led to crashes. Technically this is a bug in the CoGL library/packaging, but we found out that the functions we were calling in that particular case were not needed for Pitivi, so we dropped our use of those broken CoGL APIs. Problem solved.
  • People running Pitivi outside of GNOME Shell were seeing crashes due to Clutter GStreamer video output, so we ported the viewer widget to use GStreamer’s new GL video output (glimagesink) instead of the ClutterSink. We had to fix various bugs in GStreamer’s glimagesink to raise it to the quality we needed, and our fixes have been integrated in GStreamer 1.4 (this is why we depend on that version). The GL image sink is expected to be a more future-proof solution.
  • We found issues related to gobject introspection or the overrides provided by gst-python. Again, make sure you have version 1.4 for things to work properly.
  • On avant-garde Linux distributions, you would get a TypeError traceback (“unsupported operand type(s) for /: ‘int’ and ‘NoneType’”) preventing startup, which we investigated as bug 735529. This is now fixed in Pitivi.

We also have a collection of bug fixes to the way we handle the various resizeable & detachable components of the main window:

  • The default positioning of UI components (when starting from a fresh install) has been improved to be balanced properly
  • Undocked window components do not shift position on startup anymore
  • Docked window components do not shift position on startup anymore, when the window is not maximized. When the window is maximized, the issue remains (your help to investigate this problem is very much welcome, see bug 723061)

The title editor’s UI has been simplified and works much more reliably:

pitivi 0.94 title editor

Also:

  • Undo/redo should be globally working again; please file specific bug reports for the parts that don’t.
  • Pitivi has been ported to Python 3
  • The user manual is now up to date with the state of the new Pitivi series
  • Educational infobars throughout the UI have been tweaked to make their colors less intrusive
  • Various other fixes as usual. Testing and providing detailed reports for issues you encounter certainly helps!
  • There’s more. Find out in the release notes for 0.94!

This release is meant to be the last release based on our current “buggy stack”; indeed, as I mentioned in my previous status update, the next release (0.95) will run on top of a refined and incredibly more robust backend thanks to the work that Mathieu and Thibault have been doing to replace GNonLin by NLE (the new non-linear engine for GES) and fixing videomixing issues.

We recommend all 0.91/0.92/0.93 users to upgrade to this release. Until this trickles down into distributions, you can download our all-in-one bundle and try out 0.94, right here and now. Enjoy!

by nekohayo at November 03, 2014 04:30 AM

November 01, 2014

Bastien Nocera: Hardware support news

(Bastien Nocera)

Trackballs

I dusted off (literally) my Logitech Marble trackball to replace the Intuos tablet + mouse combination that I was using to cut down on the lateral movement of my right arm which led to back pains.

Not that you care about that one bit, but that meant that I needed a way to get a scroll wheel working with this scroll-wheel-less trackball. That's now implemented in gnome-settings-daemon for GNOME 3.16. You'd run:


gsettings set org.gnome.settings-daemon.peripherals.trackball scroll-wheel-emulation-button 8

With "8" being the mouse button number to use to make the trackball ball into a wheel. We plan to add an interface to configure this in the Settings.

Touchscreens

Touchscreens are now switched off when the screensaver is on. This means you'll usually need to use one of the hardware buttons on tablets, or a mouse or keyboard on laptops to turn the screen back on.

Note that you'll need a kernel patch to avoid surprises when the touchscreen is re-enabled.

More touchscreens

The driver for the Goodix touchscreen found in the Onda v975w is now upstream as well.

by Bastien Nocera (noreply@blogger.com) at November 01, 2014 01:08 AM

October 30, 2014

Jan Schmidt: 2014 GStreamer Conference

(Jan Schmidt)

I’ve been home from Europe over a week, after heading to Germany for the annual GStreamer conference and Linuxcon Europe.

We had a really great turnout for the GStreamer conference this year, as well as an amazing schedule of talks. All the talks were recorded by Ubicast, who got all the videos edited and uploaded in record time. The whole conference is available for viewing at http://gstconf.ubicast.tv/channels/#gstreamer-conference-2014

I gave one of the last talks of the schedule – about my current work adding support for describing and handling stereoscopic (3D) video. That support should land upstream sometime in the next month or two, so more on that in a bit.

There were too many great talks to mention them individually, but I was excited by 3 strong themes across the talks:

  • WebRTC/HTML5/Web Streaming support
  • Improving performance and reducing resource usage
  • Building better development and debugging tools

I’m looking forward to us collectively making progress on all those things and more in the upcoming year.

by thaytan at October 30, 2014 12:35 PM

October 22, 2014

Christian Schaller: GStreamer Conference 2014 talks online

(Christian Schaller)

For those of you who, like me, missed this year's GStreamer Conference, the recorded talks are now available online thanks to Ubicast. Ubicast has been a tremendous partner for GStreamer over the years, making sure we have high-quality talk recordings online shortly after the conference ends. So be sure to check out this year's batch of great GStreamer talks.

Btw, I also did a minor release of Transmageddon today, which mostly includes a couple of bugfixes and a few less deprecated widgets :)

by uraeus at October 22, 2014 06:24 PM

October 21, 2014

Bastien Nocera: A GNOME Kernel wishlist

(Bastien Nocera)

GNOME has long had relationships with Linux kernel development, in that we would have some developers do our bidding, helping us solve hard problems.

I've posted a wishlist of kernel features we'd like to see implemented on the GNOME Wiki, and referenced it on the kernel mailing-list.

I hope it sparks healthy discussions about alternative (and possibly existing) features, allowing us to make instant progress.

by Bastien Nocera (noreply@blogger.com) at October 21, 2014 11:06 AM

October 20, 2014

GStreamer: gst-validate, gst-editing-services, gst-python and gnonlin 1.4.0 stable release

(GStreamer)

The GStreamer team is pleased to announce the stable 1.4 release of gst-editing-services, gst-python, gnonlin, and GstValidate. The 1.4 release series is adding new features on top of the 1.0 and 1.2 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Check out the release notes for gst-editing-services, gnonlin, gst-python, gst-validate, and download tarballs for gst-editing-services, gnonlin, gst-python, and gst-validate.

October 20, 2014 01:00 PM

October 17, 2014

Andy Wingo: ffs ssl

(Andy Wingo)

I just set up SSL/TLS on my web site. Everything can be had via https://wingolog.org/, and things appear to work. However, the process of transitioning even a simple web site to SSL is so clownshoes bad that it's amazing anyone ever does it. So here's an incomplete list of things that can go wrong when you set up TLS on a web site.

You search "how to set up https" on the Googs and click the first link. It takes you here which tells you how to use StartSSL, which generates the key in your browser. Whoops, your private key is now known to another server on this internet! Why do people even recommend this? It's the worst of the worst of Javascript crypto.

OK so you decide to pay for a certificate, assuming that will be better, and because who knows what's going on with StartSSL. You've heard of RapidSSL so you go to rapidssl.com. WTF their price is 49 dollars for a stupid certificate? Your domain name was only 10 dollars, and domain name resolution is an actual ongoing service, unlike certificate issuance that just happens one time. You can't believe it so you click through to the prices to see, and you get this:

Whatttttttttt

OK so I'm using Epiphany on Debian and I think that uses the system root CA list which is different from what Chrome or Firefox do but Jesus this is shaking my faith in the internet if I can't connect to an SSL certificate provider over SSL.

You remember hearing something on Twitter about cheaper certs, and oh ho ho, it's rapidsslonline.com, not just RapidSSL. WTF. OK. It turns out Geotrust and RapidSSL and Verisign are all owned by Symantec anyway. So you go and you pay. Paying is the first thing you have to do on rapidsslonline, before anything else happens. Welp, cross your fingers and take out your credit card, cause SSLanta Clause is coming to town.

Recall, distantly, that SSL has private keys and public keys. To create an SSL certificate you have to generate a key on your local machine, which is your private key. That key shouldn't leave your control -- that's why the DigitalOcean page is so bogus. The certification authority (CA) then needs to receive your public key and then return it signed. You don't know how to do this, because who does? So you Google and copy and paste command line snippets from a website. Whoops!

Hey neat it didn't delete your home directory, cool. Let's assume that your local machine isn't rooted and that your server isn't rooted and that your hosting provider isn't rooted, because that would invalidate everything. Oh what so the NSA and the five eyes have an ongoing program to root servers? Um, well, water under the bridge I guess. Let's make a key. You google "generate ssl key" and this is the first result.

# openssl genrsa -des3 -out foo.key 1024

Whoops, you just made a 1024-bit key! I don't know if those are even accepted by CAs any more. Happily if you leave off the 1024, it defaults to 2048 bits, which I guess is good.

Also you just made a key with a password on it (that's the -des3 part). This is eminently pointless. In order to use your key, your web server will need the decrypted key, which means it will need the password to the key. Adding a password does nothing for you. If you lost your private key but you did have it password-protected, you're still toast: the available encryption cyphers are meant to be fast, not hard to break. Any serious attacker will crack it directly. And if they have access to your private key in the first place, encrypted or not, you're probably toast already.

OK. So let's say you make your key, and make what's called the "CSR", to ask for the cert.

# openssl req -new -key foo.key -out foo.csr

Now you're presented with a bunch of pointless-looking questions like your country code and your "organization". Seems pointless, right? Well now I have to live with this confidence-inspiring dialog, because I left off the organization:

Don't mess up, kids! But wait there's more. You send in your CSR, finally figure out how to receive mail for hostmaster@yourdomain.org because that's what "verification" means (not, god forbid, control of the actual web site), and you get back a certificate. Now the fun starts!

How are you actually going to serve SSL? The truly paranoid use an out-of-process SSL terminator. Seems legit except if you do that you lose any kind of indication about what IP is connecting to your HTTP server. You can use a more HTTP-oriented terminator like bud but then you have to mess with X-Forwarded-For headers and you only get them on the first request of a connection. You could just enable mod_ssl on your Apache, but that code is terrifying, and do you really want to be running Apache anyway?

In my case I ended up switching over to nginx, which has a startlingly underspecified configuration language, but for which the Debian defaults are actually not bad. So you uncomment that part of the configuration, cross your fingers, Google a bit to remind yourself how systemd works, and restart the web server. Haich Tee Tee Pee Ess ahoy! But did you remember to disable the NULL authentication method? How can you test it? What about the NULL encryption method? These are actual things that are configured into OpenSSL, and specified by standards. (What is the use of a secure communications standard that does not provide any guarantee worth speaking of?) So you google, copy and paste some inscrutable incantation into your config, turn them off. Great, now you are a dilettante tweaking your encryption parameters, I hope you feel like a fool because I sure do.

Except things are still broken if you allow RC4! So you better make sure you disable RC4, which incidentally is exactly the opposite of the advice that people were giving out three years ago.

OK, so you took your certificate that you got from the CA and your private key and mashed them into place and it seems the web browser works. Thing is though, the key that signs your certificate is possibly not in the actual root set of signing keys that browsers use to verify the key validity. If you put just your key on the web site without the "intermediate CA", then things probably work but browsers will make an additional request to get the intermediate CA's key, slowing down everything. So you have to concatenate the text files with your key and the one with the intermediate CA's key. They look the same, just a bunch of numbers, but don't get them in the wrong order because apparently the internet says that won't work!

But don't put in too many keys either! In this image we have a cert for jsbin.com with one intermediate CA:

And here is the same but with a different root that signed the GeoTrust Global CA certificate. Apparently there was a time in which the GeoTrust cert hadn't been added to all of the root sets yet, and it might not hurt to include them all:

Thing is, the first one shows up "green" in Chrome (yay), but the second one shows problems ("outdated security settings" etc etc etc). Why? Because the link from Equifax to Geotrust uses a SHA-1 signature, and apparently that's not a good idea any more. Good times? (Poor Remy last night was doing some basic science on the internet to bring you these results.)

Or is Chrome denying you the green because it was RapidSSL that signed your certificate with SHA-1 and not SHA-256? It won't tell you! So you Google and apply snakeoil and beg your CA to reissue your cert, hopefully they don't charge for that, and eventually all is well. Chrome gives you the green.

Or does it? Probably not, if you're switching from a web site that is also available over HTTP. Probably you have some images or CSS or Javascript that's being loaded over HTTP. You fix your web site to have scheme-relative URLs (like //wingolog.org/ instead of http://wingolog.org/), and make sure that your software can deal with it all (I had to patch Guile :P). Update all the old blog posts! Edit all the HTMLs! And finally, green! You're golden!

Or not! Because if you left on SSLv3 support you're still broken! Also, TLSv1.0, which is actually greater than SSLv3 for no good reason, also has problems; and then TLS1.1 also has problems, so you better stick with just TLSv1.2. Except, except, older Android phones don't support TLSv1.2, and neither does the Googlebot, so you don't get the rankings boost you were going for in the first place. So you upgrade your phone because that's a thing you want to do with your evenings, and send snarky tweets into the ether about scumbag google wanting to promote HTTPS but not supporting the latest TLS version.

So finally, finally, you have a web site that offers HTTPS and HTTP access. You're good, right? Except no! (Catching on to the pattern?) Because what happens is that people just type in web addresses to their URL bars like "google.com" and leave off the HTTP, because why type those stupid things. So you arrange for http://www.wobsite.com to redirect to https://www.wobsite.com for users that have visited the HTTPS site. Except no! Because any network attacker can simply strip the redirection from the HTTP site.

The "solution" for this is called HTTP Strict Transport Security, or HSTS. Once a visitor visits your HTTPS site, the server sends a response that tells the browser never to fetch HTTP from this site. Except that doesn't work the first time you go to a web site! So if you're Google, you friggin add your name to a static list in the browser. EXCEPT EVEN THEN watch out for the Delorean.

And what if instead they go to wobsite.com instead of the www.wobsite.com that you configured? Well, better enable HSTS for the whole site, but to do anything useful with such a web request you'll need a wildcard certificate to handle the multiple URLs, and those run like 150 bucks a year, for a one-bit change. Or, just get more single-domain certs and tack them onto your cert, using the precision tool cat, but don't do too many, because if you do you will overflow the initial congestion window of the TCP connection and you'll have to wait for an ACK on your certificate before you can actually exchange keys. Don't know what that means? Better look it up and be an expert, or your wobsite's going to be slow!

If your security goals are more modest, as they probably are, then you could get burned the other way: you could enable HSTS, something could go wrong with your site (an expired certificate perhaps), and then people couldn't access your site at all, even if they have no security needs, because HTTP is turned off.

Now you start to add secure features to your web app, safe with the idea you have SSL. But better not forget to mark your cookies as secure, otherwise they could be leaked in the clear, and better not forget that your website might also be served over HTTP. And better check up on when your cert expires, and better have a plan for embedded browsers that don't have useful feedback to the user about certificate status, and what about your CA's audit trail, and better stay on top of the new developments in security! Did you read it? Did you read it? Did you read it?
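
As for keeping an eye on certificate expiry, one way is a little script run from cron; here is a sketch using Python's ssl module (the host name is just an example):

import socket
import ssl
import time

def cert_days_left(host, port=443):
    # Fetch the server's certificate and count the days until its
    # "notAfter" date; alert somewhere when the number gets small.
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.socket(), server_hostname=host) as s:
        s.connect((host, port))
        not_after = s.getpeercert()["notAfter"]
    return int((ssl.cert_time_to_seconds(not_after) - time.time()) // 86400)

print(cert_days_left("wingolog.org"))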

It's a wonder anything works. Indeed I wonder if anything does.

by Andy Wingo at October 17, 2014 02:33 PM

October 12, 2014

GStreamer: GStreamer Conference 2014: Talks, Schedule and Speakers

(GStreamer)

This year's GStreamer Conference will take place on October 16-17 in Düsseldorf, Germany, alongside LinuxCon Europe, the Embedded Linux Conference Europe, and the Linux Plumbers Conference.

The conference schedule with full details about talks and speakers is now available on the conference website.

Topics covered include embedded systems, mobile platforms, optimisations, adaptive streaming, hardware-accelerated video decoding, testing and QA, OpenGL, editing, digital television support, latest codec developments, stereoscopic 3D video, and many others.

All talks will be recorded by Ubicast.

If you're not registered yet, it's not too late, you can still sign up, see the conference home page for details.

Many thanks go to our sponsors Centricular, Collabora, and Google without whom this conference would not have been possible in this form.

We hope to see you all in Düsseldorf next week!

October 12, 2014 01:30 PM

October 11, 2014

Bastien Nocera: And now for some hardware (Onda v975w)

(Bastien Nocera)

Prodded by Adam Williamson's fedlet work, and by my inability to get an Android phone to display anything, I bought an x86 tablet.

At first, I was more interested in buying a brand-name one, such as the Dell Venue 8 Pro Adam has, or the Lenovo Miix 2 that Benjamin Tissoires doesn't seem to get enough time to hack on. But all those tablets are around 300€ at most retailers, and have a smaller 7- or 8-inch screen.

So I bought a "not exported out of China" tablet, the 10" Onda v975w. The prospect of getting a no-name tablet scared me a little. Would it be as "good" (read bad) as a PadMini or an Action Pad?


Vrrrroooom.


Well, the hardware's pretty decent, and feels rather solid. There's a small amount of light leakage on the side of the touchscreen, but not something too noticeable. I wish it had a button on the bezel to mimic the Windows button on some other tablets, but the edge gestures should replace it nicely.

The screen is pretty gorgeous and its high DPI triggers the eponymous mode in GNOME.

With help of various folks (Larry Finger, and the aforementioned Benjamin and Adam), I got the tablet to a state where I could use it to replace my force-obsoleted iPad 1 to read comic books.

I've put up a wiki page with the status of hardware/kernel support. It doesn't contain all my notes just yet (sound is working, touchscreen will work very very soon, and various "basic" features are being worked on).

I'll be putting up the fixed-up Wi-Fi driver and more instructions about installation on the Wiki page.

And if you want to make the jump, the tablets are available at $150 plus postage from Aliexpress.

Update: On Google+ and in comments of this blog, it was pointed out that the seller on Aliexpress was trying to scam people. All my apologies, I just selected the cheapest from this website. I personally bought it on Amazon.fr using NewTec24 FR as the vendor.

by Bastien Nocera (noreply@blogger.com) at October 11, 2014 05:57 PM

October 10, 2014

Zeeshan Ali: Life update

(Zeeshan Ali)

Like many others on planet.gnome, it seems I also don't feel like posting much on my blog any more, since I post almost all major events of my life on social media (or SOME, as it's for some reason now known in Finland). To be honest, the thought usually doesn't even occur to me anymore. :( Well, anyway! Here is a brief summary of what's been up over the last many months:
  • Got divorced. Yeah, not nice at all but life goes on! At least I got to keep my lovely cat.

  • It's been almost a year (14 days short) since I moved to London. In a way it was good that I was in a new city at the time of the divorce, as it was an opportunity to start a new life. I made some cool new friends, mostly the GNOME gang in here.

    London has its quirks, but overall I'm pretty happy to be living here. One big issue is that most of my friends are in Finland, so I miss them very much. Hopefully, in time I'll make a lot more friends in London, and my friends from Finland will visit me too.

    The best thing about London is the weather! No, I'm not joking at all. Not only is it a big improvement compared to Helsinki, the rumours that "it's always raining in London" are greatly (I can't stress this word enough) exaggerated.
  • I got my eyes Z-LASIK'ed so no more glasses!

  • Started taking:

    • Driving lessons. I failed the first driving test today, but now that I know what I did wrong, I'm sure I won't repeat the same mistakes next time and will pass.
    • Helicopter flying lessons. Yes! I'm not joking. I grew up watching Airwolf, and ever since then I've been fascinated by helicopters and wanted to fly them, but never got around to doing it. It's very expensive, as you'd imagine, so I'm only taking two lessons a month. At this pace, I should have my PPL(H) by the end of 2015.

      It turns out that I'm very good at the one thing most people find very challenging to master: hovering. The rest isn't hard in practice either. Theory is the biggest challenge for me. Here is the video recording of the 15-minute trial lesson I started with.

October 10, 2014 06:09 PM

October 04, 2014

Jean-François Fortin Tam: An update from the Pitivi 2014 summer battlefront

Hello gentle readers! You may have been wondering what has been going on since the 0.93 release and the Pitivi fundraising campaign. There are a few reasons why we’ve been quiet on the blogging side this summer:

  • Mathieu and Thibault have been working hard to bring us towards “1.0 quality”, improving and stabilizing various parts of GStreamer to make the backend of Pitivi more reliable (more details on this further below). They preferred to write code rather than spend their time doing marketing/fundraising. This is understandable; it is a better use of our scarce specialized resources.
  • Personally, I have been juggling many obligations (my daily business, preparing for the conferences season, serving on the board of the GNOME Foundation, and Life in General), which left me with pretty much no time or energy for development or marketing-related activities on Pitivi, just enough to participate in some discussions and help a bit with administration and co-mentorship. I did not have time to research blogging material about what others were doing, hence the lack of status updates in recent times.

Now that I finally have a little bit of time on my hands, I will provide you with the overdue high-level status update from the trenches.

Summer Wars. That’s totally how coding happens in the real world.

GUADEC, status of the 2014 fundraiser

For the curious among you, my account of GUADEC 2014 is here. Among the multiple presentations I gave there, my main talk was about Pitivi. I touched upon the status of the fundraiser in my talk; however, the recordings are not yet available, so I’ll share some select notes on this topic here:

  • Personally, I’ve always thought that, to be worth it, we should raise 200 thousand dollars per year, minimum (you’ll be able to hear the explanation for this belief of mine in the economic context I presented in my talk).
  • For this fundraiser, we aimed for a “modest” minimum target of 35k and an “optimistic” target of 100k. So, much less than 200k.
  • Early on after the campaign launch, we had to scale back on hopes of hitting the “optimistic” target and set 35k as the new “maximum” we could expect, as it became clear from the trend that we would not reach 100k.
  • Eventually, the fundraiser reached its plateau, at approximately 19K €, a little bit over half of our base target.

We had a flexible funding scheme, a great website and fundraising approach, we have the reputation and the skills to deliver, we had one of the best-run campaigns out there (we actually got praised on that)… and yet, it seems that was not enough. After four months spent preparing and curating our fundraiser, at one point we had to reassess our priorities and focus on “more urgent” things than full-time fundraising: improving code, earning a living, etc. Pushing further would have meant many more months of energy-draining marketing work which, as mentioned in the introduction of this post, was not feasible or productive for us at that point in time. Our friends at MediaGoblin certainly succeeded, in big part through their amazing focus and persistence (Chris Webber spent three months writing substantial motivational blog posts twice a week and applying for grants to achieve his goal. Think about it: fourteen blog articles!).

Okay so now you’re thinking, “But you still got a bit of money, so what have you guys done with that?”. We’ve accomplished some great QA/bugfixing work, just not as fast or as extensively as we’d like to. Pitivi 1.0 will happen but, short of seeing a large amount of donations, it will take more time to reach that goal (unless people step up with patches :).

What Mathieu & Thibault have been up to

For starters, they set up a continuous integration and multi-platform build system for quality assurance.

Then they worked on the GStreamer video mixer, basically doing a complete rework of our mixing stack, and made the beast thread-safe… this is supposed to fix a ton of deadlocks related to videomixing that were killing our user experience by causing frequent freezes. They are preparing a blog post specifically on this topic, but in the meantime you can see some gory details by looking at these commits they landed in GStreamer so far (more are on the way, pending review).

Then they pretty much rewrote all of GNonLin with a different, simpler design, and integrated it directly into GES under a new name: “NLE” (Non Linear Engine).

The only part that survived from GNonLin, as far as I know, is the tree data structure generation. So, with “NLE” in GES, deadlocks from GNonLin should be a thing of the past; seeking around should be much more reliable and not cause freezes like it used to. This is still a major chunk of code: it represents around six thousand lines of new code in GES. Work is ongoing in this branch, expected to be completed and merged sometime in October, so I’m waiting to see what comes out of it in practice.

This is in addition to crazy bugs like bug 736896 and our regular bug fixing operations. See also the Pitivi tracker bug for GTK3, introspection and GST1 bugs and the Pitivi tracker bug for Python3 port issues.

The way forward

Now that a big chunk of the hardcore backend work has been done, Thibault and Mathieu will be able to focus on Pitivi (the UI) again. Here is the rough plan for coming months:

  1. “Scenarios” in Pitivi: each action from the user would be serialized (saved) as GstValidateScenario actions, allowing us to easily reproduce issues and bugs that you encounter (see the illustrative sketch after this list).
  2. Go over the list of all reported Pitivi bugs and fix a crapton of them!
  3. At some point, we will call upon you to help us with extensive testing (including reporting the bugs and discussing with us to investigate them). We will then continue fixing bugs, release often and make sure we reach the quality you can expect of a 1.0 release.
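
To illustrate the idea behind point 1, here is a toy sketch of an action recorder. Illustrative only: the class, the action names and the exact file format are made up here; the real action vocabulary and serialization live in GstValidate.

class ScenarioRecorder:
    # Hypothetical recorder: log each user action as an
    # "action, key=value, ..." line so that a session can be replayed.
    def __init__(self, path):
        self.log = open(path, "w")

    def record(self, action, **params):
        fields = [action] + ["%s=%s" % (k, v) for k, v in sorted(params.items())]
        self.log.write(", ".join(fields) + "\n")
        self.log.flush()

rec = ScenarioRecorder("session.scenario")
rec.record("seek", start=10.0, flags="accurate+flush")
rec.record("pause", playback_time=15.0)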

More news to come

That’s it for today, but you can look forward to more status updates in the near future. The following topics shall be covered in separate blog posts:

  • Details on our thread-safe mixing elements reimplementation
  • The 0.94 release
  • A general status update on the outcome of our 2014 GSoC projects
  • “GNonLin is dead, long live NLE”
  • Closure of the fundraiser

by nekohayo at October 04, 2014 03:23 AM

October 03, 2014

Thomas Vander Stichele: I think it’s better to look odd than to look normal

(Thomas Vander Stichele)

In the fall of ’98 I had a thing for a girl I didn’t want to have a thing for. I had also just seen one of my favorite movies, Much Ado About Nothing (the original Brannagh movie, not the Josh Whedon one that I didn’t know about until recently and have yet to see).

I decided to exorcise my feelings into a good old-fashioned mix cd (well, I guess that wasn't old fashioned back in '98). I cut up the movie dialogue into pieces, and interspersed them in between a song selection aiming to match the flow of the movie lyric-wise and, in places, matching them sound-wise to the movie snippets too. It ended up being two cd's, and a bunch of my friends liked it as well so I think I ended up making about 30 copies of the thing.

Today I needed to recreate those two CD’s plus its original packaging. That means I had to actually buy CD-R’s (didn’t have any anymore after the move to the US), buy jewelcases (can you believe that I actually have actual boxes with actual empty jewelcases that I *kept* in storage in Belgium? These days if you want to buy them they’re a little harder to find than they used to be, even though I’m sure there must be landfills full of them all over the world), and go to a print shop to print the front and back covers.

Being the obsessive backupper that I am, it was easy to find the sound files back (actually, I took a morituri rip that I made at my best friend’s house, who has the CD’s, last time I was there – so that I would have a perfect .cue sheet that would stitch the tracks together). I knew I had the files for the fronts and backs somewhere as well, but they were a little harder to find because I couldn’t remember their names. But I trusted my OCD self that I had backups from fifteen years ago somewhere here with me in NY, and I started looking for files from the same timeframe, until I came across the files I was looking for hidden in a subdirectory.

But then when you find them, what do you do with .cdr CorelDraw files from 1998? I tried inkscape, which uses uniconvertor, which on my F-19 machine failed with a constructor with wrong arguments in Python, which seems like a silly bug. I rebuilt the F-21 version, which gets past that bug, but then doesn’t actually convert anything. I tried an online converter, and it only picked up on the images and none of the text.

So I went the illegal route – I downloaded CorelDraw 11 from the internet, installed it in wine (which was surprisingly easy, it just worked), and I could open the files. Except that it was missing fonts and so the layout was all wrong. Sigh. Hunt random font sites for the missing fonts, install them for wine, open again, rinse, repeat. Eventually the files opened with the right fonts, except that one of the titles was too big to fit on the CD inlay. Oh well, adjust them all manually, make it a little smaller, export to eps, load in gimp, adjust the page as it was perfectly measured for A4 printing but I’m in the US now and the US uses letter which is slightly different, export to pdf so I could go to any random print shop in New York and get it printed.

CD burnt, on to the print shop, fiddle with the printer as nobody in the store can figure out which tray number the tray is where they loaded the card stock paper, and it’s not like the driver on the windows machine knows either – I had to do 5 failed prints to different printers before we even knew which printer was the right one. Cut up the paper by hand with scissors (which I suck at), put it all together, and be on my way.

All this just to say that, while I can be as good about backups as I want to be to bring back to life something I did fifteen years ago, there is still a whole lot of real-world technology fails getting in the way, like outdated proprietary file formats, not having good interchange formats, missing fonts, paper sizes and general Imperial/metric nonsense, ages-old printer crap and just simple manual tasks, which we as humans will probably inflict upon ourselves for forever. I mean, I’d sure like to believe that in the future it will be as simple as pressing a button and getting this 15 year old CD project 3D-printed all at once, but experience has taught me that most likely I will be fiddling just as much with getting 2040’s 3D printer to work with 2025’s data files.

And so it is that I arrive just after 6 at Barnes and Noble in Tribeca, queue up in front of eight registers with only one open, buy a book, get a wristband, go to the back where Emma Thompson is reading from her Peter Rabbit book, in her perfectly English and genuinely funny way, queue after the reading, and hear her say “I think it’s better to look odd than to look normal” to the seven year old twin girls in front of me. I wholeheartedly agree with her. I hand her my copy to sign, give her my two cd’s and tell her what they are and say that I thought this was a good opportunity to give them to her, and she smiles and seems genuinely surprised and pleased.

I think my dad would be genuinely jealous at this point – he always seemed to appreciate seeing her on the screen, and after today I can’t say I blame him. I hope she enjoys the CDs, and if someone can recommend a good website where I can put these online for others to listen to, that would be great!


by Thomas at October 03, 2014 02:54 AM

October 02, 2014

Sebastian Dröge: GStreamer remote-controlled testing application for Android, iOS and more

(Sebastian Dröge)

Today I’ve released a little GStreamer testing application, mostly for iOS and Android. You can find it here: gst-launch-remote.

It starts a TCP server on port 9123 and accepts commands from there, while the main application is just a simple video widget and a play/pause button.

You can use it as follows (note that every OK or NOK is sent by the application to tell you whether a command succeeded or failed):

$ nc fancydevice 9123
videotestsrc ! autovideosink
OK
+PLAY
OK
+PAUSE
OK
+SEEK 10000
OK
audiotestsrc ! autoaudiosink
OK
+PLAY

Any GStreamer pipeline string is accepted here, but currently only a single video output is supported. Additionally, you can enable sending GStreamer debug output via UDP to some IP/port with:

+DEBUG bigworkstation:12345
OK

For now this enables GST_LEVEL_DEBUG as the debug level for everything. You can then listen for all the output with

$ nc -l -u 12345
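
For unattended test runs you don’t even need an interactive session; the same protocol can be scripted from the shell. A minimal sketch, assuming newline-terminated commands as in the session above, and an nc variant that keeps reading replies until the connection closes (some variants need -q or -N for that):

$ # send a pipeline, play for five seconds, then pause; the OK/NOK replies print as they arrive
$ { printf 'videotestsrc ! autovideosink\n'; sleep 1; printf '+PLAY\n'; sleep 5; printf '+PAUSE\n'; sleep 1; } | nc fancydevice 9123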

Future ideas

Maybe this is something that could be integrated with gst-validate to be able to run the test scenarios on mobile devices, and get decent test coverage for them too.

by slomo at October 02, 2014 02:57 PM

Christian Schaller: Fedora Workstation Progress Report (Wayland and more)

(Christian Schaller)

So I am writing this blog entry using the current development snapshot of the Fedora Workstation, with Wayland as my display server. It is an important milestone for the Fedora Workstation, for Wayland and for me personally. There are many things here I am very happy about. First of all, this is a major milestone for what in some sense was the first and biggest engineering effort we kicked off under the Fedora Workstation banner, meaning it was an effort we decided to put our weight behind, with our vision for the Fedora Workstation as the primary motivator for doing it. And it has been a big success in more ways than I expected. I think it is fair to say that the level of engagement and support from the wider community took me by surprise, and I want to state that if it wasn’t for all the incredible effort from the wider community pushing Wayland forward, we would not have been able to provide something of this quality so soon.
That Wayland now runs and works on non-Intel GPUs for this release, that XWayland is fully functional, and that libinput is as far along as it is, are all thanks to the wider community. More people have contributed than I can list, but I want to call out Adel Gadllah and Jonas Ådahl, who have contributed many crucial pieces to GNOME Shell, Wayland and libinput.

I would also like to say a special thank you to Jasper St. Pierre, because we would not be here today without his tireless effort on porting the GNOME Shell to Wayland. I think anyone who knows Jasper appreciates the amount of effort he puts in and the level of enthusiasm he brings to everything he does. Jasper recently transferred from Red Hat to Endless Mobile, and I am very happy that he will continue to contribute to both the GNOME Shell and Wayland as part of his job at Endless too, as he would otherwise be sorely missed both as a developer and as an individual.

Another person I want to call out at this point is of course Kristian Høgsberg, who created Wayland and got it to reach critical mass in terms of mindshare and functionality. Having been around Linux for a long time, I have seen efforts to replace the X window system come and go, so I know that achieving what Kristian has achieved here is not trivial at all. So a big thank you to Kristian for his incredible work and for the incredible persistence that has allowed Wayland to become a reality where so many other projects have failed.

Wayland in Fedora Workstation 21 is also an important milestone because it exemplifies the new development philosophy we are embarking on. Fedora has long been known as a Linux distribution where a lot of new pieces become available first. The problem is that this has also given Fedora a bit of a reputation for not being as dependable as some other distributions or operating systems, which has kept away a lot of people who I think would otherwise be inclined to use it.

So we want to keep being a place where you get access to new and exciting technologies first, but as you can see with the Wayland effort, we are now going to go the extra mile to make sure we offer these new technologies in a way that still allows you to use Fedora as your day-to-day working machine, without worrying that these new features will hinder your work. So we will keep Wayland available as a separate, non-default session until we feel very confident that our users will not be negatively impacted by the switch. That means we want to fix and polish up the last remaining bits and pieces, make sure that performance is top notch, make sure all input hardware works flawlessly, and work with NVidia and AMD to help them make their binary drivers available for Wayland, before we make this the new default.

A crucial value for us at Red Hat and for the Fedora community is working closely with our upstreams. That means we always aim to get the features we need or the bugfixes we want included in the upstream releases, which we then integrate into Fedora (and Red Hat Enterprise Linux). Working closely with the upstream communities enables us to achieve a lot more than we would be able to on our own. In preparation for Fedora Workstation 21 we have of course done a lot of work on improving the general Fedora desktop experience, which has meant a lot of work going into GNOME 3.14. And while most of our upstream contributions here have been about code, not all of it is code. A major part of creating a modern and polished desktop experience is making sure that the applications you run conform to a shared set of interface guidelines, both to bring a unique and polished look to the applications and to make using them easier, as things like keybindings or work patterns you learn with one application transfer over to the next. To help accelerate that process for the Fedora Workstation, we had Allan Day work with the GNOME community to create an updated set of Human Interface Guidelines for GNOME 3 and thus, implicitly, for the Fedora Workstation.

Another crucial improvement you will see in Fedora Workstation 21 is in software installation. There has been a range of things in Fedora in regards to software installation that were suboptimal. On the command line and library level there has been a piece of Fedora that I know a lot of people have disliked, many to such a strong degree that they have kept away from Fedora: namely Yum. Yum, for those who don’t know it, is the tool you use either directly or indirectly to install new software on a Fedora system. Yum used to be very slow, and while it has gotten a lot better over the years, it was still considered a bit of an eyesore by many. So Aleš Kozumplík and others have spent the last few years writing a new set of tools to do the low-level software handling, and I am happy to say that for Fedora Workstation 21 we will be using those tools to greatly improve the software installation and update experience. There is a new command-line tool called dnf that works with the same command-line parameters you know from yum, but completes its task much quicker than before.
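
In practice that means you should simply be able to type dnf where you used to type yum. A quick sketch of what that looks like (the package name is purely illustrative):

$ sudo dnf install gstreamer1-plugins-good  # same verb and arguments as 'yum install'
$ sudo dnf update                           # same as 'yum update'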

On the desktop side, Richard Hughes has been working on making the Software installer use the new libraries developed for dnf, called hawkey and libsolv, to provide you with a much smoother software installation experience in Fedora Workstation 21. So if you tried the preview we offered of the Software tool in Fedora 20, I think you will find Software to be a lot more responsive in Fedora Workstation 21.
Of course a good software installer is not just about how nice the user interface looks or how quickly it can perform an installation; it is also very much a product of the quality of your installation metadata. Richard Hughes has a blog entry outlining the great progress being made on providing more and improved metadata, like application descriptions and screenshots, for Fedora Workstation 21. Ryan Lerch has been working with Richard to greatly improve our coverage, which means the quality of the software listings in Fedora Workstation 21 should be much better than what you saw in Fedora 20. For more details and screenshots, Kalev Lember has a great writeup of the state of the Software installer in Fedora Workstation 21.

This also highlights one of the advantages of the new Fedora product model: with one clear desktop product we are targeting, we can define operating system standards for things like application metadata and apply them to the system as a whole. So for Fedora 22 we expect to make appdata metadata a mandatory part of application packaging for Fedora, ensuring that any desktop application packaged for Fedora is easily discoverable by our users. In the old ‘bucket of parts’ model these things would in practice not happen, as there was no clear target that everyone was expected to aim for.
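
To give an idea of what that metadata looks like, here is a rough sketch of a minimal appdata file as I understand the AppStream format – the application id, URL and exact element set here are placeholders and may differ in detail from what Fedora’s packaging guidelines end up requiring:

<?xml version="1.0" encoding="UTF-8"?>
<component type="desktop">
  <!-- the id ties the metadata to the application's .desktop file -->
  <id>myapp.desktop</id>
  <name>MyApp</name>
  <summary>One-line summary shown in the software listing</summary>
  <description>
    <p>Longer description shown on the application's own page.</p>
  </description>
  <screenshots>
    <screenshot type="default">
      <image>http://example.org/myapp-screenshot.png</image>
    </screenshot>
  </screenshots>
</component>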

There has also been a lot of general user interface polish work happening on the toolkit level, with a lot of work done by our UI designers to improve the default desktop theme, called Adwaita. And since we want people to run all kinds of applications in Fedora Workstation 21, we are not only doing this for GTK+; we also have Martin Briza working on bringing Adwaita to Qt for Fedora Workstation. We hope to get the Qt theme packaged soon, but for those interested in taking a look, the Adwaita Qt code can be found here. In Fedora Workstation 21 we hope to cover Qt4 applications using the standard Adwaita theme, with wider support planned for Fedora Workstation 22 to cover more Qt versions and also make sure we have full coverage for the Adwaita Dark variant and accessibility versions. There is a chance we will miss the Fedora 21 cutoff date with this theme, but hopefully we can then get it included during the Fedora Workstation 21 lifespan.

We also worked on improving the shell animations. Things like animations might seem unimportant, but they contribute greatly to the general feeling of polish in the system. The team worked hard on improving these for Fedora Workstation 21, so in GNOME 3.14 you will for instance see that the animations in the shell overview have been greatly improved.

Last but not least, I want to say that while I am very excited about what we have put together for Fedora Workstation 21, it is just the beginning. As the first release under the new 3-product strategy, a lot of time and effort has gone into re-jigging the whole Fedora development process to cater for having 3 different products instead of one, changing the way the Fedora community organizes itself, gets contributors on board and re-aligns with the new products, and refocusing our internal development teams at Red Hat to think about their development process and goals with these 3 new products in mind. So my expectation is that as we go towards Fedora Workstation 22 the pace of innovation and progress will only pick up. Great things are ahead, and I hope that once Fedora Workstation 21 is released, regardless of whether you are a long-time Fedora user, a lapsed former Fedora user or someone who has never tried Fedora before, you will be willing to give it a try and hopefully become as excited about it as we are.

by uraeus at October 02, 2014 10:51 AM