October 22, 2014

Christian Schaller: GStreamer Conference 2014 talks online

(Christian Schaller)

For those of you who, like me, missed this year's GStreamer Conference, the recorded talks are now available online thanks to Ubicast. Ubicast has been a tremendous partner for GStreamer over the years, making sure we have high-quality talk recordings online shortly after the conference ends. So be sure to check out this year's batch of great GStreamer talks.

Btw, I also did a minor release of Transmageddon today, which mostly includes a couple of bugfixes and a few less deprecated widgets :)

by uraeus at October 22, 2014 06:24 PM

October 21, 2014

Bastien Nocera: A GNOME Kernel wishlist

(Bastien Nocera) GNOME has long had relationships with Linux kernel development, in that we would have some developers do our bidding, helping us solve hard problems. Features like inotify, memfd and kdbus were all originally driven by the desktop.

I've posted a wishlist of kernel features we'd like to see implemented on the GNOME Wiki, and referenced it on the kernel mailing-list.

I hope it sparks healthy discussions about alternative (and possibly existing) features, allowing us to make instant progress.

by Bastien Nocera (noreply@blogger.com) at October 21, 2014 11:06 AM

October 20, 2014

GStreamer: gst-validate, gst-editing-services, gst-python and gnonlin 1.4.0 stable release

(GStreamer)

The GStreamer team is pleased to announce the new stable 1.4 release of gst-editing-services, gst-python, gnonlin, and GstValidate. The 1.4 release series is adding new features on top of the 1.0 and 1.2 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Check out the release notes for gst-editing-services, gnonlin, gst-python, gst-validate, and download tarballs for gst-editing-services, gnonlin, gst-python, and gst-validate.

October 20, 2014 11:47 AM

October 17, 2014

Andy Wingo: ffs ssl

(Andy Wingo)

I just set up SSL/TLS on my web site. Everything can be had via https://wingolog.org/, and things appear to work. However the process of transitioning even a simple web site to SSL is so clownshoes bad that it's amazing anyone ever does it. So here's an incomplete list of things that can go wrong when you set up TLS on a web site.

You search "how to set up https" on the Googs and click the first link. It takes you here which tells you how to use StartSSL, which generates the key in your browser. Whoops, your private key is now known to another server on this internet! Why do people even recommend this? It's the worst of the worst of Javascript crypto.

OK so you decide to pay for a certificate, assuming that will be better, and because who knows what's going on with StartSSL. You've heard of RapidSSL so you go to rapidssl.com. WTF their price is 49 dollars for a stupid certificate? Your domain name was only 10 dollars, and domain name resolution is an actual ongoing service, unlike certificate issuance that just happens one time. You can't believe it so you click through to the prices to see, and you get this:

Whatttttttttt

OK so I'm using Epiphany on Debian and I think that uses the system root CA list which is different from what Chrome or Firefox do but Jesus this is shaking my faith in the internet if I can't connect to an SSL certificate provider over SSL.

You remember hearing something on Twitter about cheaper certs, and oh ho ho, it's rapidsslonline.com, not just RapidSSL. WTF. OK. It turns out Geotrust and RapidSSL and Verisign are all owned by Symantec anyway. So you go and you pay. Paying is the first thing you have to do on rapidsslonline, before anything else happens. Welp, cross your fingers and take out your credit card, cause SSLanta Clause is coming to town.

Recall, distantly, that SSL has private keys and public keys. To create an SSL certificate you have to generate a key on your local machine, which is your private key. That key shouldn't leave your control -- that's why the DigitalOcean page is so bogus. The certification authority (CA) then needs to receive your public key and then return it signed. You don't know how to do this, because who does? So you Google and copy and paste command line snippets from a website. Whoops!

Hey neat it didn't delete your home directory, cool. Let's assume that your local machine isn't rooted and that your server isn't rooted and that your hosting provider isn't rooted, because that would invalidate everything. Oh what so the NSA and the five eyes have an ongoing program to root servers? Um, well, water under the bridge I guess. Let's make a key. You google "generate ssl key" and this is the first result.

# openssl genrsa -des3 -out foo.key 1024

Whoops, you just made a 1024-bit key! I don't know if those are even accepted by CAs any more. Happily if you leave off the 1024, it defaults to 2048 bits, which I guess is good.

Also you just made a key with a password on it (that's the -des3 part). This is eminently pointless. In order to use your key, your web server will need the decrypted key, which means it will need the password to the key. Adding a password does nothing for you. If you lose control of your private key but it was password-protected, you're still toast: the available encryption ciphers are meant to be fast, not hard to break. Any serious attacker will crack it directly. And if they have access to your private key in the first place, encrypted or not, you're probably toast already.
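Putting those two fixes together, the invocation you presumably wanted looks more like this (a sketch; foo.key is a placeholder name):

# openssl genrsa -out foo.key 2048

No -des3, so no pointless password, and an explicit 2048 so you aren't at the mercy of the default.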

OK. So let's say you make your key, and make what's called the "CSR" (certificate signing request), to ask for the cert.

# openssl req -new -key foo.key -out foo.csr

Now you're presented with a bunch of pointless-looking questions like your country code and your "organization". Seems pointless, right? Well now I have to live with this confidence-inspiring dialog, because I left off the organization:

Don't mess up, kids! But wait there's more. You send in your CSR, finally figure out how to receive mail for hostmaster@yourdomain.org because that's what "verification" means (not, god forbid, control of the actual web site), and you get back a certificate. Now the fun starts!

How are you actually going to serve SSL? The truly paranoid use an out-of-process SSL terminator. Seems legit except if you do that you lose any kind of indication about what IP is connecting to your HTTP server. You can use a more HTTP-oriented terminator like bud but then you have to mess with X-Forwarded-For headers and you only get them on the first request of a connection. You could just enable mod_ssl on your Apache, but that code is terrifying, and do you really want to be running Apache anyway?

In my case I ended up switching over to nginx, which has a startlingly underspecified configuration language, but for which the Debian defaults are actually not bad. So you uncomment that part of the configuration, cross your fingers, Google a bit to remind yourself how systemd works, and restart the web server. Haich Tee Tee Pee Ess ahoy! But did you remember to disable the NULL authentication method? How can you test it? What about the NULL encryption method? These are actual things that are configured into OpenSSL, and specified by standards. (What is the use of a secure communications standard that does not provide any guarantee worth speaking of?) So you google, copy and paste some inscrutable incantation into your config, turn them off. Great, now you are a dilettante tweaking your encryption parameters, I hope you feel like a fool because I sure do.

Except things are still broken if you allow RC4! So you better make sure you disable RC4, which incidentally is exactly the opposite of the advice that people were giving out three years ago.
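To make that concrete, the TLS corner of the nginx configuration might end up looking something like the following sketch. The paths and names are placeholders, and the cipher string just illustrates the shape of the thing; it is not a vetted policy.

server {
    listen 443 ssl;
    server_name www.wobsite.com;

    # certificate plus intermediate, concatenated -- see below
    ssl_certificate     /etc/ssl/certs/wobsite.chained.crt;
    ssl_certificate_key /etc/ssl/private/wobsite.key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;   # no SSLv3
    ssl_ciphers HIGH:!aNULL:!eNULL:!RC4;   # no NULL auth or encryption, no RC4
}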

OK, so you took your certificate that you got from the CA and your private key and mashed them into place and it seems the web browser works. Thing is though, the key that signs your certificate is possibly not in the actual root set of signing keys that browsers use to verify the key validity. If you put just your key on the web site without the "intermediate CA", then things probably work but browsers will make an additional request to get the intermediate CA's key, slowing down everything. So you have to concatenate the text files with your key and the one with the intermediate CA's key. They look the same, just a bunch of numbers, but don't get them in the wrong order because apparently the internet says that won't work!
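Mechanically it's a one-liner, something like the following (file names hypothetical; your certificate first, then the intermediate's):

# cat www.wobsite.com.crt intermediate-ca.crt > wobsite.chained.crt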

But don't put in too many keys either! In this image we have a cert for jsbin.com with one intermediate CA:

And here is the same but with a different root that signed the GeoTrust Global CA certificate. Apparently there was a time in which the GeoTrust cert hadn't been added to all of the root sets yet, and it might not hurt to include them all:

Thing is, the first one shows up "green" in Chrome (yay), but the second one shows problems ("outdated security settings" etc etc etc). Why? Because the link from Equifax to Geotrust uses a SHA-1 signature, and apparently that's not a good idea any more. Good times? (Poor Remy last night was doing some basic science on the internet to bring you these results.)

Or is Chrome denying you the green because it was RapidSSL that signed your certificate with SHA-1 and not SHA-256? It won't tell you! So you Google and apply snakeoil and beg your CA to reissue your cert, hopefully they don't charge for that, and eventually all is well. Chrome gives you the green.

Or does it? Probably not, if you're switching from a web site that is also available over HTTP. Probably you have some images or CSS or Javascript that's being loaded over HTTP. You fix your web site to have scheme-relative URLs (like //wingolog.org/ instead of http://wingolog.org/), and make sure that your software can deal with it all (I had to patch Guile :P). Update all the old blog posts! Edit all the HTMLs! And finally, green! You're golden!
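For the record, a scheme-relative URL is just the usual thing with the scheme lopped off, as in this hypothetical line; the browser fetches it over whichever scheme the enclosing page was loaded with:

<script src="//wingolog.org/js/foo.js"></script>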

Or not! Because if you left on SSLv3 support you're still broken! Also, TLSv1.0, which is actually greater than SSLv3 for no good reason, also has problems; and then TLS1.1 also has problems, so you better stick with just TLSv1.2. Except, except, older Android phones don't support TLSv1.2, and neither does the Googlebot, so you don't get the rankings boost you were going for in the first place. So you upgrade your phone because that's a thing you want to do with your evenings, and send snarky tweets into the ether about scumbag google wanting to promote HTTPS but not supporting the latest TLS version.

So finally, finally, you have a web site that offers HTTPS and HTTP access. You're good right? Except no! (Catching on to the pattern?) Because what happens is that people just type in web addresses to their URL bars like "google.com" and leave off the HTTP, because why type those stupid things. So you arrange for http://www.wobsite.com to redirect to https://www.wobsite.com. Except no! Because any network attacker can simply strip the redirection from the HTTP site.
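The redirect itself is the easy part; in nginx it's something like this sketch (hostname as above):

server {
    listen 80;
    server_name www.wobsite.com;
    return 301 https://www.wobsite.com$request_uri;
}

The problem is that the attacker in the middle simply never shows your redirect to the user.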

The "solution" for this is called HTTP Strict Transport Security, or HSTS. Once a visitor visits your HTTPS site, the server sends a response that tells the browser never to fetch HTTP from this site. Except that doesn't work the first time you go to a web site! So if you're Google, you friggin add your name to a static list in the browser. EXCEPT EVEN THEN watch out for the Delorean.
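The header itself is a one-liner; in nginx terms something like this, where the max-age (here a year) is your choice:

add_header Strict-Transport-Security "max-age=31536000";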

And what if instead they go to wobsite.com instead of the www.wobsite.com that you configured? Well, better enable HSTS for the whole site, but to do anything useful with such a web request you'll need a wildcard certificate to handle the multiple URLs, and those run like 150 bucks a year, for a one-bit change. Or, just get more single-domain certs and tack them onto your cert, using the precision tool cat, but don't do too many, because if you do you will overflow the initial congestion window of the TCP connection and you'll have to wait for an ACK on your certificate before you can actually exchange keys. Don't know what that means? Better look it up and be an expert, or your wobsite's going to be slow!

If your security goals are more modest, as they probably are, then you could get burned the other way: you could enable HSTS, something could go wrong with your site (an expired certificate perhaps), and then people couldn't access your site at all, even if they have no security needs, because HTTP is turned off.

Now you start to add secure features to your web app, safe with the idea you have SSL. But better not forget to mark your cookies as secure, otherwise they could be leaked in the clear, and better not forget that your website might also be served over HTTP. And better check up on when your cert expires, and better have a plan for embedded browsers that don't have useful feedback to the user about certificate status, and what about your CA's audit trail, and better stay on top of the new developments in security! Did you read it? Did you read it? Did you read it?
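The cookie flag, at least, is cheap: the response header just needs the extra attributes, as in this sketch with a placeholder session value.

Set-Cookie: session=d34db33f; Secure; HttpOnly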

It's a wonder anything works. Indeed I wonder if anything does.

by Andy Wingo at October 17, 2014 02:33 PM

October 12, 2014

GStreamer: GStreamer Conference 2014: Talks, Schedule and Speakers

(GStreamer)

This year's GStreamer Conference will take place on October 16-17 in Düsseldorf, Germany, alongside LinuxCon Europe, the Embedded Linux Conference Europe, and the Linux Plumbers Conference.

The conference schedule with full details about talks and speakers is now available on the conference website.

Topics covered include embedded systems, mobile platforms, optimisations, adaptive streaming, hardware-accelerated video decoding, testing and QA, OpenGL, editing, digital television support, latest codec developments, stereoscopic 3D video, and many others.

All talks will be recorded by Ubicast.

If you're not registered yet, it's not too late: you can still sign up. See the conference home page for details.

Many thanks go to our sponsors Centricular, Collabora, and Google without whom this conference would not have been possible in this form.

We hope to see you all in Düsseldorf next week!

October 12, 2014 01:30 PM

October 11, 2014

Bastien Nocera: And now for some hardware (Onda v975w)

(Bastien Nocera) Prodded by Adam Williamson's fedlet work, and by my inability to get an Android phone to display anything, I bought an x86 tablet.

At first, I was more interested in buying a brand-name one, such as the Dell Venue 8 Pro Adam has, or the Lenovo Miix 2 that Benjamin Tissoires doesn't seem to get enough time to hack on. But all those tablets are around 300€ at most retailers, and have a smaller 7- or 8-inch screen.

So I bought a "not exported out of China" tablet, the 10" Onda v975w. The prospect of getting a no-name tablet scared me a little. Would it be as "good" (read bad) as a PadMini or an Action Pad?


Vrrrroooom.


Well, the hardware's pretty decent, and feels rather solid. There's a small amount of light leakage on the side of the touchscreen, but not something too noticeable. I wish it had a button on the bezel to mimic the Windows button on some other tablets, but the edge gestures should replace it nicely.

The screen is pretty gorgeous and its high DPI triggers the eponymous mode in GNOME.

With the help of various folks (Larry Finger, and the aforementioned Benjamin and Adam), I got the tablet to a state where I could use it to replace my force-obsoleted iPad 1 to read comic books.

I've put up a wiki page with the status of hardware/kernel support. It doesn't contain all my notes just yet (sound is working, touchscreen will work very very soon, and various "basic" features are being worked on).

I'll be putting up the fixed-up Wi-Fi driver and more instructions about installation on the Wiki page.

And if you want to make the jump, the tablets are available at $150 plus postage from Aliexpress.

Update: On Google+ and in comments of this blog, it was pointed out that the seller on Aliexpress was trying to scam people. All my apologies, I just selected the cheapest from this website. I personally bought it on Amazon.fr using NewTec24 FR as the vendor.

by Bastien Nocera (noreply@blogger.com) at October 11, 2014 05:57 PM

October 10, 2014

Zeeshan Ali: Life update

(Zeeshan Ali)
Like many others on planet.gnome, it seems I also don't feel like posting much on my blog any more, since I post almost all major events of my life on social media (or SOME, as it's for some reason now known in Finland). To be honest, the thought usually doesn't even occur to me anymore. :( Well, anyway! Here is a brief summary of what's been up over the last many months:
  • Got divorced. Yeah, not nice at all but life goes on! At least I got to keep my lovely cat.

  • It's been almost a year (14 days short) since I moved to London. In a way it was good that I was in a new city at the time of the divorce, as it was an opportunity to start a new life. I made some cool new friends, mostly the GNOME gang here.

    London has its quirks but overall I'm pretty happy to be living here. One big issue is that most of my friends are in Finland, so I miss them very much. Hopefully, in time I'll make a lot more friends in London, and my friends from Finland will visit me too.

    The best thing about London is the weather! No, I'm not joking at all. Not only is it a big improvement compared to Helsinki, the rumours that "it's always raining in London" are greatly (I can't stress this word enough) exaggerated.
  • I got my eyes Z-LASIK'ed so no more glasses!

  • Started taking:

    • Driving lessons. I failed my first driving test today, but now that I know what I did wrong, I'm sure I won't repeat the same mistakes next time and will pass.
    • Helicopter flying lessons. Yes! I'm not joking. I grew up watching Airwolf and ever since then I've been fascinated by helicopters and wanted to fly them, but never got around to doing it. It's very expensive, as you'd imagine, so I'm only taking two lessons a month. At this pace, I should have my PPL(H) by the end of 2015.

      Turns out that I'm very good at one thing that most people find very challenging to master: Hovering. The rest isn't hard either in practice. Theory is the biggest challenge for me. Here is the video recording of the 15-minute trial lesson I started with.

October 10, 2014 06:09 PM

October 04, 2014

Jean-François Fortin Tam: An update from the Pitivi 2014 summer battlefront

Hello gentle readers! You may have been wondering what has been going on since the 0.93 release and the Pitivi fundraising campaign. There are a few reasons why we’ve been quiet on the blogging side this summer:

  • Mathieu and Thibault have been working hard to bring us towards “1.0 quality”, improving and stabilizing various parts of GStreamer to make the backend of Pitivi more reliable (more details on this further below). They preferred to write code rather than spend their time doing marketing/fundraising. This is understandable; it is a better use of our scarce specialized resources.
  • Personally, I have been juggling many obligations (my daily business, preparing for the conferences season, serving on the board of the GNOME Foundation, and Life in General), which left me with pretty much no time or energy for development or marketing-related activities on Pitivi, just enough to participate in some discussions and help a bit with administration/co-mentorship. I did not have time to research blogging material about what others were doing, hence the lack of status updates in recent times.

Now that I finally have a little bit of time on my hands, I will provide you with the overdue high-level status update from the trenches.

Summer Wars. That’s totally how coding happens in the real world.

GUADEC, status of the 2014 fundraiser

For the curious among you, my account of GUADEC 2014 is here. Among the multiple presentations I gave there, my main talk was about Pitivi. I touched upon the status of the fundraiser in my talk; however, the recordings are not yet available, so I’ll share some select notes on this topic here:

  • Personally, I’ve always thought that, to be worth it, we should raise 200 thousand dollars per year, minimum (you’ll be able to hear the explanation for this belief of mine in the economic context I presented in my talk).
  • For this fundraiser, we aimed for a “modest” minimum target of 35k and an “optimistic” target of 100k. So, much less than 200k.
  • Early on after the campaign launch, we had to scale back on hopes of hitting the “optimistic” target and set 35k as the new “maximum” we could expect, as it became clear from the trend that we would not reach 100k.
  • Eventually, the fundraiser reached its plateau, at approximately 19K €, a little bit over half of our base target.

We had a flexible funding scheme, a great website and fundraising approach, we have the reputation and the skills to deliver, we had one of the best-run campaigns out there (we actually got praised on that)… and yet, it seems that was not enough. *shrug* After four months spent preparing and curating our fundraiser, at one point we had to reassess our priorities and focus on “more urgent” things than full-time fundraising: improving code, earning a living, etc. Pushing further would have meant many more months of energy-draining marketing work which, as mentioned in the introduction of this post, was not feasible or productive for us at that point in time. Our friends at MediaGoblin certainly succeeded, in big part through their amazing focus and persistence (Chris Webber spent three months writing substantial motivational blog posts twice a week and applying for grants to achieve his goal. Think about it: fourteen blog articles!).

Okay so now you’re thinking, “But you still got a bit of money, so what have you guys done with that?”. We’ve accomplished some great QA/bugfixing work, just not as fast or as extensively as we’d like to. Pitivi 1.0 will happen but, short of seeing a large amount of donations, it will take more time to reach that goal (unless people step up with patches :).

What Mathieu & Thibault have been up to

For starters, they set up a continuous integration and multi-platform build system for quality assurance.

Then they worked on the GStreamer video mixer, basically doing a complete rework of our mixing stack, and made the beast thread-safe… this is supposed to fix a ton of deadlocks related to videomixing that were killing our user experience by causing frequent freezes. They are preparing a blog post specifically on this topic, but in the meantime you can see some gory details by looking at these commits they landed in GStreamer so far (more are on the way, pending review):

Then they pretty much rewrote all of GNonLin with a different, simpler design, and integrated it directly into GES under a new name: “NLE” (Non Linear Engine):


The only part that survived from GNonLin, as far as I know, is the tree data structure generation. So, with “NLE” in GES, deadlocks from GNonLin should be a thing of the past; seeking around should be much more reliable and not cause freezes like it used to. This is still a major chunk of code: it represents around six thousand lines of new code in GES. Work is ongoing in this branch, expected to be completed and merged sometime in October, so I’m waiting to see what comes out of it in practice.

This is in addition to crazy bugs like bug 736896 and our regular bug fixing operations. See also the Pitivi tracker bug for GTK3, introspection and GST1 bugs and the Pitivi tracker bug for Python3 port issues.

The way forward

Now that a big chunk of the hardcore backend work has been done, Thibault and Mathieu will be able to focus on Pitivi (the UI) again. Here is the rough plan for coming months:

  1. “Scenarios” in Pitivi: each action from the user would be serialized (saved) as GstValidateScenario actions, allowing us to easily reproduce issues and bugs that you encounter (see the sketch after this list).
  2. Go over the list of all reported Pitivi bugs and fix a crapton of them!
  3. At some point, we will call upon you to help us with extensive testing (including reporting the bugs and discussing with us to investigate them). We will then continue fixing bugs, release often and make sure we reach the quality you can expect of a 1.0 release.
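To make the first item concrete, a serialized scenario is a plain-text list of timed actions. Something like this hypothetical sketch (the exact action and field names should be checked against the GstValidate documentation):

description, duration=20.0
seek, playback-time=5.0, start=10.0, flags=accurate+flush
pause, playback-time=15.0

Replaying such a file would reproduce a user's session action by action.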

More news to come

That’s it for today, but you can look forward to more status updates in the near future. The following topics shall be covered in separate blog posts:

  • Details on our thread-safe mixing elements reimplementation
  • A general status update on the outcome of our 2014 GSoC projects
  • The 0.94 release
  • “GNonLin is dead, long live NLE”
  • Closure of the fundraiser

by nekohayo at October 04, 2014 03:23 AM

October 03, 2014

Thomas Vander Stichele: I think it’s better to look odd than to look normal

(Thomas Vander Stichele)

In the fall of ’98 I had a thing for a girl I didn’t want to have a thing for. I had also just seen one of my favorite movies, Much Ado About Nothing (the original Branagh movie, not the Joss Whedon one that I didn’t know about until recently and have yet to see).

I decided to exorcise my feelings into a good old-fashioned mix cd (well, I guess that wasn’t old-fashioned back in ’98). I cut up the movie dialogue into pieces, and interspersed them in between a song selection aiming to match the flow of the movie lyric-wise and, in places, matching them sound-wise too to the movie snippets. It ended up being two cd’s, and a bunch of my friends liked it as well so I think I ended up making about 30 copies of the thing.

Today I needed to recreate those two CD’s plus their original packaging. That means I had to actually buy CD-R’s (didn’t have any anymore after the move to the US), buy jewelcases (can you believe that I actually have actual boxes with actual empty jewelcases that I *kept* in storage in Belgium? These days if you want to buy them they’re a little harder to find than they used to be, even though I’m sure there must be landfills full of them all over the world), and go to a print shop to print the front and back covers.

Being the obsessive backupper that I am, it was easy to find the sound files again (actually, I took a morituri rip that I made at my best friend’s house, who has the CD’s, last time I was there – so that I would have a perfect .cue sheet that would stitch the tracks together). I knew I had the files for the fronts and backs somewhere as well, but they were a little harder to find because I couldn’t remember their names. But I trusted my OCD self that I had backups from fifteen years ago somewhere here with me in NY, and I started looking for files from the same timeframe, until I came across the files I was looking for hidden in a subdirectory.

But then when you find them, what do you do with .cdr CorelDraw files from 1998? I tried inkscape, which uses uniconvertor, which on my F-19 machine failed with a constructor with wrong arguments in Python, which seems like a silly bug. I rebuilt the F-21 version, which gets past that bug, but then doesn’t actually convert anything. I tried an online converter, and it only picked up on the images and none of the text.

So I went the illegal route – I downloaded CorelDraw 11 from the internet, installed it in wine (which was surprisingly easy, it just worked), and I could open the files. Except that it was missing fonts and so the layout was all wrong. Sigh. Hunt random font sites for the missing fonts, install them for wine, open again, rinse, repeat. Eventually the files opened with the right fonts, except that one of the titles was too big to fit on the CD inlay. Oh well, adjust them all manually, make it a little smaller, export to eps, load in gimp, adjust the page as it was perfectly measured for A4 printing but I’m in the US now and the US uses letter which is slightly different, export to pdf so I could go to any random print shop in New York and get it printed.

CD burnt, on to the print shop, fiddle with the printer as nobody in the store can figure out the number of the tray where they loaded the card stock paper, and it’s not like the driver on the windows machine knows either – I had to do 5 failed prints to different printers before we even knew which printer was the right one. Cut up the paper by hand with scissors (which I suck at), put it all together, and be on my way.

All this just to say that, while I can be as good about backups as I want to be to bring back to life something I did fifteen years ago, there is still a whole lot of real-world technology fails getting in the way, like outdated proprietary file formats, not having good interchange formats, missing fonts, paper sizes and general Imperial/metric nonsense, ages-old printer crap and just simple manual tasks, which we as humans will probably inflict upon ourselves forever. I mean, I’d sure like to believe that in the future it will be as simple as pressing a button and getting this 15 year old CD project 3D-printed all at once, but experience has taught me that most likely I will be fiddling just as much with getting 2040’s 3D printer to work with 2025’s data files.

And so it is that I arrive just after 6 at Barnes and Noble in Tribeca, queue up in front of eight registers with only one open, buy a book, get a wristband, go to the back where Emma Thompson is reading from her Peter Rabbit book, in her perfectly English and genuinely funny way, queue after the reading, and hear her say “I think it’s better to look odd than to look normal” to the seven year old twin girls in front of me. I wholeheartedly agree with her. I hand her my copy to sign, give her my two cd’s and tell her what they are and say that I thought this was a good opportunity to give them to her, and she smiles and seems genuinely surprised and pleased.

I think my dad would be genuinely jealous at this point – he always seemed to appreciate seeing her on the screen, and after today I can’t say I blame him. I hope she enjoys the CD’s, and if someone can recommend a good website where I can put these online for others to listen to, that would be great!


by Thomas at October 03, 2014 02:54 AM

October 02, 2014

Sebastian Dröge: GStreamer remote-controlled testing application for Android, iOS and more

(Sebastian Dröge)

Today I’ve released a little GStreamer testing application, mostly for iOS and Android. You can find it here: gst-launch-remote.

It starts a TCP server on port 9123 and accepts commands from there, while the main application shows a simple video widget and a play/pause button.

You can use it as follows (note that every OK or NOK reply comes from the application, telling you about the success or failure of a command):

$ nc fancydevice 9123
videotestsrc ! autovideosink
OK
+PLAY
OK
+PAUSE
OK
+SEEK 10000
OK
audiotestsrc ! autoaudiosink
OK
+PLAY

Every GStreamer pipeline string is accepted here, but currently only a single video output is supported. Additionally, you can enable sending GStreamer debug output via UDP to some IP/port with:

+DEBUG bigworkstation:12345
OK

This will for now enable GST_LEVEL_DEBUG as debug level for everything. And you can listen for all the output with

$ nc -l -u 12345

Future ideas

Maybe this is something that could be integrated with gst-validate to be able to run the test scenarios on mobile devices, and get decent test coverage for them too.

by slomo at October 02, 2014 02:57 PM

Christian Schaller: Fedora Workstation Progress Report (Wayland and more)

(Christian Schaller)

So I am writing this blog entry using the current development snapshot of the Fedora Workstation, with Wayland as my display server. It is an important milestone for the Fedora Workstation, for Wayland, and for me personally. There are many things here I am very happy about. First of all, this is a major milestone for what was in some sense the first and biggest engineering effort we kicked off under the Fedora Workstation banner, an effort we decided to put our weight behind, with the vision we have for the Fedora Workstation as the primary motivator. And it has been a big success in more ways than I expected: I think it is fair to say that the level of engagement and support from the wider community took me by surprise, and I want to state that if it wasn’t for all the incredible effort from the wider community pushing Wayland forward we would not have been able to provide something of this quality so soon.
The fact that Wayland now runs and works on non-Intel GPUs for this release, that XWayland is fully functional, that libinput is as far along as it is, are all thanks to the wider community. There are more people who have contributed than I can list, but I want to call out Adel Gadllah and Jonas Ådahl, who have contributed many crucial pieces to GNOME Shell, Wayland or libinput.

I would also like to say a special thank you to Jasper St. Pierre, because we would not be here today without his tireless effort on porting the GNOME Shell to Wayland. I think anyone who knows Jasper appreciates the amount of effort he puts in and the level of enthusiasm he brings to everything he does. Jasper recently transferred from Red Hat to Endless Mobile, and I am very happy that he will continue to contribute to both the GNOME Shell and Wayland as part of his job at Endless, as he would otherwise be sorely missed both as a developer and as an individual.

Another person I want to call out at this point is of course Kristian Høgsberg, who created Wayland and got it to reach critical mass in terms of mindshare and functionality. Having been around linux for a long time I have seen efforts at replacing the X window system come and go, so I know that achieving what Kristian has achieved here is not trivial at all. So a big thank you to Kristian for his incredible work and for his incredible level of persistence in allowing Wayland to become a reality where so many other projects have failed.

Wayland in Fedora Workstation 21 is also an important milestone as it exemplifies the new development philosophy we are embarking on. Fedora has for a long time been known to be a linux distribution where a lot of new pieces become available first. The problem here is that it has also given Fedora a bit of a reputation for not being as dependable as some other distributions or operating systems, which has kept away from Fedora a lot of people who I think would otherwise be inclined to use it.

So we want to keep being a place where you do get access to new and exciting technologies first, but as you see with the Wayland effort we are now going to go the extra mile to make sure we offer these new technologies in a way that allows you to still use Fedora as your day-to-day working machine without worrying that these new features will hinder your work. So we will keep Wayland available as a separate non-default session until we feel very confident that our users are not going to be negatively impacted by the switch. That means we want to fix and polish up the last remaining bits and pieces, make sure that performance is top notch and that all input hardware works flawlessly, and work with NVidia and AMD to help them make their binary drivers available for Wayland, before we make this the new default.

A crucial value for us at Red Hat and for the Fedora community is working closely with our upstreams. This means we always aim to work with our upstream communities to get the features we need or the bugfixes we want included in the upstream releases, which we then integrate into Fedora (and Red Hat Enterprise Linux). Working closely with the upstream communities enables us to achieve a lot more than we would be able to do on our own. In preparation for Fedora Workstation 21 we have of course done a lot of work on improving the general Fedora desktop experience, which has meant a lot of work has gone into GNOME 3.14. And while most of our upstream contributions here have been about code, not all of it is code. A major part of creating a modern and polished desktop experience is making sure that the applications you run conform to a shared set of interface guidelines, both to bring a unique and polished look to the applications, and also to make using them easier, as things like keybindings or work patterns you learn with one application will transfer over to the next. To help accelerate that process for the Fedora Workstation we had Allan Day work with the GNOME community to create an updated set of Human Interface Guidelines for GNOME 3 and thus implicitly for the Fedora Workstation.

Another crucial improvement that you will see in Fedora Workstation 21 is in software installation. A range of things around software installation in Fedora has been suboptimal. On the command-line and library level there has been a piece of Fedora that I know a lot of people have disliked, many to such a strong degree that they have kept away from Fedora: namely Yum. Yum, for those who don't know it, is the tool you use either directly or indirectly to install new software on a Fedora system. Yum used to be very slow, and while it has gotten a lot better over the years it was still considered a bit of an eyesore by many. So over the last few years Aleš Kozumplík and others have been writing a new set of tools to do the low-level software handling, and I am happy to say that for Fedora Workstation 21 we will be using those tools to greatly improve the software installation and update experience. There is a new command-line tool called dnf that works with the same command-line parameters you know from yum, but completes its task much quicker than before.
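For instance, the familiar verbs carry straight over; a hypothetical session (the package name is arbitrary):

$ sudo dnf install gimp
$ sudo dnf update
$ sudo dnf search video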

On the desktop side, Richard Hughes has been working on making the Software installer use the new libraries developed for dnf, called hawkey and libsolv, to provide you with a much smoother software installation experience in Fedora Workstation 21. So if you tried the preview we offered of the Software tool in Fedora 20, I think you will find Software to be a lot more responsive in Fedora Workstation 21.
Of course a good software installer is not just about how nice the user interface looks or how quickly it can perform an installation; it is also very much a product of the quality of your installation metadata. Richard Hughes has a blog entry outlining the great progress being made on providing more and improved metadata, like application descriptions and screenshots, for Fedora Workstation 21. Ryan Lerch has been working with Richard to greatly improve our coverage, which means the quality of the software listings in Fedora Workstation 21 should be greatly improved over what you saw in Fedora 20. For more details and screenshots, Kalev Lember has a great writeup of the state of the Software installer in Fedora Workstation 21.

This also highlights one of the advantages of the new Fedora product model: because we have one clear desktop product we are targeting, we can define operating system standards for things like application metadata and apply them to the system as a whole. So for Fedora 22 we expect to make appdata metadata a mandatory part of application packaging for Fedora, ensuring that any desktop application packaged for Fedora is easily discoverable by our users. In the old ‘bucket of parts’ model these things would in practice not happen, as there was no clear target that everyone was expected to aim for.

There has also been a lot of general user interface polish work happening, both on the toolkit level with a lot of work being done by our UI designers to improve the default desktop theme called Adwaita. And since we want people to run all kinds of applications in Fedora Workstation 21 we are not only doing this for GTK+, but we also have Martin Briza working on bringing Adwaita to Qt for Fedora Workstation. We hope to get the Qt theme packaged soon, but for those interested in taking a look the Adwaita Qt code can be found here. In Fedora Workstation 21 we hope to cover Qt4 applications using the standard Adwaita theme, with wider support planned for Fedora Workstation 22, to cover more Qt versions and also make sure we have full coverage for the Adwaita Dark variant and accessibility versions. There is a chance we will miss the Fedora 21 cutoff date with this theme, but hopefully we can then get it included during the Fedora Workstation 21 lifespan.

We also worked on improving the shell animations. Things like animations might seem unimportant, but they contribute greatly to the general feeling of polish in the system. The team worked hard on improving these for Fedora Workstation 21, so in GNOME 3.14 you will for instance see that the animations in the shell overview have been greatly improved.

Last but not least I want to say that while I am very excited about what we have put together for Fedora Workstation 21, it is just the beginning. Being the first release under the new 3 product strategy, a lot of time and effort has gone into re-jigging the whole Fedora development process to cater for having 3 different products instead of one, changing the way the Fedora community organizes itself, getting contributors on board and re-aligned with the new products, and refocusing our internal development teams at Red Hat to think about their development process and goals with contributing to these 3 new products in mind. So my expectation is that as we go towards Fedora Workstation 22 the pace of innovation and progress will only pick up. Great things are ahead, and I hope that once Fedora Workstation 21 is released, regardless of whether you are a long-time Fedora user, a lapsed former Fedora user, or someone who has never tried Fedora before, you will be willing to give it a try and hopefully become as excited about it as we are.

by uraeus at October 02, 2014 10:51 AM

September 30, 2014

Bastien Nocera: GTK+ widget templates now in Javascript

(Bastien Nocera) Let's get the features in early!

If you're working on a Javascript application for GNOME, you'll be interested to know that you can now write GTK+ widget templates in gjs.

Many thanks to Giovanni for writing the original patches. And now to a small example:

const Lang = imports.lang;   // for Lang.Class
const Gtk = imports.gi.Gtk;

const MyComplexGtkSubclass = new Lang.Class({
    Name: 'MyComplexGtkSubclass',
    Extends: Gtk.Grid,
    Template: 'resource:///org/gnome/myapp/widget.xml',
    Children: ['label-child'],

    _init: function(params) {
        this.parent(params);

        // look up the child declared in the template by its id
        this._internalLabel = this.get_template_child(MyComplexGtkSubclass,
                                                      'label-child');
    }
});
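For context, the template resource named above is an ordinary GtkBuilder file using a <template> element whose class attribute matches the Name of the subclass. A hypothetical widget.xml might look like:

<interface>
  <template class="MyComplexGtkSubclass" parent="GtkGrid">
    <child>
      <object class="GtkLabel" id="label-child">
        <property name="label">Initial label</property>
      </object>
    </child>
  </template>
</interface>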

And now you just need to create your widget:

let content = new MyComplexGtkSubclass();
content._internalLabel.set_label("My updated label");

You'll need gjs from git master to use this feature. And if you see anything that breaks, don't hesitate to file a bug against gjs in the GNOME Bugzilla.

by Bastien Nocera (noreply@blogger.com) at September 30, 2014 02:56 PM

September 29, 2014

GStreamer: GStreamer Core, Plugins and RTSP server 1.4.3 stable release

(GStreamer)

Note that this announcement includes everything from 1.4.2 too, which was never officially released as some critical bugs were found.

The GStreamer team is pleased to announce a bugfix release of the stable 1.4 release series. The 1.4 release series is adding new features on top of the 1.2 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework. The 1.4.x bugfix releases only contain important bugfixes compared to 1.4.0.

Binaries for Android, iOS, Mac OS X and Windows are provided by the GStreamer project for this release.

The 1.x series is a stable series targeted at end users. It is not API or ABI compatible with the 0.10.x series. It can, however, be installed in parallel with the 0.10.x series and will not affect an existing 0.10.x installation.

The stable 1.4.x release series is API and ABI compatible with 1.0.x and any other 1.x release series in the future. Compared to 1.0.x it contains some new features and more intrusive changes that were considered too risky as a bugfix.

Check out the release notes for GStreamer core (1.4.3), gst-plugins-base (1.4.3), gst-plugins-good (1.4.3), gst-plugins-ugly (1.4.3), gst-plugins-bad (1.4.3), gst-libav (1.4.3), or gst-rtsp-server (1.4.3), or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server.

Check the release announcement mail for details and the release notes above for a list of changes.

Also available are binaries for Android, iOS, Mac OS X and Windows.

September 29, 2014 05:30 PM

GStreamer: GstValidate 1.3.90 release candidate

(GStreamer)

The GStreamer team is pleased to announce the first release candidate of the stable 1.4 version of the GstValidate developer tool.

This release candidate will hopefully shortly be followed by the stable 1.4.0 release, if no major regressions or issues are detected and enough testing of the release candidate has happened. The new API that was added during the 1.3 release series is not expected to change anymore at this point.

Check out the release notes here, and download tarballs here.

September 29, 2014 01:00 PM

September 24, 2014

GStreamer: gst-editing-services, gst-python and gnonlin 1.3.90 release candidate

(GStreamer)

The GStreamer team is pleased to announce the first release candidate of the stable 1.4 release series of gst-editing-services, gst-python and gnonlin. The 1.4 release series is adding new features on top of the 1.0 and 1.2 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

This release candidate will hopefully shortly be followed by the stable 1.4.0 release, if no major regressions or issues are detected and enough testing of the release candidate has happened. The new API that was added during the 1.3 release series is not expected to change anymore at this point.

The stable 1.4 release series is API and ABI compatible with 1.0.x, 1.2.x and any other 1.x release series in the future. Compared to 1.2.x it contains some new features and more intrusive changes that were considered too risky as a bugfix.

Check out the release notes for gst-editing-services, gnonlin, gst-python, and download tarballs for gst-editing-services, gnonlin, gst-python.

September 24, 2014 01:00 PM

September 22, 2014

Jan Schmidt: Mysterious Parcel

(Jan Schmidt)

I received a package in the mail today!
Mysterious Package

Everything arrived all nicely packaged up in a hobby box and ready for assembly.
Opening the box

Lots of really interesting goodies in the box!
Out on the table

After a little while, I’ve got the first part together.
First part assembled

The rest will have to wait for another day. In the meantime, have fun guessing what it is, and enjoy this picture of a cake I baked on the weekend:
Strawberry Sponge Cake

See you later!

by thaytan at September 22, 2014 04:40 PM

Bastien Nocera: We'll celebrate the GNOME 3.14 release on Tuesday evening in Lyon

(Bastien Nocera) In French, for a change :)

On Tuesday evening, September 23rd, a few of us will meet around 18:30 at the Smoking Dog for a few drinks, then carry on with an Indian dinner near the St-Jean metro.

Feel free to sign up on the Wiki, whether you're a GNOME user, a developer, or simply a friend of free software.

See you Tuesday!

by Bastien Nocera (noreply@blogger.com) at September 22, 2014 09:00 AM

Bastien Nocera: Fresh software from the 3.14 menu

(Bastien Nocera) Here is a small recap of the GNOME 3.14 features I worked on. Some are already well publicised, through blogs:
And obviously loads of bug fixes, and patch reviews. And I do mean loads :)

To look forward to

If all goes according to plan, I'll be able to merge the aforementioned automatic rotation support into systemd/udev. The kernel API is pretty bad, which makes the user-space code look bad...

The first parts of ebooks support in gnome-documents have already been written, scheduled for 3.16.

And my favourites

Note: With links that will open up like a Christmas present when GNOME 3.14 is released.

There are a lot of big, new features in GNOME 3.14. The Adwaita rewrite made it possible to polish the theme greatly. The captive portal support is very useful; those of you who travel will enjoy it (I certainly have!).

But my favourite new feature has to be the gestures support in gnome-shell. I'll make good use of that :)

by Bastien Nocera (noreply@blogger.com) at September 22, 2014 12:21 AM

September 19, 2014

Sebastian Dröge: GStreamer with hardware video codecs on iOS

(Sebastian Dröge)

Update: GIT master of cerbero should compile fine with XCode 6 for x86/x86-64 (simulator) too now

In the last few days I spent some time on getting GStreamer to compile properly with the XCode 6 preview release (which as of today is available as a stable release), and on making sure everything still works with iOS 8. This should be the case now with GIT master of cerbero.

So much for the boring part. But more important, iOS 8 finally makes the VideoToolbox API available as public API. This allows us to use the hardware video decoders and encoders directly, and opens lots of new possibilities for GStreamer usage on iOS. Before iOS 8 it was only possible to directly decode local files with the hardware decoders via the AVAssetReader API, which of course only allows rather constrained GStreamer usage.

We already had elements (for OS X) using the VideoToolbox API in the applemedia plugin in gst-plugins-bad, so I tried making them work on iOS too. This required quite a few changes, and in the end I rewrote big parts of the encoder element (which should also make it work better on OS X btw). But with GIT master of GStreamer you can now directly use the hardware codecs on iOS 8 by using the vtdec decoder element or the vtenc_h264 encoder element. There’s still a lot of potential for improvements but it’s working.
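As a quick smoke test on OS X (on iOS you would build the pipeline from your application rather than from a shell), a hardware-decoding playback pipeline might look like this sketch, with a placeholder file name:

gst-launch-1.0 filesrc location=movie.mp4 ! qtdemux ! h264parse ! vtdec ! autovideosink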

Notes

If you compile everything from GIT master, it should still be possible to use the same application binary with iOS 7 and earlier versions. Just make sure to use “-weak_framework VideoToolbox” for linking your application instead of “-framework VideoToolbox”. On earlier versions you just won’t be able to use the hardware codecs.

Currently compiling cerbero GIT master for iOS x86 and x86-64 will fail in libffi. Only the ARM variants work. So don't build with "./cerbero-uninstalled -c config/cross-ios-universal.cbc" but with "config/cross-ios-arm7.cbc". And if you need to run bootstrap first, run it from the 1.4 branch for now and then switch back to the master branch. I'm working on fixing that next week.

by slomo at September 19, 2014 12:08 PM

September 17, 2014

Bastien Nocera: A follow-up to yesterday's Videos news for 3.14

(Bastien Nocera)
The more astute (or Wayland testing) amongst you will recognise mutter running a nested Wayland compositor. Yes, it means that Videos will work natively under Wayland.

Got to love indie films

It's not perfect, as I'm still seeing hangs within the Intel driver for a number of operations, but basic playback works, and the playback is actually within the same window and correctly hidden when in the overview ;)

by Bastien Nocera (noreply@blogger.com) at September 17, 2014 08:24 PM

September 16, 2014

Bastien Nocera: Videos 3.14 features

(Bastien Nocera) We've added a few, but nonetheless interesting, features to Videos in GNOME 3.14.

Auto-rotation of videos

If you capture videos in portrait orientation on your phone, we are now able to rotate them automatically in the movie player, as well as in the thumbnails.

Better streaming

You can now seek anywhere inside streamed videos, even if we didn't download all the way to that point. That's particularly useful for long videos, or slow servers (or a combination of both).

Thumbnails generation

Finally, videos without thumbnails in your videos directory will have thumbnails automatically generated, without having to browse them in Files. This makes the first experience of videos more pleasing to the eye.

What's next?

We'll work on integrating Victor Toso's work on grilo plugins, to show information about the film or TV series on your computer, such as grouping episodes of a series together, showing genres, covers and synopsis for films.

With a bit of luck, we should also be able to provide you with more video content as well, through partners.

by Bastien Nocera (noreply@blogger.com) at September 16, 2014 04:39 PM

September 02, 2014

Andy Wingo: high-performance packet filtering with pflua

(Andy Wingo)

Greets! I'm delighted to be able to announce the release of Pflua, a high-performance packet filtering toolkit written in Lua.

Pflua implements the well-known libpcap packet filtering language, which we call pflang for short.

Unlike other packet filtering toolkits, which tend to use the libpcap library to compile pflang expressions to bytecode to be run by the kernel, Pflua is a completely new implementation of pflang.

why lua?

At this point, regular readers are asking themselves why this Schemer is hacking on a Lua project. The truth is that I've always been looking for an excuse to play with the LuaJIT high-performance Lua implementation.

LuaJIT is a tracing compiler, which is different from other JIT systems I have worked on in the past. Among other characteristics, tracing compilers only emit machine code for branches that are taken at run-time. Tracing seems a particularly appropriate strategy for the packet filtering use case, as you end up with linear machine code that reflects the shape of actual network traffic. This has the potential to be much faster than anything static compilation techniques can produce.

The other reason for using Lua was because it was an excuse to hack with Luke Gorrie, who for the past couple years has been building the Snabb Switch network appliance toolkit, also written in Lua. A common deployment environment for Snabb is within the host virtual machine of a virtualized server, with Snabb having CPU affinity and complete control over a high-performance 10Gbit NIC, which it then routes to guest VMs. The administrator of such an environment might want to apply filters on the kinds of traffic passing into and out of the guests. To this end, we plan on integrating Pflua into Snabb so as to provide a pleasant, expressive, high-performance filtering facility.

Given its high performance, it is also reasonable to deploy Pflua on gateway routers and load-balancers, within virtualized networking appliances.

implementation

Pflua compiles pflang expressions to Lua source code, which are then optimized at run-time to native machine code.
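Calling it from Lua looks roughly like the following; this is a sketch from memory of the project README, so take pf.compile_filter as an assumption to check against the actual docs:

local pf = require("pf")

-- compile a pflang expression into a Lua predicate; LuaJIT then
-- traces the hot path down to machine code at run-time
local match = pf.compile_filter("ip or ip6")

-- P is an FFI pointer to the packet bytes, len its length in bytes
if match(P, len) then
   print("packet matches")
end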

There are actually two compilation pipelines in Pflua. The main one is fairly traditional. First, a custom parser produces a high-level AST of a pflang filter expression. This AST is lowered to a primitive AST, with a limited set of operators and ways in which they can be combined. This representation is then exhaustively optimized, folding constants and tests, inferring ranges of expressions and packet offset values, hoisting assertions that post-dominate success continuations, etc. Finally, we residualize Lua source code, performing common subexpression elimination as we go.

For example, if we compile the simple Pflang expression ip or ip6 with the default compilation pipeline, we get the Lua source code:

return function(P,length)
   if not (length >= 14) then return false end
   do
      local v1 = ffi.cast("uint16_t*", P+12)[0]
      if v1 == 8 then return true end
      do
         do return v1 == 56710 end
      end
   end
end

The other compilation pipeline starts with bytecode for the Berkeley packet filter VM. Pflua can load up the libpcap library and use it to compile a pflang expression to BPF. In any case, whether you start from raw BPF or from a pflang expression, the BPF is compiled directly to Lua source code, which LuaJIT can gnaw on as it pleases. Compiling ip or ip6 with this pipeline results in the following Lua code:

return function (P, length)
   local A = 0
   if 14 > length then return 0 end
   A = bit.bor(bit.lshift(P[12], 8), P[12+1])
   if (A==2048) then goto L2 end
   if not (A==34525) then goto L3 end
   ::L2::
   do return 65535 end
   ::L3::
   do return 0 end
   error("end of bpf")
end

We like the independence and optimization capabilities afforded by the native pflang pipeline. Pflua can hoist and eliminate bounds checks, whereas BPF is obligated to check that every packet access is valid. Also, Pflua can work on data in network byte order, whereas BPF must convert to host byte order. Both of these restrictions apply not only to Pflua's BPF pipeline, but also to all other implementations that use BPF (for example the interpreter in libpcap, as well as the JIT compilers in the BSD and Linux kernels).

However, though Pflua does a good job of implementing pflang, it is inevitable that there will be bugs or differences of implementation relative to what libpcap does. For that reason, the libpcap-to-bytecode pipeline can be a useful alternative in some cases.

performance

When Pflua hits the sweet spots of the LuaJIT compiler, performance screams.


(full image, analysis)

This synthetic benchmark runs over a packet capture of a ping flood between two machines and compares the following pflang implementations:

  1. libpcap: The user-space BPF interpreter from libpcap

  2. linux-bpf: The old Linux kernel-space BPF compiler from 2011. We have adapted this library to work as a loadable user-space module (source)

  3. linux-ebpf: The new Linux kernel-space BPF compiler from 2014, also adapted to user-space (source)

  4. bpf-lua: BPF bytecodes, cross-compiled to Lua by Pflua.

  5. pflua: Pflang compiled directly to Lua by Pflua.

To benchmark a pflang implementation, we use the implementation to run a set of pflang expressions over saved packet captures. The result is a corresponding set of benchmark scores measured in millions of packets per second (MPPS). The first set of results is thrown away as a warmup. After warmup, the run is repeated 50 times within the same process to get multiple result sets. Each run checks that the filter matches the expected number of packets, to verify that each implementation does the same thing, and also to ensure that the loop is not dead.

In all cases the same Lua program is used to drive the benchmark. We have tested a native C loop when driving libpcap and gotten similar results, so we consider that the LuaJIT interface to C is not a performance bottleneck. See the pflua-bench project for more on the benchmarking procedure and a more detailed analysis.
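
For concreteness, here is a sketch of what such a driver loop might look like. This is not the actual pflua-bench code: pf.compile_filter is Pflua's entry point as I recall it, while read_capture and the expected match count are hypothetical stand-ins.

local pf = require("pf")

-- Compile a pflang expression to a Lua predicate over (packet pointer, length).
local filter = pf.compile_filter("tcp port 80")

-- Hypothetical helper: load a capture as an array of { ptr = ..., len = ... }
-- records; pflua-bench has its own loader.
local packets = read_capture("capture.pcap")
local expected = 1234  -- known match count for this capture (hypothetical)

local function run_once()
   local matched = 0
   for i = 1, #packets do
      local p = packets[i]
      if filter(p.ptr, p.len) then matched = matched + 1 end
   end
   return matched
end

run_once()  -- warmup; the first result set is thrown away

for run = 1, 50 do
   local start = os.clock()
   local matched = run_once()
   local elapsed = os.clock() - start
   -- Checking the match count verifies that the implementations agree
   -- and ensures the loop is not dead.
   assert(matched == expected)
   print(("%.2f MPPS"):format(#packets / elapsed / 1e6))
end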

The graph above shows that Pflua can stream in packets from memory and run some simple pflang filters over them at close to the memory bandwidth on this machine (100 Gbit/s). Because all of the filters are actually faster than the accept-all case, probably due to the filtering work causing prefetching, we actually don't know how fast the filters themselves can run. In any case, in this ideal situation, we're running at a handful of nanoseconds per packet. Good times!


(full image, analysis)

It's impossible to run truly real-world tests right now, especially since we're running over packet captures and not within a network switch. However, we can get more realistic. In the above test, we run a few filters over a packet capture from wingolog.org, which mostly operates as a web server. Here we see again that Pflua beats all of the competition. Oddly, the new Linux JIT appears to fare marginally worse than the old one. I don't know why that would be.

Sadly, though, the last tests aren't running at that amazing flat-out speed we were seeing before. I spent days figuring out why that is, and that's part of the subject of my last section here.

on lua, on luajit

I implement programming languages for a living. That doesn't mean I know everything there is to know about everything, or that everything I think I know is actually true -- in particular, I was quite ignorant about trace compilers, as I had never worked with one, and I hardly knew anything about Lua at all. With all of those caveats, here are some ignorant first impressions of Lua and LuaJIT.

LuaJIT has a ridiculously fast startup time. It also compiles really quickly: under a minute. Neither of these should be important but they feel important. Of course, LuaJIT is not written in Lua, so it doesn't have the bootstrap challenges that Guile has; but still, a fast compilation is refreshing.

LuaJIT's FFI is great. Five stars, would program again.
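
To give a flavor of why (a toy of mine, not Pflua code): the FFI lets you allocate a raw buffer and reinterpret bytes at an offset without copying, which is exactly the kind of access the residualized filters above perform.

local ffi = require("ffi")

-- A 14-byte fake Ethernet header with ethertype 0x0800 (IPv4) at offset 12.
local P = ffi.new("uint8_t[14]")
P[12], P[13] = 0x08, 0x00

-- Reinterpret the two bytes at offset 12 as a uint16_t, no copying involved,
-- just like the v1 load in the first listing above.
local v = ffi.cast("uint16_t*", P + 12)[0]
print(v == 8)  --> true on a little-endian machine: bytes 08 00 read natively as 0x0008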

As a compilation target, Lua is OK. On the plus side, it has goto and efficient bit operations over 32-bit numbers. However, and this is a huge downer, the result range of bit operations is the signed int32 range, not the unsigned range. This means that bit.band(0xffffffff, x) might be negative. No one in the history of programming has ever wanted this. There are sensible meanings for negative results to bit operations, but only if an argument was negative. Grr. Otherwise, Lua shares the same concerns as other languages whose numbers are defined as 64-bit doubles.
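
Here is a small demonstration of the gotcha, along with one possible workaround; the u32 helper is just an illustration, not anything from Pflua.

local bit = require("bit")  -- LuaJIT's bit operations library

-- Results land in the signed int32 range: this prints -1, not 4294967295.
print(bit.band(0xffffffff, 0xffffffff))

-- One workaround: shift negative results back into the unsigned range.
local function u32(x)
   return x < 0 and x + 2^32 or x
end
print(u32(bit.band(0xffffffff, 0xffffffff)))  --> 4294967295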

Sometimes people get upset that Lua starts its indexes (in "arrays" or strings) with 1 instead of 0. It's foreign to me, so it's sometimes a challenge, but it can work as well as anything else. The problem comes in when working with the LuaJIT FFI, which starts indexes with 0, leading me to make errors as I forget which kind of object I am working on.
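
A trivial illustration of the mismatch:

local ffi = require("ffi")

local s = "abc"
print(s:sub(1, 1))          --> a   (Lua strings index from 1)

local buf = ffi.new("uint8_t[3]", {0x61, 0x62, 0x63})
print(string.char(buf[0]))  --> a   (FFI arrays index from 0)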

As a language to implement compilers, Lua desperately misses a pattern matching facility. Otherwise, a number of small gripes but no big ones; tables and closures abound, which leads to relatively terse code.

Finally, how well does trace compilation work for this task? I offer the following graph.


(full image, analysis)

Here the tests are paired. The first test of a pair, for example the leftmost portrange 0-6000, will match most packets. The second test of a pair, for example the second-from-the-left portrange 0-5, will reject all packets. The generated Lua code will be very similar, except for some constants being different. See portrange-0-6000.md for an example.

The Pflua performance of these filters is very different: the one that matches is slower than the one that doesn't, even though in most cases the non-matching filter will have to do more work. For example, a non-matching filter probably checks both src and dst ports, whereas a successful one might not need to check the dst.

It hurts to see Pflua's performance be less than the Linux JIT compilers, and even less than libpcap at times. I scratched my head for a long time about this. The Lua code is fine, and actually looks much like the BPF code. I had taken a look at the generated assembly code for previous traces and it looked fine -- some things that were not as good as they should be (e.g. a fair bit of conversions between integers and doubles, where these traces have no doubles), but things were OK. What changed?

Well. I captured the traces for portrange 0-6000 to a file, and dove in. Trace 66 contains the inner loop. It's interesting to see that there are a lot of dynamic checks at the beginning of the trace, though the loop itself is not bad (scroll down to see the word LOOP:), apart from the double conversions I mentioned before.

It seems that trace 66 was captured for a packet whose src port was within range. Later, we end up compiling a second trace if the src port check fails: trace 67. The trace starts off with an absurd number of loads and dynamic checks -- to a similar degree as trace 66, even though trace 66 dominates trace 67. It seems that there is a big penalty for transferring from one trace to another, even though they are both compiled.

Finally, once trace 67 is done -- and recall that all it has to do is check the destination port, and then update the counters from the inner loop -- it jumps back to the top of trace 66 instead of the top of the loop, repeating all of the dynamic checks in trace 66! I can only think this is a current deficiency of LuaJIT, and not of trace compilation in general, although the amount of state transfer points to a lack of global analysis that you would get in a method JIT. I'm sure that values are being transferred that are actually dead.

This explains the good performance for the match-nothing cases: the first trace that gets compiled residualizes the loop expecting that all tests fail, and so only matching cases or variations incur the trace transfer-and-re-loop cost.

It could be that the Lua code that Pflua residualizes is in some way not idiomatic or not performant; tips in that regard are appreciated.

conclusion

I was going to pass some possible slogans by our marketing department, but we don't really have one, so I pass them on to you and you can tell me what you think:

  • "Pflua: A Totally Adequate Pflang Implementation"

  • "Pflua: Sometimes Amazing Performance!!!!1!!"

  • "Pflua: Organic Artisanal Network Packet Filtering"

Pflua was written by Igalians Diego Pino, Javier Muñoz, and myself for Snabb Gmbh, fine purveyors of high-performance networking solutions. If you are interested in getting Pflua in a Snabb context, we'd be happy to talk; drop a note to the snabb-devel forum. For Pflua in other contexts, file an issue or drop me a mail at wingo@igalia.com. Happy hackings with Pflua, the totally adequate pflang implementation!

by Andy Wingo at September 02, 2014 10:15 AM

September 01, 2014

Thiago Santos: Adaptive Demuxer baseclass

If you haven't read it yet, Sebastian has a good overview of adaptive streaming support (client-side) in GStreamer: https://coaxion.net/blog/2014/05/http-adaptive-streaming-with-gstreamer/

Currently, GStreamer works with all 3 adaptive formats out there: HLS with hlsdemux, SmoothStreaming with mssdemux and DASH with dashdemux. And while it all works quite well for the most common scenarios, all 3 elements are very similar and share a lot of code. Large portions of code were actually copied and pasted from one to another while they were being developed and in the early stages of stabilization. At the moment it is common to have to fix the same issue or implement the same feature once, and then copy it over to the other elements. This is not nice.

The Solution

As most of the code is actually the same, the obvious solution is to write a base class: GstAdaptiveDemux. It will handle the common logic that is currently copied across all 3 elements:

  • Receive the manifest from upstream and merge it into a single buffer
  • Start a thread for each stream available in the manifest and create the source element that will fetch the fragments and push downstream
  • Calculate the download rate of fragments to select the best bitrate for the context
  • Fragment download retry attempts
  • Disabling not-linked streams to save bandwidth (multi-audio media)
  • Standard query and event handling for streams
  • Manifest updates for live streams
  • Thread locking

On the other hand, the subclass is responsible for parsing the received manifest and maintaining the data structures for the streams. Most of the implementation will focus on extracting the fragments' URLs from the manifest and providing the metadata for the streams (duration, timestamps, formats).

As there don't seem to be any other adaptive formats out there, this baseclass is going to be private API. This actually gives us the advantage of not needing to stabilize it before merging, as we can change the API if needed without worrying about API/ABI breaks.

Where is it?

The latest implementation can be found at http://cgit.freedesktop.org/~thiagoss/gst-plugins-bad/log/?h=adaptivedemux until it is merged upstream.

What's keeping it from upstream?

More testing. Unfortunately I don't have a full setup that would allow me to test all scenarios. Live streams are the hardest to test at the moment and likely where regressions are still to be discovered. A bug has already been filed to start the merge discussion: https://bugzilla.gnome.org/show_bug.cgi?id=735848
If you've been using adaptive formats with GStreamer, please give it a go and report regressions/issues at the bug. Thanks!

by Thiago Santos at September 01, 2014 09:23 PM

Christian Schaller: Leaving Brno

(Christian Schaller)

So two years ago my family and I moved to Brno in the Czech Republic due to me starting a new job at Red Hat. It has been two roller coaster years with a lot of changes happening both inside Red Hat and in the world that the Linux desktop operates in. During those years my wife and I have gotten to love Brno, which both of us find a bit surprising as we were both quite skeptical of the city at the outset.

I think having grown up in Western Europe during the Cold War I had some preconceptions about what life was like in the former Eastern Europe, and Brno specifically is struggling a bit with being the second city of the Czech Republic after Prague, due to Prague so often being hailed internationally as a beautiful and exciting city.

But I think during these two years Brno has proven itself to us as a place that is great to live in, especially if you have a little child. Brno has a lot of beautiful outdoor areas which are great for hiking or relaxing; it is packed full of these children's cafes where you can take your kid to play while you sit down and have a coffee or a tea; and it has a vibrant expat community, affordable housing, a good range of restaurants, and short distances to major cities like Vienna, Prague and Budapest. There are also lots of old castles and towns to explore in the vicinity; I think Telc has to be one of our top favorites in that regard. And it has very little crime: my wife has been telling her friends how Brno is the first city she has ever lived in where she feels that as a woman she can walk through the city in the evening or at night and feel safe.

But that said, the time has come for us to move on. Due to one of those changes inside Red Hat that I mentioned, I am being moved to our US Engineering office in Westford, Massachusetts. For those not familiar with Westford, it is close to a city you probably do know: Boston.

So tomorrow the moving company will arrive at our flat here in Brno and pack up everything for the transport to the US. The furniture will take some time to arrive there, so while our stuff is sailing across the ocean we will live with my family in Norway, and I will work out of the Red Hat office in downtown Oslo. By mid-October I expect us to be fully set up in the Boston area, although we are heading over there next week for a final house hunting trip so that the furniture has a place to arrive to :)

So goodbye to Brno for now, and looking forward to seeing new and old friends in Boston!

by uraeus at September 01, 2014 12:53 PM

August 29, 2014

Edward Hervey: Wow, 7 years….

(Edward Hervey)

Originally posted to Collabora co-workers:

7 years since starting the Collabora Multimedia adventure,
7 years of challenges, struggles, and proving we could tackle them
7 years of success, pushing FOSS in more and more areas (I can still hear Christian say “de facto” !)
7 years of friendship, jokes, rants,
7 years of being challenged to outperform oneself,
7 years of realizing you were working with the smartest and brightest engineers out there,
7 years of pushing the wtf-meter all the way to 11, yet translating that in a politically correct fashion to the clients
7 years of life …
7 years … that will never be forgotten, thanks to all of you

It’s never easy … but it’s time for me to take a long overdue break, see what other exciting things life has to offer, and start a new chapter.

So today is my last day at Collabora. I’ve decided that after 17 years of non-stop study and work (i.e. since I last took more than 2 weeks vacation in a row), it was time to take a break.

What's next? Tackling that insane todo-list one compiles over time but never gets to tackle :). Some hacking on GStreamer (obviously), some other life-related stuff, traveling, visiting friends, exploring new technologies and fields I haven't had time to look deeper into until now, maybe some part-time teaching, writing more articles and blogposts, taking on some freelance work here and there, … But essentially, being in full control of what I'm doing for the next 6-12 months.

Who knows what will happen. It’s both scary … and tremendously exciting :)

PS 1: While my position at Collabora as Multimedia Domain Lead has already been taken over by the insane(ly amazing) Olivier Crete (“tester” from GStreamer fame), Collabora is looking for more Multimedia engineers. If you’re up for the challenge, contact them :)

PS 2: wtf-meter: http://www.osnews.com/story/19266/WTFs_m

PS 3: My non-Collabora email address is <my nickname>@<my nickname> dot com

by Edward Hervey at August 29, 2014 05:09 AM

August 28, 2014

Edward Hervey: GStreamer continuous testing (Part 1)

(Edward Hervey)

History so far

For the past 6-9 months, as part of some of the tasks I’ve been handling at Collabora, I’ve been working on setting up a continuous build and testing system for GStreamer. For those who’ve been following GStreamer for long enough, you might remember we had a buildbot instance back around 2005-2006, which continuously built and ran checks on every commit. And when it failed, it would also notify the developers on IRC (in more or less polite terms) that they’d broken the build.

The result was that master (sorry, I mean HEAD, we were using CVS back then) was guaranteed to always be in a buildable state and tests always succeeded. Great: no regressions, no surprises.

At some point in time (around 2007, I think?) the buildbot was no longer used/maintained… And eventually subtle issues crept in: you were no longer guaranteed that checkouts would always compile, tests eventually broke, you'd need to track down what introduced a regression (git bisect makes that easier, but avoiding regressions in the first place is even better), etc…

What to test

Fast-forward to 2013: after talking so much about it, it was time to put such a system back in place. Quite a few things have changed since then:

  • There's a lot more code. In 2005, when 0.10 was released, the GStreamer project was around 400kLOC. We're now around 1.5MLOC! And I'm not even taking into account all the dependency code we use in cerbero, the system for building binary SDK releases.
  • There are more usages that we didn't have back then, and new modules (rtsp-server, editing-services and orc are now under the GStreamer project umbrella, …)
  • We provide binary releases for Windows, MacOSX, iOS, Android, …

The problems to tackle were “What do we test? How do we spot regressions? How do we make it as useful as possible to developers?”.

In order for a CI system to be useful, you want to keep the signal-to-noise ratio as high as possible. Just enabling a massive bunch of tests/use-cases with millions of things to fix is totally useless. Not only is it depressing to see millions of failed tests, but you also can't spot regressions easily, and essentially people don't care anymore (it's just noise). You want the system to become a simple boolean (either everything passes, or something failed; and if it failed, it was because of the last commit(s)). In order to cope with that, you gradually activate/add items to do and check. The bare minimum was essentially testing whether all of GStreamer compiled on a standard Linux setup. That serves as a reference point. If someone breaks the build, the system becomes useful: you've spotted a regression, and you can fix it. As time goes by, you start adding other steps and builds (make check passes on gstreamer core, activate that; passes on gst-plugins-base, activate that; cerbero builds fully/cleanly on Debian, activate that; etc…).

The other important part is that you want to know as quickly as possible whether a regression was introduced. If you need to wait 3 hours for the CI system to report a regression… that person will have gone to sleep or been taken up by something else. If you know within 10-15 mins, then it's still fresh in their head, they are most likely still online, and you can correct the issue as quickly as possible.

Finally, what do we test? GStreamer has gotten huge. In that sentence, GStreamer is actually not just one module, but a whole collection (GStreamer core, gst-plugins*, but also ORC, gst-rtsp-server, gnonlin, gst-editing-services, …). Whatever we produce for every release… must be covered. So this now includes the binary releases (formerly from gstreamer.com, but handled by the GStreamer project itself since 1.x). So we also need to make sure nothing breaks on all the platforms we target (Linux, Android, OSX, iOS, Windows, …).

To summarize

  1. CI system must be set up progressively (to detect regressions)
  2. CI system must be fast (so the person who introduced a regression can fix it ASAP)
  3. CI system must cover everything we release (including cerbero binary builds)

The result is here (yes, I know, we’re working on fixing the certificates once it moves to the final namespace).

How this was implemented, and what challenges were encountered and handled, will be covered in the next post.

by Edward Hervey at August 28, 2014 09:34 AM

August 27, 2014

GStreamer: GStreamer Core, Plugins and RTSP server 1.4.1 stable release

(GStreamer)

The GStreamer team is pleased to announce a bugfix release of the stable 1.4 release series. The 1.4 release series adds new features on top of the 1.2 series and is part of the API- and ABI-stable 1.x release series of the GStreamer multimedia framework. The 1.4.x bugfix releases only contain important bugfixes compared to 1.4.0.

Binaries for Android, iOS, Mac OS X and Windows are provided by the GStreamer project for this release.

The 1.x series is a stable series targeted at end users. It is not API or ABI compatible with the 0.10.x series. It can, however, be installed in parallel with the 0.10.x series and will not affect an existing 0.10.x installation.

The stable 1.4.x release series is API and ABI compatible with 1.0.x and any other 1.x release series in the future. Compared to 1.0.x it contains some new features and more intrusive changes that were considered too risky for a bugfix release.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, or gst-libav, or gst-rtsp-server.

Check the release announcement mail for details and the release notes above for a list of changes.

Also available are binaries for Android, iOS, Mac OS X and Windows.

August 27, 2014 05:00 PM

Andy Wingo: a wingolog user's manual

(Andy Wingo)

Greetings, dear readers!

Welcome to my little corner of the internet. This is my place to share and write about things that are important to me. I'm delighted that you stopped by.

Unlike a number of other personal sites on the tubes, I have comments enabled on most of these blog posts. It's gratifying to me to hear when people enjoy an article. I also really appreciate it when people bring new information or links or things I hadn't thought of.

Of course, this isn't like some professional peer-reviewed journal; it's above all a place for me to write about my wanderings and explorations. Most of the things I find on my way have already been found by others, but they are no less new to me. As Goethe said, quoted in the introduction to The Joy of Cooking: "That which thy forbears have bequeathed to thee, earn it anew if thou wouldst possess it."

In that spirit I would enjoin my more knowledgeable correspondents to offer their insights with the joy of earning-anew, and particularly to recognize and banish the spectre of that moldy, soul-killing "well-actually" response that is present on so many other parts of the internet.

I've had a good experience with comments on this site, and I'm a bit lazy, so I take an optimistic approach to moderation. By default, comments are posted immediately. Every so often -- more often after a recent post, less often in between -- I unpublish comments that I don't feel contribute to the piece, or which I don't like for whatever reason. It's somewhat arbitrary, but hey, welcome to my corner of the internet.

This has the disadvantage that some unwanted comments end up published, then they go away. If you notice this happening to someone else's post, it's best to just ignore it, and in particular to not "go meta" and ask in the comments why a previous comment isn't there any more. If it happens to you, I'd ask you to re-read this post and refrain from unwelcome comments in the future. If you think I made an error -- it can happen -- let me know privately.

Finally, and it really shouldn't have to be said, but racism, sexism, homophobia, transphobia, and ableism are not welcome here. If you see such a comment that I should delete and have missed, let me know privately. However even among well-meaning people, and that includes me, there are ways of behaving that reinforce subtle bias. Please do point out such instances in articles or comments, either publicly or privately. Working on ableist language is a particular challenge of mine.

You can contact me via comments (anonymous or not), via email (wingo@pobox.com), twitter (@andywingo), or IRC (wingo on freenode). Thanks for reading, and happy hacking :)

by Andy Wingo at August 27, 2014 08:37 AM

Sebastian Dröge: Concatenate multiple streams gaplessly with GStreamer

(Sebastian Dröge)

Earlier this month I wrote a new GStreamer element that is now integrated into core and will be part of the 1.6 release. It solves yet another commonly asked question on the mailing lists and IRC: How to concatenate multiple streams without gaps between them as if they were a single stream. This is solved by the concat element now.

Here are some examples about how it can be used:

# 100 frames of the SMPTE test pattern, then 100 frames of the ball pattern
gst-launch-1.0 concat name=c ! videoconvert ! videoscale ! autovideosink  videotestsrc num-buffers=100 ! c.   videotestsrc num-buffers=100 pattern=ball ! c.

# Basically: $ cat file1 file2 > both
gst-launch-1.0 concat name=c ! filesink location=both   filesrc location=file1 ! c.   filesrc location=file2 ! c.

# Demuxing two MP4 files with h264 and passing them through the same decoder instance
# Note: this works better if both streams have the same h264 configuration
gst-launch-1.0 concat name=c ! queue ! avdec_h264 ! queue ! videoconvert ! videoscale ! autovideosink   filesrc location=1.mp4 ! qtdemux ! h264parse ! c.   filesrc location=2.mp4 ! qtdemux ! h264parse ! c.

If you run this in an application that also reports time and duration you will see that concat preserves the stream time, i.e. the position reporting goes back to 0 when switching to the next stream and the duration is always the one of the current stream. However the running time will be continuously increasing from stream to stream.

Also as you can notice, this only works for a single stream (i.e. one video stream or one audio stream, not a container stream with audio and video). To gaplessly concatenate multiple streams that contain multiple streams (e.g. one audio and one video track) one after another a more complex pipeline involving multiple concat elements and the streamsynchronizer element will be necessary to keep everything synchronized.

Details

The concat element has request sinkpads, and it concatenates streams in the order in which those sinkpads were requested. All streams except for the currently playing one are blocked until the currently playing one sends an EOS event, and then the next stream will be unblocked. You can request and release sinkpads at any time, and releasing the currently playing sinkpad will cause concat to switch to the next one immediately.

Currently concat only works with segments in GST_FORMAT_TIME and GST_FORMAT_BYTES format, and requires all streams to have the same segment format.

From an application side you could implement the same behaviour as concat implements by using pad probes (waiting for EOS) and using pad offsets (gst_pad_set_offset()) to adjust the running times. But by using the concat element this should be a lot easier to implement.

by slomo at August 27, 2014 08:14 AM

August 25, 2014

GStreamer: GStreamer Conference 2014 schedule of talks available now

(GStreamer)

The GStreamer team is pleased to announce that the schedule of talks for this year's GStreamer Conference is now available on the GStreamer Conference website.

The GStreamer Conference takes place on 16-17 October 2014 in Düsseldorf (Germany), alongside LinuxCon Europe, the Embedded Linux Conference Europe, and the Linux Plumbers Conference.

Topics covered include GStreamer on embedded systems, set-top boxes and mobile platforms; testing, tracing and debugging; HTML5, WebRTC and the Web; Wayland, 3D video and OpenGL; secure RTP and real-time streaming; adaptive streaming (HLS, DASH, MSS); latest codec developments; and multi-device synchronisation.

There will be two parallel tracks over two days, and a social event for attendees on the Thursday evening.

All talks will be recorded by Ubicast.

We hope to see you in Düsseldorf in October!

The GStreamer Conference 2014 is sponsored by Google.

August 25, 2014 03:00 PM