June 18, 2017

GStreamer: GStreamer 1.10.5 stable release (binaries)

(GStreamer)

Pre-built binary images of the 1.10.5 stable release of GStreamer are now available for Android, iOS, Mac OS X and Windows (32/64-bit).

The builds are available for download from: Android, iOS, Mac OS X and Windows.

June 18, 2017 10:00 PM

June 15, 2017

GStreamer: GStreamer 1.10.5 stable release

(GStreamer)

The GStreamer team is pleased to announce the fifth bugfix release in the stable 1.10 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.10.0. It is most likely the last release in the stable 1.10 release series.

See /releases/1.10/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

June 15, 2017 11:30 AM

Zeeshan Ali: Help me test gps-share

(Zeeshan Ali)

For gps-share to be useful to people, it needs to be tested against various GPS dongles. If you have a GPS dongle, I'd appreciate it if you could test gps-share. If you don't use the hardware, please consider donating it to me; that way I'll ensure that it keeps working with gps-share.

Thanks!

June 15, 2017 10:07 AM

June 12, 2017

Gustavo Orrillo (ac)

After many years of having the codeanticode blog as my main web presence, I was able to create an updated homepage at andrescolubri.net, which will also function as my new blog. All the posts here will remain online as an archive. Thanks for reading (and for your comments)!

by ac at June 12, 2017 03:22 PM

June 11, 2017

Jean-François Fortin Tam: Painting two old friends—Tintin vs Sephiroth

A little over a year ago, I wrote about my humble restoration of an old drawing of Cubitus and Sénéchal, which was a straightforward operation. Shortly after, I began working on another piece of artwork to decorate one of my walls, this time my own bespoke painting, made 100% with free software.

I did not want to buy some generic framed/canvas picture in a retail store, nor did I want to simply showcase some other artist's illustration work (even though there is no shortage of jaw-dropping illustrators on DeviantArt): that would have been too easy; where's the satisfaction in that? Having far too few occasions to do illustration work in my life, I wanted to challenge myself to create something meaningful (the depicted scene needed to tell a story) that I could be artistically proud of.

I also wanted something quite big, so I planned to work in “Arch D” format (24×36 inches) and to have it on canvas material. I wanted to push myself to a level of detail and perfectionism way beyond my previous illustration works; no shortcuts and no time limit! Beyond the fact that 24×36 allows cramming a lot of detail, I thought, “If I am to hang it on a wall and look at it more than once, it needs to be perfect”.

The concept

As a fan of Final Fantasy VII since its inception around 1997, I found the Advent Children movie fairly amusing. The ridiculously over-the-top, physics-defying action scenes, paired with inside jokes and Japanese-style screenwriting, certainly make this movie palatable only to those who played the game and are used to the quirks of Japanese animation & movie subculture (for non-fans, I can imagine it being a big, messy, incomprehensible embarrassment).

In particular, there is the climactic 10-minute-long fight between Cloud and the most infamous videogame supervillain of all time, which contains a short-lived action shot around the 4 minutes 30 seconds mark where Cloud and Sephiroth land on a gigantic tiled floor laid atop massive concrete, a platform that then suddenly fractures and folds its halves to a perpendicular angle, depicting the two combatants in a pretty epic visual composition.

Another reason why I remember this scene in particular: it is the only moment in the entire movie where you can see Sephiroth almost losing his footing, which adds to the unusual nature of this short segment. Here’s a frozen frame (not to be confused with the frozen flame, different RPG universe):

There is so much chaos (motion blur, dust, debris falling) that it's really hard to see what is going on unless you watched it in motion. Here we can see Sephiroth looking into the dust cloud emanating from the fracture, and we can barely make out the shine from Cloud's two swords behind the dust. So I decided to reproduce this frame, albeit with more clarity—after all, who would care about some "dust Cloud", besides the Fremen of Arrakis?

For my illustration, however, I decided that Sephiroth would not be facing Cloud this time, but another blond guy with rebel hair: Tintin. Why? Because Tintin is a badass, and Cloud is just "a puppet" who let the bad guy kill my waifu back in 1997, and all I got afterwards was this white materia (ˇò_ó)

I found the White Materia in a parking lot one day in 2014. Not kidding.

Moodboard & sketching

As part of my research and planning, I assembled a moodboard: some character design reference art, some combat stances, object reference designs, and even some poses I shot with a camera to simulate what I wanted and compare with real-world anatomy:

After all, the most time-consuming part would be to make a sketch (and lineart) that would be "flawless"—I wouldn't want to look back at the finished artwork and find flaws that would make it "un-watchable" for me.

At first, I thought I would simply give Cloud’s “buster sword” to Tintin, but a quick “proof of concept” sketch revealed that idea to be suboptimal. The two swords would seem to be parallel, depth and scale would be tricky, and Tintin would be too much on the defensive. And it just wouldn’t be classy enough:

One of the initial sketches I did in GIMP

And then it struck me: I would instead expose Tintin’s badassity by arming him with nothing more than Ottokar’s sceptre to fend off Sephiroth’s infamous eight-feet-long Masamune.

I started sketching near the end of November 2015, focusing on getting the poses and anatomy “right”, finishing around mid-January 2016. The process involved many revisions (including some drastic changes in poses) before I felt confident that the whole thing was “correct”. The original pose was this one, which seemed strange to everybody I asked:

Thus many other variants were attempted:

Many thanks to those who provided constructive feedback and forced me to correct anatomic mistakes or confusing aspects of the lineart. I must have been a real pest for the following people in particular: Mélodie Gauthier, Aude Motillon, Élaine Cloutier, Andreas Nilsson, Jens Reuterberg, Sedeto.

Colouring, shading, and insane hardware requirements

I was originally hoping to do something in the style of Alfons Mucha (whose "decorative art" works impressed me when I visited Prague in 2013), with strong outlines combined with subdued colours and semi-flat shading. However, a pure Mucha style would have been problematic in this particular situation (lots of dust needing to be overlaid on top of characters without obscuring them, grain and imperfections required to give life to the very industrial landscape, and shadows required to provide depth and perspective). I had to take a hybrid approach, trying to blend the styles of Mucha, Hergé and Nekohayo.

I’m not as masterful as famous painters of centuries past, and my spare time is very limited, so I need a computer with productivity-enhancing software to do something good in a reasonable amount of time (with the ability to undo/redo, erase anything cleanly, and the ability to have dozens of layers to experiment and composite things). We live in an unprecedented era in human history when it comes to access to powerful creativity tools—software freely available today makes it feels so damned easy.

I have, by the way, been humbled seeing the amazing work Mark Ferrari does with 8-bit colour palettes and fake animation through palette colour cycling—a true master of light and colour theory, whose existence was revealed to me by Jakub’s post—but I’m definitely not patient enough to be doing pixel art on such a big scale.

So, for this occasion, I used:

  • The development version of MyPaint, self-compiled from Git (because version 1.3 was not yet released back then, and, well, #YOLO) for the whole painting. GIMP was briefly used for the concept research sketching phase, and Scribus for the final print layout/titling. My operating system is Fedora. So, for those who thought Photoshop + Windows/Mac OS is the only game in town, let this be yet another testimonial to the quality of libre graphics software we have at our disposal, as this was made exclusively with the Free/Libre and Open-Source software listed above.
  • A 24″ Wacom Cintiq borrowed from the office.
  • A workstation-class machine with a Xeon W3520 and 24 GB of RAM, that I purchased and repurposed specifically for this project.

“Wait, 24 GB of RAM and a Xeon, just for painting?” Yup. 24×36 inches at over 300 DPI (“Because! What if I want it bigger?”) meant my image was 113 megapixels (9000×12600) multiplied by about 50 layers, which means humongous memory and CPU requirements.
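For a rough sense of scale (my own back-of-the-envelope arithmetic, assuming 4 bytes per pixel, uncompressed): 9000 × 12600 pixels × 4 bytes is about 450 MB for a single layer, so even with only a fraction of those ~50 layers holding actual paint at any given time, memory usage of 10 GB or more is exactly what you would expect.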

Sure, if you look at the painting’s filesize you may think, “a 350 MB file? Doesn’t look so big to me”…

…but if you consider RAM usage, we’re playing a different game:

My painting required around 10 gigabytes of memory on average (MyPaint is the “python2.7” process at the top of the list). Twitter is extra.

Even saving and loading the file was a bit painful (I reported bugs regarding MyPaint lacking a progress bar when loading, and under-utilizing the CPU and doing extraneous work when saving project files).

In any case, I also wanted that overpowered workstation to enjoy my daily research work “without having to worry about RAM”. I’ll soon blog about my long quest to stabilize it for general use, other than just “painting”. That’s a whole other topic.

I did the colouring, shading/lighting and texturing over the course of a week or two in January 2016. The only tricky part was nailing Tintin’s ground and leg shadows, to provide the correct impression of depth and perspective while keeping the “light source” consistent with the rest of the scene’s shading. For example, this is what I initially had:

But it had a number of issues, one of which was perceived incorrect anatomy. So here’s what I did, to give the proper perception of foreleg angles and depth with the light coming from the top (the sky) instead of as a spot-light emanating from Sephiroth’s fiery gaze:

I then worked on the titling and final print adjustments during the spring (I had to extend the painting some more, as I realized I had miscalculated my canvas size and aspect ratio), until I came to a final version in May that year.

I had, so far, spent a total of roughly 47 hours working on this painting (21.5 hours sketching and 25 hours colouring/shading/compositing), and I was happy with the result:

“Tintin et le Projet Jenova”, J.-F. Fortin Tam, jet d’encre sur toile (61×91 cm), 2016. Collection privée.

I was now ready to bring it from digital to print.

Canvas transfer

My goal was to have the artwork on canvas material to get the look & feel of a "traditional painting". However, I considered that typical canvas printing services at $200 were "too expensive" and that "I could do that myself for $50 by transferring a printed poster onto canvas with Mod Podge."

Big, big mistake.

When I found the time to attempt this, in June-July 2016, I bought $10-15 worth of Mod Podge, a $13 roller, a $23 Arch D canvas, and had the artwork printed as a regular poster for $12. With all the equipment on hand, I thought, "Now I just need to do the transfer myself. This will be a cinch!"

I tested mod-podging a sheet of paper on a small surface:

No problem! As seen on TV YouTube! So I went to do the same with the full-size poster and, well, this is what happened:

Even though I had applied the bare minimum of Mod Podge to the canvas surface, the print would get creased after the fact, forming big air pockets as the material reacted to the Mod Podge's humidity, making it impossible to have a straight surface. I even devised a special "acupuncture" technique to try to get rid of the pockets as they formed; no luck. I read every guide and watched every video under the sun, and couldn't figure out how to solve this; there was no way to keep the air pockets from forming after the initial press-rolling, not with a poster of such dimensions. I guess my case was a special case. Either that, or all those sons of bitches on the Internet lied.

Luckily, I was able to at least salvage the canvases every time by scraping off the poster and Mod Podge in extremis and washing the canvases with a garden hose. The posters themselves were, of course, trashed:

In the process of learning this lesson, I wasted three printed posters and purchased a bunch of equipment (even a wood “canvas” as an alternative to the traditional canvas) for nothing.

In despair, I shelved the project for some months… and lo and behold, in November 2016, Staples lowered their 24×36 canvas printing prices to $100. What's more, there was an additional 20% rebate I could use, bringing the price down to a laughable $80. Well, damn. I just ordered the canvas print and was done with it, exactly one year later.

As the old saying goes, there are cases where trying to save money will cost you more. Or, in this rather particular turn of events (where prices plummeted after a year of messing around), exactly the same.

Epilogue

I now have my canvas print on display, and a bunch of blank canvases (and Mod Podge®!) for which I have no use (because I don't have any more wall space to hang artwork, even if I wanted to do traditional painting!)

At least I have a story to tell about this painting.

by Jeff at June 11, 2017 04:01 PM

May 30, 2017

Víctor Jáquez: GstSpringHackfest2017: a quick report

Two weeks ago was the GStreamer Spring Hackfest 2017 and I am very happy about how it went. I have the feeling that most of the attendees had a good time, and made some progress in their projects. I want to thank all the people that participated, in some way or another.

Over the hackfest weekend, besides my duties as organizer (with a lot of help from my colleagues at Igalia), I managed to hack a bit on GstPlayer, proposing the missing API for setting the subtitle font description (782858). I also helped Nicolas a bit with the upstreaming of the v4l2 video encoder (728438). Julien Isorce and I talked about the missing parts of DMABuf support in gstreamer-vaapi, in particular the plan of action for when the new libva API for importing and exporting DMABufs gets merged (779146). With Thibault we played with the idea of a Jenkins server doing CI for gstreamer-vaapi. I also did some kernel debugging and found out why kmssink failed on the db410c when the caps changed from RGB to YUV, so Rob Clark cooked up a patch.

Finally, I worked on a time-lapse video of the hackfest's main room, using only GStreamer with gstreamer-vaapi on an Atom-based NUC. You can glance at the code of the video grabber. Thanks to Luis de Bethencourt for the original idea and code.

by vjaquez at May 30, 2017 03:54 PM

Zeeshan Ali: Introducing gps-share

(Zeeshan Ali)

So yesterday, I rolled out the first release of gps-share.

gps-share is a utility to share your GPS device on the local network. It has two goals:

  • Share your GPS device on the local network so that all machines in your home or office can make use of it.
  • Enable support for standalone (i.e. not part of a cellular modem) GPS devices in Geoclue. Geoclue has been able to make use of network NMEA sources since 2015, so gps-share works out of the box with it.

The latter means that it is a replacement for GPSD and Gypsy. While "why not GPSD?" has already been documented, Gypsy has been unmaintained for many years now. I did not feel like reviving a dead project, and I really wanted to code in the Rust language, so I decided to create gps-share.

Dependencies


While cargo manages the Rust crates gps-share depends on, you'll also need the following on your host:

  • libdbus
  • libudev
  • libcap
  • xz-libs

Supported devices


gps-share currently only supports GPS devices that present themselves as a serial port (RS232). Many USB devices are expected to work out of the box, but Bluetooth devices need manual intervention to be mounted as serial port devices through the rfcomm command. The following command worked on my Fedora 25 machine for a TomTom Wireless GPS MkII:


sudo rfcomm connect 0 00:0D:B5:70:54:75

gps-share can autodetect the device to use if it's already mounted as a serial port, but it assumes a baud rate of 38400. You can manually set the device node to use by passing its path as an argument, and set the baud rate using the '-b' command-line option. Pass '--help' for a full list of supported options.
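For illustration, a hypothetical invocation (the device node path and baud rate here are made up; yours will differ) could look like this:

gps-share /dev/ttyUSB0 -b 9600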

May 30, 2017 11:06 AM

May 09, 2017

Víctor Jáquez: GStreamer Spring Hackfest 2017 & GStreamer 1.12

Greetings earthlings!

Two things:

One

GStreamer 1.12 is out! And with it, gstreamer-vaapi. Among other new features and improvements we have:

  • GstVaapiDisplay now inherits from GstObject; thus the VA display logging messages are better, and tracing the context sharing is more readable.
  • When uploading raw images into VA surfaces, VADeriveImages are now tried first, improving upload performance when possible.
  • The decoders and the post-processor can now push dmabuf-based buffers downstream under certain conditions. For example:
    GST_GL_PLATFORM=egl gst-play-1.0 video-sample.mkv --videosink=glimagesink
  • Refactored the wrapping of VA surfaces into GStreamer memory, adding locking when mapping and unmapping, plus many other fixes.
  • vaapidecodebin now loads vaapipostproc dynamically. It is possible to avoid its usage with the environment variable GST_VAAPI_DISABLE_VPP=1.
  • Regarding encoders: they have primary rank again, since they can discover, at run-time, the color formats they can use for upstream raw buffers, and caps renegotiation is now possible. The encoders also push encoding info downstream via tags.
  • About specific encoders: a constant bit-rate encoding mode was added for VP8, and the H265 encoder now handles the P010_10LE color format.
  • Regarding decoders, the flush operation has been improved; the internal VA decoder is no longer recreated at each flush. There are also several improvements in the handling of H264 and H265 streams.
  • VAAPI plugins try to create their own GstGL context (when available) if they cannot find one in the pipeline, to figure out what type of VA Display they should create.
  • Regarding vaapisink for X11: if the backend reports that it is unable to render the current color format correctly, an internal VA post-processor is instantiated (if available) to convert the color format.

And

Two

GStreamer Spring Hackfest 2017 is in less than two weeks!

It is going to be held at the Igalia premises in A Coruña. Keep an eye on it 😉

by vjaquez at May 09, 2017 11:14 AM

May 08, 2017

Zeeshan Ali: Rust Memory Management

(Zeeshan Ali)

In light of my latest fascination with the Rust programming language, I've started giving small presentations about Rust at my office, since I'm not the only one at our company who is interested in it. My first presentation, in February, was a very general introduction to the language, but at that time I had not yet really used the language for anything real myself, so I was a complete novice and didn't have a very good idea of how memory management really works. While working on my gps-share project in my limited spare time, I came across quite a few issues related to memory management, but I overcame all of them with help from the kind folks on the #rust-beginners IRC channel and the small but awesome Rust-GNOME community.

Having learnt some essentials of memory management, I thought I'd share my knowledge/experience with folks at the office. The talk was not well attended due to conflicts with other meetings at the office, but the few folks who did attend were very interested and asked some interesting and difficult questions (i.e. the perfect audience). One of the questions was whether I could put this up as a blog post, so here I am. :)

Basics


Let's start with some basics: In Rust,

  1. Stack allocation is preferred over heap allocation, and that's where everything is allocated by default.
  2. There are strict ownership semantics involved, so each value can have one and only one owner at any particular time.
  3. When you pass a value to a function, you move the ownership of that value to the function argument; similarly, when you return a value from a function, you pass the ownership of the return value to the caller.

Now these rules make Rust very secure, but at the same time, if you had no way to allocate on the heap or to share data between different parts of your code and/or threads, you couldn't get very far with Rust. So we're provided with mechanisms to (kinda) work around these very strict rules without compromising the safety they provide. Let's start with some simple code that would work fine in many other languages:

fn add_first_element(v1: Vec<i32>, v2: Vec<i32>) -> i32 {
    return v1[0] + v2[0];
}

fn main() {
    let v1 = vec![1, 2, 3];
    let v2 = vec![1, 2, 3];

    let answer = add_first_element(v1, v2);

    // We can use `v1` and `v2` here!
    println!("{} + {} = {}", v1[0], v2[0], answer);
}

This gives us an error from rustc:

error[E0382]: use of moved value: `v1`
--> sample1.rs:13:30
|
10 | let answer = add_first_element(v1, v2);
| -- value moved here
...
13 | println!("{} + {} = {}", v1[0], v2[0], answer);
| ^^ value used here after move
|
= note: move occurs because `v1` has type `std::vec::Vec<i32>`, which does not implement the `Copy` trait

error[E0382]: use of moved value: `v2`
--> sample1.rs:13:37
|
10 | let answer = add_first_element(v1, v2);
| -- value moved here
...
13 | println!("{} + {} = {}", v1[0], v2[0], answer);
| ^^ value used here after move
|
= note: move occurs because `v2` has type `std::vec::Vec<i32>`, which does not implement the `Copy` trait

What's happening is that we passed 'v1' and 'v2' to add_first_element(), and in doing so we passed their ownership to it as well, which is why we can't use them afterwards. If Vec were a Copy type (like all primitive types), we wouldn't get this error, because Rust would copy the values and pass those copies to the function.
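Here is a minimal sketch (my addition, not part of the original post) showing the contrast with a primitive type; i32 implements Copy, so the function receives copies instead of taking ownership:

fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    let x = 1;
    let y = 2;
    let answer = add(x, y);

    // `x` and `y` are still usable here: i32 implements Copy, so
    // add() received copies of the values, not their ownership.
    println!("{} + {} = {}", x, y, answer);
}

In this particular case, though, the solution is easy: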

Borrowing


fn add_first_element(v1: &Vec<i32>, v2: &Vec<i32>) -> i32 {
    return v1[0] + v2[0];
}

fn main() {
    let v1 = vec![1, 2, 3];
    let v2 = vec![1, 2, 3];

    let answer = add_first_element(&v1, &v2);

    // We can use `v1` and `v2` here!
    println!("{} + {} = {}", v1[0], v2[0], answer);
}

This one compiles and runs as expected. What we did was convert the arguments into reference types. References are Rust's way of borrowing ownership. So while add_first_element() is running, it owns 'v1' and 'v2', but not after it returns. Hence this code works.

While borrowing is very nice and very helpful, in the end it's temporary. The following code won't build:

struct Heli {
    reg: String
}

impl Heli {
    fn new(reg: String) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = "G-HONI".to_string();
    let heli = Heli::new(reg);

    println!("Registration {}", reg);
    heli.hover();
}

rustc says:

error[E0382]: use of moved value: `reg`
--> sample3.rs:20:33
|
18 | let heli = Heli::new(reg);
| --- value moved here
19 |
20 | println!("Registration {}", reg);
| ^^^ value used here after move
|
= note: move occurs because `reg` has type `std::string::String`, which does not implement the `Copy` trait

If String had the Copy trait implemented for it, this code would have compiled. But if efficiency is a concern at all for you (it is for Rust), you wouldn't want most values to be copied around all the time. We can't use a reference here, as Heli::new() above needs to keep the passed 'reg'. Also note that the issue here is not that 'reg' was passed to Heli::new() and used afterwards by Heli::hover(), but the fact that we tried to use 'reg' after we had given its ownership to the Heli instance through Heli::new().

I realize that the above code doesn't make use of borrowing, but if we were to make use of it, we'd have to declare a lifetime for the 'reg' field, and it still wouldn't give us what we want, because we want to keep the 'reg' in our Heli struct. For illustration, here is a rough sketch of mine (not from the original post) of what the borrowed version would look like:
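struct Heli<'a> {
    reg: &'a str
}

impl<'a> Heli<'a> {
    fn new(reg: &'a str) -> Heli<'a> {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = "G-HONI".to_string();
    let heli = Heli::new(&reg);

    println!("Registration {}", reg);
    heli.hover();
}

This compiles only because 'heli' does not outlive 'reg'; every user of Heli now has to care about that lifetime, and Heli can never own its registration. There is a better solution here: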

Rc


use std::rc::Rc;

struct Heli {
    reg: Rc<String>
}

impl Heli {
    fn new(reg: Rc<String>) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = Rc::new("G-HONI".to_string());
    let heli = Heli::new(reg.clone());

    println!("Registration {}", reg);
    heli.hover();
}

This code builds and runs successfully. Rc stands for "Reference Counted", so putting data into this generic container adds reference counting to the data in question. Note that while you have to explicitly call the clone() method of Rc to increment its refcount, you don't need to do anything to decrement it. Each time an Rc reference goes out of scope, the refcount is decremented automatically, and when it reaches 0, the container Rc and its contained data are freed.
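To make the counting visible, here is a small sketch of my own (not from the original post) using Rc::strong_count:

use std::rc::Rc;

fn main() {
    let reg = Rc::new("G-HONI".to_string());
    assert_eq!(Rc::strong_count(&reg), 1);

    {
        let heli_reg = reg.clone(); // refcount incremented to 2
        assert_eq!(Rc::strong_count(&heli_reg), 2);
    } // `heli_reg` goes out of scope here: refcount drops back to 1

    assert_eq!(Rc::strong_count(&reg), 1);
}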

Cool, Rc is super easy to use, so can we just use it in all situations where we need shared ownership? Not quite! You can't use Rc to share data between threads. So this code won't compile:

use std::rc::Rc;
use std::thread;

struct Heli {
    reg: Rc<String>
}

impl Heli {
    fn new(reg: Rc<String>) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = Rc::new("G-HONI".to_string());
    let heli = Heli::new(reg.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });
    println!("Registration {}", reg);

    t.join().unwrap();
}

It results in:

error[E0277]: the trait bound `std::rc::Rc<std::string::String>: std::marker::Send` is not satisfied in `[closure@sample5.rs:22:27: 24:6 heli:Heli]`
--> sample5.rs:22:13
|
22 | let t = thread::spawn(move || {
| ^^^^^^^^^^^^^ within `[closure@sample5.rs:22:27: 24:6 heli:Heli]`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<std::string::String>`
|
= note: `std::rc::Rc<std::string::String>` cannot be sent between threads safely
= note: required because it appears within the type `Heli`
= note: required because it appears within the type `[closure@sample5.rs:22:27: 24:6 heli:Heli]`
= note: required by `std::thread::spawn`

The issue here is that to be able to share data between more than one thread, the data must be of a type that implements the Send trait. However, not only would implementing Send for all types be very impractical, there are also performance penalties associated with implementing Send (which is why Rc doesn't implement it).

Introducing Arc


Arc stands for Atomic Reference Counting and it's the thread-safe sibling of Rc.

use std::sync::Arc;
use std::thread;

struct Heli {
    reg: Arc<String>
}

impl Heli {
    fn new(reg: Arc<String>) -> Heli {
        Heli { reg: reg }
    }

    fn hover(&self) {
        println!("{} is hovering", self.reg);
    }
}

fn main() {
    let reg = Arc::new("G-HONI".to_string());
    let heli = Heli::new(reg.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });
    println!("Registration {}", reg);

    t.join().unwrap();
}

This one works, and the only difference is that we used Arc instead of Rc. Cool, so now we have a very efficient but thread-unsafe way to share data between different parts of the code, and a thread-safe mechanism as well. We're done then? Not quite! This code won't work:

use std::sync::Arc;
use std::thread;

struct Heli {
    reg: Arc<String>,
    status: Arc<String>
}

impl Heli {
    fn new(reg: Arc<String>, status: Arc<String>) -> Heli {
        Heli { reg: reg,
               status: status }
    }

    fn hover(&self) {
        self.status.clear();
        self.status.push_str("hovering");
        println!("{} is {}", self.reg, self.status);
    }
}

fn main() {
    let reg = Arc::new("G-HONI".to_string());
    let status = Arc::new("".to_string());
    let mut heli = Heli::new(reg.clone(), status.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });
    println!("main: {} is {}", reg, status);

    t.join().unwrap();
}

This gives us two errors:

error: cannot borrow immutable borrowed content as mutable
--> sample7.rs:16:9
|
16 | self.status.clear();
| ^^^^^^^^^^^ cannot borrow as mutable

error: cannot borrow immutable borrowed content as mutable
--> sample7.rs:17:9
|
17 | self.status.push_str("hovering");
| ^^^^^^^^^^^ cannot borrow as mutable

The issue is that Arc is unable to handle mutation of data from different threads, and hence doesn't give you a mutable reference to the contained data.

Mutex


For sharing mutable data between threads, you need another type in combination with Arc: Mutex. Let's make the above code work:

use std::sync::Arc;
use std::sync::Mutex;
use std::thread;

struct Heli {
    reg: Arc<String>,
    status: Arc<Mutex<String>>
}

impl Heli {
    fn new(reg: Arc<String>, status: Arc<Mutex<String>>) -> Heli {
        Heli { reg: reg,
               status: status }
    }

    fn hover(&self) {
        let mut status = self.status.lock().unwrap();
        status.clear();
        status.push_str("hovering");
        println!("thread: {} is {}", self.reg, status.as_str());
    }
}

fn main() {
    let reg = Arc::new("G-HONI".to_string());
    let status = Arc::new(Mutex::new("".to_string()));
    let heli = Heli::new(reg.clone(), status.clone());

    let t = thread::spawn(move || {
        heli.hover();
    });

    println!("main: {} is {}", reg, status.lock().unwrap().as_str());

    t.join().unwrap();
}

This code will work. Notice how you don't have to explicitly unlock the mutex after using it. Rust is all about scopes. When the locked value (the guard returned by lock()) goes out of scope, the mutex is automatically unlocked.
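Here is a minimal sketch of my own (not from the original post) that makes the scoping explicit:

use std::sync::Mutex;

fn main() {
    let status = Mutex::new(String::new());

    {
        let mut guard = status.lock().unwrap();
        guard.push_str("hovering");
    } // `guard` is dropped here and the mutex is unlocked automatically

    // If `guard` were still alive, this second lock() in the same
    // thread would deadlock (or panic); since it was dropped, this works:
    println!("{}", status.lock().unwrap().as_str());
}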

Other container types


Mutexes are rather expensive, and sometimes you have shared data between threads but not all threads are mutating it (all the time); that's where RwLock becomes useful. I won't go into details here, but it's almost identical to Mutex, except that threads can take read-only locks, and since it's possible to safely share non-mutable state between threads, it's a lot more efficient than threads locking each other out every time they access the data.
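A small sketch of my own (not from the original post) of how RwLock is used:

use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    let status = Arc::new(RwLock::new("hovering".to_string()));

    let reader = {
        let status = status.clone();
        thread::spawn(move || {
            // Any number of threads may hold read locks at the same time.
            let s = status.read().unwrap();
            println!("reader sees: {}", *s);
        })
    };

    {
        // A write lock is exclusive, just like locking a Mutex.
        let mut s = status.write().unwrap();
        s.push_str(" steadily");
    }

    reader.join().unwrap();
}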

Another container type I didn't mention above is Box. The basic use of Box is as a very generic and simple way of allocating data on the heap. It's typically used to turn an unsized type into a sized type. The module documentation has a simple example of that.
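For completeness, here's a tiny sketch of my own showing Box turning an unsized type into a sized one:

use std::fmt::Display;

// `dyn Display` is an unsized trait object; boxing it gives us a sized
// value that we can return from a function and store.
fn boxed_label(n: i32) -> Box<dyn Display> {
    Box::new(n) // the value is moved to the heap
}

fn main() {
    let label = boxed_label(7);
    println!("{}", label);
}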

What about lifetimes


One of my colleagues who had had some experience with Rust was surprised that I didn't cover lifetimes in my talk. Firstly, I think it deserves a separate talk of its own. Secondly, if you make clever use of the container types available to you and described above, most often you don't have to deal with lifetimes. Thirdly, lifetimes in Rust are something that I still struggle with each time I have to deal with them, so I feel a bit unqualified to teach others about how they work.

The end


I hope you find some of the information above useful. If you are looking for other resources on learning Rust, the Rust book is currently your best bet. I am still a newbie at Rust, so if you see some mistakes in this post, please do let me know in the comments section.

Happy safe hacking!

May 08, 2017 07:04 AM

May 06, 2017

GStreamer: GStreamer 1.12.0 stable release (binaries)

(GStreamer)

Pre-built binary images of the 1.12.0 stable release of GStreamer are now available for Android, iOS, Mac OS X and Windows (32/64-bit).

The builds are available for download from: Android, iOS, Mac OS X and Windows.

May 06, 2017 12:30 PM

May 04, 2017

GStreamer: GStreamer 1.12.0 stable release

(GStreamer)

The GStreamer team is pleased to announce the first release in the stable 1.12 release series. The 1.12 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Full release notes can be found here.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next few days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

May 04, 2017 04:00 PM

May 01, 2017

GStreamer: GStreamer 1.12.0 release candidate 2 (1.11.91, binaries)

(GStreamer)

Pre-built binary images of the 1.12.0 release candidate 2 of GStreamer are now available for Android, iOS, Mac OS X and Windows (32/64-bit).

The builds are available for download from: Android, iOS, Mac OS X and Windows.

May 01, 2017 04:00 PM

April 27, 2017

GStreamer: GStreamer 1.12.0 release candidate 2 (1.11.91)

(GStreamer)

The GStreamer team is pleased to announce the second release candidate of the stable 1.12 release series. The 1.12 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Full release notes will be provided with the 1.12.0 release, highlighting all the new features, bugfixes, performance optimizations and other important changes. An initial, unfinished version of the release notes can already be found here.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next few days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

April 27, 2017 03:00 PM

April 26, 2017

Sebastian Dröge: RTP for broadcasting-over-IP use-cases in GStreamer: PTP, RFC7273 for Ravenna, AES67, SMPTE 2022 & SMPTE 2110

(Sebastian Dröge)

It's that time of year again when the broadcast industry gathers at the NAB show, which seems like a good time to me to revisit some frequently asked questions about GStreamer's support for various broadcasting-related standards.

Even more so as at this year’s NAB there seems to be a lot of hype about the new SMPTE 2110 standard, which defines how to transport and synchronize live media streams over IP networks, and which fortunately (unlike many other attempts) is based on previously existing open standards like RTP.

While SMPTE 2110 is the new kid on the block here, there are various other standards based on similar technologies. There's, for example, AES67 by the Audio Engineering Society for audio only; Ravenna, which is very similar; the slightly older SMPTE 2022; and VSF TR3/4 by the Video Services Forum.

Other standards, like MXF for storage of media (which GStreamer has supported for years), are also important in the broadcasting world, but let's ignore those other use-cases for now and focus on streaming live media.

Media Transport

All of these standards depend on RTP in one way or another, use PTP or similar services for synchronization, and are either already fully supported by GStreamer (as in the case of AES67/Ravenna) or at least to a large part, with the missing parts being just some extensions to existing code.

There's not really much to say here about the actual media transport, as GStreamer has had solid support for RTP for a very long time. It has a very flexible and feature-rich RTP stack that includes support for many optional extensions to RTP, and it is successfully used in broadcasting scenarios, real-time communication (e.g. WebRTC and SIP) and live video streaming, as required by security cameras for example.

Over the last months and years, many new features have been added to GStreamer's RTP stack for various use-cases and the code was further optimized, and thanks to all that, the amount of work needed for new standards based on RTP, like the aforementioned ones, is rather limited. For AES67, for example, no additional work was needed to support it.

The biggest open issue for the broadcasting-related standards currently is the need for further optimizations for high-resolution, high-framerate streaming of video. In these cases we currently run into performance problems due to the high number of packets per second, and some serious optimizations would be needed. However, there are already various ideas for how to improve this situation that are just waiting to be implemented.

Synchronization

I already wrote about PTP in GStreamer previously; that support for synchronizing media is merged and has been included since the 1.8 release. In addition, NTP is also supported since 1.8.

In theory other clocks could also be used in some scenarios, like clocks based on a GPS receiver, but that’s less common and not yet supported by GStreamer. Nonetheless all the infrastructure for supporting arbitrary clocks exists, so adding these when needed is not going to be much work.

Clock Signalling

One major new feature that was added in the last year, for the 1.10 release of GStreamer, was support for RFC7273. While support for PTP in theory allows you to synchronize media properly if both sender and receiver are using the same clock, what was missing before was a way to signal what this specific clock exactly is and what offsets have to be applied. This is where RFC7273 becomes useful, and why it is used as part of many of the standards mentioned before. It defines a common interface for specifying this information in the SDP, which is commonly used to describe how to set up an RTP session.

The support for that feature was merged for the 1.10 release and is readily available.

Help needed? Found bugs or missing features?

While the basics are all implemented in GStreamer, there are still various missing features for optional extensions of the aforementioned standards or even, in some cases, required parts of a standard. In addition to that, some optimizations may still be required, depending on your use-case.

If you run into any problems with the existing code, or need further features for the various standards implemented, just drop me a mail.

GStreamer is already used in the broadcasting world in various areas, but let’s together make sure that GStreamer can easily be used as a batteries-included solution for broadcasting use-cases too.

by slomo at April 26, 2017 04:01 PM

April 25, 2017

Christian Schaller: Red Hat job opening for Linux Graphics stack developer

(Christian Schaller)

So we have a new job available for someone interested in joining our team and working on improving the Linux graphics stack. The focus of this job will be on GPU compute related work, but you should also expect to spend time on improving the graphics driver stack in general. We are looking for someone at the Principal Engineer level, but I do recommend that even if you don't feel you are quite at that level yet, you should apply, because to be fair, people with the kind of experience we are looking for are few and far between, so in the end there is a chance we will hire two more junior developers instead if we have candidates with the right profile.

We are quite flexible on working location for this job, so for the right candidate working remotely is definitely a possibility. And of course if you are interested in joining us at one of our offices that is an option too, for instance we have existing team members working out of our Boston (USA), Brno(Czech Republic), Brisbane (Australia) and Munich (Germany) offices.

GPU Compute is rapidly growing in importance and use so this is your chance to be in the middle of it and work for what I personally think is one of the best companies in the world to work for.

So be sure to submit an application through the Red Hat hiring portal.

by uraeus at April 25, 2017 05:53 PM

April 10, 2017

Zeeshan Ali: GNOME ❤ Rust Hackfest in Mexico

(Zeeshan Ali)
While I'm known as a Vala fanboy in GNOME, I've tried to stress time and again that I see Vala as more a practical solution than an ideal one. "Safe programming" has always been something that intrigued me, having dealt with numerous crashes and other hard-to-debug runtime issues in the past. So when I first heard of Rust some years back, it got me super excited, but it was not exactly stable and there was no integration with GNOME libraries or D-Bus, and hence it was not at all a viable option for developing desktop code. Lately (in the past 2 years) things have significantly changed. Not only do we have Rust 1.0, but we also have crates that provide integration with GNOME libraries and D-Bus. On top of that, some of us took steps to start converting some C code into Rust, and many of us started seriously talking with Rust hackers to make Rust a first-class programming language for GNOME.

To make things really go forward, we decided to arrange a hackfest, which took place last week at the Red Hat offices in Mexico City. The event was a big success in my opinion. The actual work done and started during the hackfest aside, it brought two communities much closer together, and we learnt quite a lot from each other in a very short amount of time. The main topics at the hackfest were:
  • GObject-introspection consumption by Rust.
  • GObject creation from Rust.
  • Better out of the box Rust support in GNOME Builder
  • GMainLoop and Tokio integration
  • D-Bus bindings
While most folks were focused on the first three, and I did participate in discussions on all these topics (except for Builder, of which I don't know anything), I spent most of my time looking into the last one. D-Bus is widely used in the automotive industry these days, and I serve that industry, so it made sense, aside from my personal interest in D-Bus. We established (some of it before the hackfest) that to make Rust attractive to C and Vala developers, we need to provide:
  1. Syntactic sugar for making D-Bus coding simple

    Very similar to what Vala offers. Antoni already had a project, dbus-macros, that targets this goal through the use of Rust's (powerful) macro system. So I spent a lot of time fixing and improving the dbus-macros crate. Having Antoni and other Rust experts in the same room greatly helped me get around some very hard-to-decipher compiler issues. I found out (the hard way) that while rustc is very good at spotting errors, it fails miserably at giving you the proper context when it comes to macros. I complained enough about this to the Mozilla folks that I'm sure they'll be looking into fixing that experience at some point in the near future. :)

    We also contacted the author of the dbus crate, David Henningsson, over e-mail about a few D-Bus related subjects (more below), including this one. (I was surprised to find out that he also lives in Sweden.) His opinion was that we should probably be using procedural macros for this. I agree with him, except that procedural macros are not yet in stable Rust. So for now, I decided to continue with the current approach of the project.

    During the hackfest, I became the maintainer of the dbus-macros crate since the first thing I did was to reduce the very small amount of code by 70%. Next, I created a backlog for myself and worked my way through it one issue at a time. I'm going to continue with that.

  2. Asynchronous D-Bus methods

    While the ability to make D-Bus method calls asynchronously from clients is very important (you don't want to block the UI on your IPC), it would also be very nice for services to be able to handle method calls asynchronously. Brian Anderson from Mozilla was working on this during the hackfest. His approach was to hack the dbus crate to add an async API through the use of the tokio crate. I spent most of the second day of the hackfest sitting next to Brian for some peer programming. The author of tokio, Alex Crichton, sitting next to us, helped us a lot in understanding the tokio API. In the end, Brian submitted a working proof of concept for client-side async calls, which will hopefully provide a very good basis for David's actual implementation.

  3. Code generation from D-Bus introspection XML

    With both GLib and Qt having provided utilities to generate code for handling D-Bus for a decade now, most projects doing D-Bus make use of this. My intention was to look into this during the hackfest, but just before it, I found out that David had not only already started this work in the dbus crate, but that his approach is exactly what I'd have gone for. So while I decided not to work on this, I did have lengthy (electronic) conversations with David about how to consolidate code generation with dbus-macros.

    Ideally, the API of the generated code should be very similar to the one you'd manually create using dbus-macros, to make it easy for developers to switch from one approach to another. But since David and I didn't agree with the current dbus-macros approach, I kind of gave up on this goal, at least for now. Once procedural macros stabilize, there is a good chance we will change dbus-macros (though it'll be a completely new version or maybe even a different crate) to make use of them, and we can revisit consolidation of code generation and dbus-macros.
A few weeks prior to the event, I decided to create a new project, gps-share. The aim is to provide the ability to share your (standalone) GPS device from your laptop/desktop with other devices on the network, and at the same time add standalone GPS device support to Geoclue (without any new feature code in Geoclue). I decided to write it in Rust for a few reasons, one of them being my desire to learn enough about the language before the event (I hadn't written any serious/complicated code in Rust before), and another being to have an actual test case for my D-Bus adventures (it's supposed to talk to Avahi over D-Bus). I'm glad that I did that, since I encountered a few issues with dbus-macros when using them in gps-share, and the awesome Mozilla folks were able to help me figure them out very quickly. Otherwise it would have taken me a very long time to figure out the issues.

On the last day of hackfest, after a delicious lunch, we decided to go for a long stroll around Mexico city and hang out in the park, where we had more interesting conversations, about life, universe and everything (including Rust and GNOME).

After the hackfest, I stayed around for 3 more days. On Saturday, I mostly hung out with Federico, Christian, Antoni and Joaquín. We walked around the city center and watched Federico and Joaquín get interviewed by the Rancho Electronico folks. I was really excited to see that they use GNOME for their desktop and GStreamer for streaming. The guy handling the streaming was very glad to meet someone with GStreamer experience.

On Sunday, I rented a car and went on a hike at Tepoztlán with Felipe. Driving in Mexico wasn't easy, so having a Mexican with me helped a lot.


And on Monday, we drove to the Sun pyramid.


I would like to thank both GNOME Foundation and my employer, Pelagicore for sponsoring my participation to this event.


April 10, 2017 07:11 PM

GStreamer: GStreamer 1.12.0 release candidate 1 (1.11.90, binaries)

(GStreamer)

Pre-built binary images of the 1.12.0 release candidate 1 of GStreamer are now available for Android, iOS, Mac OS X and Windows (32/64-bit).

The builds are available for download from: Android, iOS, Mac OS X and Windows.

April 10, 2017 02:00 PM

April 07, 2017

GStreamer: GStreamer 1.12.0 release candidate 1 (1.11.90)

(GStreamer)

The GStreamer team is pleased to announce the first release candidate of the stable 1.12 release series. The 1.12 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Full release notes will be provided with the 1.12.0 release, highlighting all the new features, bugfixes, performance optimizations and other important changes.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next few days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

April 07, 2017 02:00 PM

April 06, 2017

Christian Schaller: You get what you pay for so start paying for your media

(Christian Schaller)

Warning! This is not a directly Linux/Tech related blog post (it is not the only thing I care about in this world :).

One thing that has been on my mind for a while is the state of journalism. The quality of journalism seems to have been declining over the last decade, and I think it is clear that the new internet-driven expectation that news content is free for the consumer is a big part of the explanation. We all know that newspapers and TV news teams have seen their staff cut as advertising revenue has not been strong enough to keep staffing up. And in my opinion, advertising is in itself a horrible way to finance something like news and information, as it drives a lot of unwanted behaviour, both in terms of avoiding critical journalism that might drive away advertisers and in an intense drive for ‘clicks’ per news article that often comes at the expense of accuracy and makes news very scandal-driven.

So I have come to believe over the last few years that if we want to see quality journalism and a healthy democracy, we need to move away from free news content and accept that we get what we pay for, and start paying for our news again. This is true for mainstream media, but also for topical media like tech media.

So as a start I began paying for some of my most-used news sites last year. I am now a paying subscriber to The Economist, which is one of the best sources of quality news in my opinion, and on the tech side I am a paying subscriber to Phoronix (I am also a paying subscriber to LWN through work). Anyway, I feel strongly enough about this to write this blog and hope that other people reading it agree with my thinking here and start paying for the content they enjoy, be that through subscriptions or Patreons or similar. And maybe we can be part of a process to change the expectation and understanding of the value of well-funded independent media. Let's help make news something that is made to inform us as readers and not something that is made to help someone sell something to us.

by uraeus at April 06, 2017 05:30 PM

Christian Schaller: Welcoming Ubuntu to GNOME and Wayland

(Christian Schaller)

So as most of you probably know, Mark Shuttleworth just announced that they will be switching to GNOME 3 and Wayland again for Ubuntu. On behalf of the Red Hat Desktop and Fedora teams, I would like to welcome them and say that we look forward to continuing to work with great Canonical and Ubuntu people like Allison Lortie and Robert Ancell on projects of shared interest around GNOME, Wayland and hopefully Flatpak.

It is worth mentioning that even as we have been competing with Unity and Ubuntu, we have also been collaborating with them, most recently by working with them to integrate the features they wanted from GNOME Software, like the user reviews. But of course, now that we share a bigger set of technologies, collaboration will be even easier.

I am personally happy to see this convergence of efforts happening, because I have long felt that the general level of investment in the Linux desktop has not been great enough to justify the plethora of Linux desktops out there. Now that Canonical, Endless, Red Hat and Suse again share one desktop technology stack, with consulting companies like Centricular, CodeThink, Collabora and Igalia helping push parts of the stack forward, we are at least all pulling in the same direction.
This change should also make life easier for ISVs, who now have a clearer target if they want to try to integrate their UI with the Linux desktop, as 'the Linux desktop' becomes a more meaningful term.

And to state the obvious, we will continue our effort around Fedora Workstation, leading on innovation and engineering and setting the direction for where Desktop Linux goes in the future.

by uraeus at April 06, 2017 03:16 PM

April 03, 2017

Christian Schaller: Hacker News feedback on what they want from their Desktop – We got it

(Christian Schaller)

So there is a thread on Hacker News based on a question from a Canonical employee asking for feedback on what people want from the next version of Ubuntu. I always try to read such threads, even when they are not about Fedora or Red Hat. In fact, I often read such articles and threads about non-Linux systems too, to help understand what people are looking for and thus enable us to prioritize what we do with Fedora Workstation even better.
Fedora Workstation

Over the last few years I do feel we managed to nail down what the major pain points are and crossed them out one by one, or got people assigned to work on them. So a lot of the items people asked for in that thread we already have in Fedora Workstation or already have in our roadmap. So I thought it would be nice to write them up and maybe encourage people to take a look at Fedora Workstation if you haven't done so already. The list below is me trying to go through the long thread and pick up the important and recurring topics, so I hope I got most of them, but if I missed something, feel free to add a comment and I will try to answer.

1. Handling of DPI scaling and HiDPI
This has been something we have been working on for quite a while. I think we were the first distribution to implement general HiDPI, and we put a lot of engineering time into updating Wayland and GNOME to make it happen. That said, things are not perfect yet, but we are working on resolving the remaining issues. Jonas Ådahl and Rui Matos are currently trying to resolve the two main ones. The first item is non-integer UI scaling: currently we only offer integer scaling, meaning that we only offer 2x scaling. This is too much, however, so we are working on a solution to offer fractional scaling like 1.5, for instance. We are certain to have that ready for Fedora Workstation 27, but there is a small hope we can finalize it already for Fedora Workstation 26. The other item is dealing with applications relying on XWayland, because they do not support DPI scaling across two or more monitors, unlike native Wayland applications. We are dealing with that in two ways: one is working with upstreams to get their applications Wayland-native, like the work we have been doing with LibreOffice and Firefox. We are also trying to come up with a scaling solution for applications using XWayland, but we haven't been able to come up with one yet.

2. Multitouch gestures like 3-finger swipe to change workspace.
This is another item we have put significant effort into. Over the last few years we made sure that we went from almost no touch support in the desktop to now supporting touch throughout the stack. The one big remaining item that was holding things like proper gestures back is that, until kernel 4.12 is out, the Synaptics touchpads are using PS/2. This causes only 2 touches to be reported, which is not great when you want 3-finger gestures. With the new kernel, we will be using a different bus for those Synaptics touchpads, and we will have proper 5-finger support. Benjamin Tissoires on our team has spent 4 years getting this code upstream, but it's finally here. We plan on backporting this code to Fedora Workstation 26. Of course, application developers will need to make use of the infrastructure in their applications for this feature to be fully realized everywhere.
Multitouch

3. Battery life
This is something we realize is a major issue and it has been on our agenda for a long time. It is a really hard issue to resolve because it is tied into a lot of things outside of our control, like the hardware used and, in some cases, third-party drivers. That said, as many of you might know, we recently set up a laptop team here inside the bigger Red Hat desktop team, and battery life is one of their top priorities. Christian Kellner is our point man on battery life and he has taken over the GNOME Battery bench tool that was originally created by Owen Taylor when we started looking at battery life. He is currently working on improving GNOME Battery bench and talking to hardware vendors to figure out what we can do. We are also actively speaking with NVidia to ensure that we can provide good battery life for hybrid graphics users when the binary NVidia driver is installed. We hope to agree with them on interfaces that should allow us to provide top-notch battery life for such systems, but we are beholden to changes in the binary drivers to make that happen, so it is also an example of the limits of what we can do on our own here.
GNOME Battery bench

4. UEFI issues
There were people on the Hacker News thread talking about issues with UEFI. Once again this is an area where we have a dedicated engineer making sure it works great. In fact Peter Jones, our UEFI point man, sits on the UEFI standards committee, doing ongoing work to ensure the standard is open source friendly and well supported by Linux. It is also worth mentioning that we created the Linux Vendor Firmware Service to make updating UEFI firmware an easy process. So if you see firmware updates offered in GNOME Software for your laptop or other devices, that is because of the work we put into this service. We expect to have most of the major vendors signed up by the end of the year, so if your system is currently not supported that is hopefully a temporary thing. This is something that works well under Fedora and RHEL because someone is dedicated to the effort, and it is another example of us doing the heavy lifting to make things actually happen.
UEFI firmware updates

5. We got Wayland
A lot of people in the thread asked about Wayland support and, well, we got Wayland! This is another area where we have dedicated serious engineering resources and are continuing to do so. For instance, in addition to the multi-DPI work mentioned above, we are working on items such as HDR (high dynamic range) and next-generation hybrid graphics support in Wayland. We are also working with NVidia to ensure their binary driver works well with Wayland.
Wayland Graphics

6. Something like Redshift
Some time ago we picked up on the growing popularity of tools such as Redshift and f.lux and this was another often repeated request in that Hacker News thread. Well we once again invested our resources into this and thus in the newly released GNOME 3.24 there is built in support for this feature, called Night Light. We drove this feature work and it will of course be available alongside GNOME 3.24 in Fedora Workstation 26.
GNOME Night Light

7. Improved GPU driver update
This is another item we have been spending significant time and resources on. We have a team dedicated to working on the Linux graphics stack, which includes people like graphics subsystem kernel maintainer and RADV creator Dave Airlie, Nouveau maintainer Ben Skeggs, core X, Mesa and Wayland developer Adam Jackson, Freedreno creator and maintainer Rob Clark, and more. This team is pushing the Linux graphics stack forward alongside their colleagues at Intel, AMD and NVidia. One thing we have recently been working on, for instance, is dealing with the NVidia binary driver, which has been a pain for a long time due to its file-level conflict with Mesa. We didn’t want to do a workaround or hack, so what we did was work with NVidia on their glvnd proposal to make that a reality. This included supporting glvnd in Mesa in addition to the NVidia driver, but also working with the OS-level tools to ensure fallbacks and autodetection work fine. We already have the basics of glvnd support in Fedora and are polishing it up. Hans de Goede, who took over that work from Adam Jackson, has recently been working with the fine folks at rpmfusion and negativo17 to make sure we have good packages available that take full advantage of his work, thus enabling easy install and upgrade of these drivers. We are also planning to start offering a COPR with the latest and greatest Mesa drivers going forward, to ensure you can always have the latest drivers available if you want to test and try them out.
NVidia driver install

8. Improved printer support
Even in this digital world printing is still important, and thus we have people dedicated to this task too. Marek Kasik is working on ensuring we keep CUPS working well, and Felipe Borges recently wrote a blog entry talking about the redesigned printer control panel. So this is another area we are spending serious resources on and continuously trying to improve.
New Printer panel

9. Improved Bluetooth
Bastien Nocera on our team is probably the single person who has done the most to make sure desktop Bluetooth works at all. We decided to boost that effort by having Christian Kellner work on this too, so he has been writing patches for various Bluetooth-related issues, with Bastien providing guidance and code review. We are also working on some kind of Bluetooth testing harness to allow us to catch regressions more easily and verify support on new hardware. Christian’s current focus is improving the handling of Bluetooth audio.

Summary
If you are contemplating giving Fedora a try, I think the items above illustrate one thing very strongly: for many of these issues we are the primary force behind the solutions. By using Fedora you are not only getting access to them first, with some assurance that the integration work has been done right; you are also supporting the effort of moving these technologies forward and putting yourself in a position to interact more directly with the engineers working on these and a long slew of other important technologies in the desktop and beyond. And our efforts are not just limited to writing code, as shown by our current effort to clear the legal hurdles blocking Linux systems from supporting various media codecs. So if you haven’t already, I strongly suggest you go to the Get Fedora website and grab our convenient Fedora installer or an ISO image. And as I said initially, if you have other pressing items I didn’t cover here, feel free to post a comment and I will be happy to try to answer any questions I get.

by uraeus at April 03, 2017 06:34 PM

March 30, 2017

Sebastian DrögeWriting GStreamer Elements in Rust (Part 4): Logging, COWs and Plugins

(Sebastian Dröge)

This is part 4, the older parts can be found here: part 1, part 2 and part 3

It’s been quite a while since the last update again, so I thought I should write about the biggest changes since last time, even if they’re mostly refactoring. They nonetheless show how Rust is a good match for writing GStreamer plugins.

Apart from actual code changes, the code was also relicensed from the LGPL-2 to a dual MIT-X11/Apache2 license, to make everybody’s life a bit easier with regard to static linking and building new GStreamer plugins on top of this.

I’ll also speak about all this and more at RustFest.EU 2017 in Kiev on the 30th of April, together with Luis.

The next steps after all this will be to finally make the FLV demuxer feature-complete, for which all the base-work is already done now.

Logging

One thing that was missing so far, and which always made debugging problems a bit annoying, was integration with the GStreamer logging infrastructure. Adding println!() everywhere just to remove it again later gets boring after a while.

The GStreamer logging infrastructure is based, like many other solutions, on categories in which you log your messages and levels that describe the importance of the message (error, warning, info, …). Logging can be disabled at compile time up to a specific level, and can also be enabled/disabled at runtime for each category to a specific level; the performance impact of disabled logging should be close to zero. This now has to be mapped somehow to Rust.

During last year’s “24 days of Rust” in December, slog was introduced (see this also for some overview of how slog is used). It seems like the perfect match here, thanks to its ability to implement new “output backends”, called a Drain in slog, and its very low performance impact. So how logging works now is that you create a Drain per GStreamer debug category (which will create the category if needed), and all logging to that Drain goes directly to GStreamer:

// The None parameter is a GStreamer Element, which allows the logging system to
// print the element name and other things on the GStreamer side
// The 0 is for defining a color for the logging in that category
let logger = Logger::root(GstDebugDrain::new(None,
                                             "mycategory",
                                             0,
                                             "Some description"),
                          None);
debug!(logger, "Some output with a number {}", 1);

With lazy_static we can then make sure that the Drain is only created once and can be used from multiple places.
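
A minimal sketch of how that can look, reusing the GstDebugDrain from the snippet above (the crate setup and names here are my own assumptions, not necessarily what the repository does):

#[macro_use]
extern crate slog;
#[macro_use]
extern crate lazy_static;

use slog::Logger;

lazy_static! {
    // The Drain (and with it the GStreamer debug category) is created
    // exactly once, on first use, and then shared by every call site.
    static ref CAT: Logger = Logger::root(
        GstDebugDrain::new(None, "mycategory", 0, "Some description"),
        None);
}

fn handle_buffer() {
    debug!(CAT, "handling buffer {}", 1);
}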

All the implementation for the Drain can be found here, and it’s all rather straightforward plumbing. The interesting part here however is that slog makes sure that the message string and all its formatting arguments (the integer in the above example) are passed down to the Drain without doing any formatting. As such we can skip the whole formatting step if the category is not enabled or its level is too low, which gives us almost zero-cost logging for the cases when it is disabled. And of course slog also allows disabling logging up to a specific level at compile time via cargo’s features feature, making it really zero-cost if disabled at compile time.

Safe & simple Copy-On-Write

In GStreamer, buffers and similar objects inherit from a base class called GstMiniObject. This base class provides infrastructure for reference counting, copying (cloning) of the objects and a dynamic (at runtime, not to be confused with Rust’s Cow type) Copy-On-Write mechanism: writable access requires a reference count of 1, otherwise a copy has to be made. This is very similar to Rust’s Arc, which for a contained type that implements Clone provides the make_mut() and get_mut() functions that work the same way.
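
To see the semantics that GstRc mirrors, here is the same pattern with the standard library’s Arc (plain Rust, no GStreamer involved):

use std::sync::Arc;

fn main() {
    let mut a = Arc::new(vec![1, 2, 3]);

    // Reference count is 1: get_mut() gives direct mutable access.
    Arc::get_mut(&mut a).unwrap().push(4);

    let b = a.clone(); // reference count is now 2

    // get_mut() would return None now; make_mut() instead clones the
    // Vec so that `a` is sole owner of the copy and writable again.
    Arc::make_mut(&mut a).push(5);

    assert_eq!(*b, vec![1, 2, 3, 4]);
    assert_eq!(*a, vec![1, 2, 3, 4, 5]);
}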

Unfortunately we can’t use Arc directly here for wrapping the GStreamer types, as the reference counting is already done inside GStreamer and adding a second layer of reference counting on top is not going to make things work better. So there’s now a GstRc, which provides more or less the same API as Arc and wraps structs that implement the GstMiniObject trait. The latter provides GstRc with functions for getting the raw pointer, swapping the raw pointer and creating new instances from a raw pointer. The actual structs for buffers and other types don’t do any reference counting or other instance handling themselves, and only have unsafe constructors. The general idea here is that they will never exist outside a GstRc, which can then provide you with (mutable or not) references to them.

With all this we now have a way to let Rust do the reference counting for us and enforce the writability rules of the GStreamer API automatically without leaving any chance of doing things wrong. Compared to C where you have to do the reference counting yourself and could accidentally try to modify a non-writable (reference count > 1) object (which would give an assertion), this is a big improvement.

And as a bonus this is all completely without overhead: all that is passed around in the Rust code is (once compiled) the raw C pointer of the objects, and the function calls map directly to the C functions too. Let’s take an example:

// This gives a GstRc<Buffer>
let mut buffer = Buffer::new_from_vec(vec![1, 2, 3, 4]).unwrap();

{ // A new block to keep the &mut Buffer scope (and mut borrow) small
  // This would fail (return None) if the buffer was not writable
  let buffer_ref = buffer.get_mut().unwrap();
  buffer_ref.set_pts(Some(1));
}

// After this the reference count will be 2
let mut buffer_copy = buffer.clone();

{
  // buffer.get_mut() would return None, the below creates a copy
  // of the buffer instead, which makes it writable again
  let buffer_copy_ref = buffer.make_mut().unwrap();
  buffer_copy_ref.set_pts(Some(2));
}

// Access to Buffer functions that only require a &Buffer can
// be done directly thanks to the Deref trait
assert_ne!(buffer.get_pts(), buffer_copy.get_pts());

After reading this code you might ask why DerefMut is not implemented in addition, which would call make_mut() internally if needed and would allow getting around the extra method call. The reason is that make_mut() might do an (expensive!) copy, and with DerefMut that copy could happen implicitly, without any explicit indication in the code that a copy might take place. I would be worried that it could cause non-obvious performance problems.

The last change I’m going to write about today is that the repository was completely re-organized. There is now a base crate and separate plugin crates (e.g. gst-plugin-file). The former is a normal library crate and contains some C code and all the glue between GStreamer and Rust; the latter don’t contain a single line of C code (and no unsafe code either at this point) and compile to standalone GStreamer plugins.

The only tricky bit here was generating the plugin entry point from pure Rust code. GStreamer requires a plugin to export a symbol with a specific name, which provides access to a description struct. As the struct also contains strings, and generating const static strings with ‘\0’ terminator is not too easy, this is still a bit ugly currently. With the upcoming changes in GStreamer 1.14 this will become better, as we can then just export a function that can dynamically allocate the strings and return the struct from there.

All the boilerplate for creating the plugin entry point is hidden by the plugin_define!() macro, which can then be used as follows (and you’ll understand what I mean with ugly ‘\0’ terminated strings then):

plugin_define!(b"rsfile\0",
               b"Rust File Plugin\0",
               plugin_init,
               b"1.0\0",
               b"MIT/X11\0",
               b"rsfile\0",
               b"rsfile\0",
               b"https://github.com/sdroege/rsplugin\0",
               b"2016-12-08\0");

As a side-note, handling multiple crates next to each other is very convenient with the workspace feature of cargo and the “build --all”, “doc --all” and “test --all” commands since 1.16.
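
For illustration, such a workspace is just a top-level Cargo.toml listing the member crates. The names below follow the crates mentioned above, but the actual layout of the repository may differ:

# Top-level Cargo.toml (a virtual manifest); member names are
# illustrative only.
[workspace]
members = [
    "gst-plugin",      # the base crate with the GStreamer/Rust glue
    "gst-plugin-file", # a standalone plugin crate
]

With that in place a single “cargo build --all” compiles the base crate and every plugin crate in one go.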

by slomo at March 30, 2017 12:34 PM

March 22, 2017

Christian SchallerAnother media codec on the way!

(Christian Schaller)

One of the things we are working hard on currently is ensuring you have the codecs you need available in Fedora Workstation. Our main avenue for doing this is looking at the various codecs out there and trying to determine if the intellectual property situation allows us to start shipping all or parts of the technologies involved. This is how we were able to start shipping mp3 playback support in Fedora Workstation 25. In cases where shipping an implementation ourselves is obviously not possible, we have things like the agreement with our friends at Cisco allowing us to offer H264 support using their licensed codec, which is how OpenH264 started being available in Fedora Workstation 24.

As you might imagine clearing a codec for shipping is a slow and labour intensive process with lawyers and engineers spending a lot of time reviewing stuff to figure out what can be shipped when and how. I am hoping to have more announcements like this coming out during the course of the year.

So I am very happy to announce today that we are now working on packaging the codec known as AC3 (also known as A52) for Fedora Workstation 26. The name AC3 might not be very well known to you, but AC3 is part of a set of technologies developed by Dolby and marketed as Dolby Surround. This means that if you have video files with surround sound audio, it is most likely something we can play back with an AC3 decoder. AC3/A52 is also used for surround sound TV broadcasts in the US, and it is the audio format used by some Sony and Panasonic video cameras.

We will be offering AC3 playback in Fedora Workstation 26, and we are looking into options for offering an encoder. To be clear, there is nothing stopping us from offering an encoder apart from finding an implementation that can be packaged and shipped with Fedora with a reasonable amount of effort. The best-known open source implementation we are aware of is the one found in ffmpeg/libav, but extracting a single codec to ship from ffmpeg or libav is a lot of work and not something we currently have the resources to do. We found another implementation called aften, but it seems to have been unmaintained for years; we will look at it to see if it could be used.
So if you are interested in AC3 encoding support, we would love it if someone started working on a standalone AC3 encoder we could ship, be that by picking up maintainership of aften, splitting out AC3 encoding from libav or ffmpeg, or writing something new.

If you want to learn more about AC3 the best place to look is probably the Wikipedia page for Dolby Digital or the a52 ATSC audio standard document for more of a technical deep dive.

by uraeus at March 22, 2017 05:02 PM

March 18, 2017

Jean-François Fortin TamDefence against the Dark Arts involves controlling your hardware

In light of the Vault 7 documents leak (and the rise to power of Lord Voldemort this year), it might make sense to rethink just how paranoid we need to be.  Jarrod Carmichael puts it quite vividly:

I find the general surprise… surprising. After all, this is in line with what Snowden told us years ago, which was already in line with what many computer geeks thought deep down inside for years prior. In the good words of monsieur Crête circa 2013, the CIA (and to an extent the NSA, FBI, etc.) is a spy agency. They are spies. Spying is what they’re supposed to do! 😁

Well, if these agencies are really on to you, you’re already in quite a bit of trouble to begin with. Good luck escaping them, other than living in an embassy or airport for the next decade or so. But that doesn’t mean the repercussions of their technological recklessness—effectively poisoning the whole world’s security well—are not something you should ward against.

It’s not enough to just run FLOSS apps. When you don’t control the underlying OS and hardware, you are inherently compromised. It’s like driving over a minefield with a consumer-grade Hummer while dodging rockets (at least use a hovercraft or something!) and thinking “Well, I’m not driving a Ford Pinto!” (but see this post where Todd Weaver explains the implications much more eloquently—and seriously—than I do).

Considering the political context we now find ourselves in, pushing for privacy and software freedom has never been more relevant, as Karen Sandler pointed out at the end of the year. This is why I’m excited that Purism’s work on coreboot is coming to fruition and that it will be neutralizing the Intel Management Engine on its laptops, because this is finally providing an option for security-concerned people other than running exotic or technologically obsolete hardware.

by Jeff at March 18, 2017 02:00 PM

March 17, 2017

Víctor JáquezGStreamer VAAPI 1.11.x (development branch)

Greetings GstFolks!

Last month the unstable release 1.11.2 of GStreamer hit the streets, and I would like to share with you all a quick heads-up of what we are working on in gstreamer-vaapi, since there are a lot of new stuff:

  1. GstVaapiDisplay inherits from GstObject

    GstVaapiDisplay is a wrapper for VADisplay. Before, it was a custom C structure shared across the pipeline through the GstContext mechanism. Now it is a GObject-based object, which can be queried, introspected and, perhaps later on, exposed in a separate library.

  2. Direct rendering and upload

    Direct rendering and upload are mechanisms based on using vaDeriveImage to upload a raw image into a VASurface, or to download a VASurface into a raw image, which is faster than exporting the VASurface to a VAImage.

    Nonetheless we have found some issues with the direct rendering in new Intel hardware (Skylake and above), and we are still assessing if we keep it as default.

  3. Improve the GstValidate pass rate

    GstValidate provides a battery of tests for the whole of GStreamer. Sadly, using gstreamer-vaapi, the battery didn’t show the same pass rate as without it. Though we still have some issues with vaapisink that might need to be tackled in VA API, the pass rate has increased a lot.

  4. Refactor the GstVaapiVideoMemory

    We have completely refactored the internals of the VAAPI video memory (related to the work done for direct download and upload). We have also added locks when mapping and unmapping, to avoid race conditions.

  5. Support dmabuf sharing with downstream

    gstreamer-vaapi already had support for sharing dmabuf-based buffers with upstream elements (e.g. cameras), but now it is also capable of sharing dmabuf-based buffers with downstream sinks that can import them (e.g. glimagesink under supported EGL).

  6. Support compilation with meson

    Meson is a new build system supported in GStreamer alongside autotools, and now it is supported by gstreamer-vaapi too.

  7. Headless rendering improvements

    There have been a couple of improvements in the DRM backend for vaapisink, for headless environments.

  8. Wayland backend improvements

    There have also been improvements in the Wayland backend for vaapisink and GstVaapiDisplay.

  9. Dynamically reports the supported raw caps

    Now the elements query the VA backend at run-time to know which color formats it supports, so either the source or sink caps are negotiated correctly, avoiding possible error conditions (like negotiating an unsupported color space). This has been done for the encoders, the decoders and the post-processor.

  10. Encoders enhancements

    We have improved the encoders a lot, adding new features such as constant bit rate support for VP8, the handling of stream metadata through tags, and more.

And many, many more changes, improvements and fixes. But there is still a long road to the stable release (1.12) with many pending tasks and bugs to tackle.

Thanks a bunch to Hyunjun Ko, Julien Isorce, Scott D Phillips, Stirling Westrup and everyone else for all their work.

Also, Intel Media and Audio For Linux was accepted in the Google Summer of Code this year! If you are willing to face this challenge, you can browse the list of ideas to work on, not only in gstreamer-vaapi, but also in the driver or other projects surrounding VAAPI.

Finally, do not forget these dates: 20th and 21st of May @ A Coruña (Spain), where the GStreamer Spring Hackfest is going to take place. Sign up!

by vjaquez at March 17, 2017 04:53 PM

March 15, 2017

Andy Wingoguile 2.2 omg!!!

(Andy Wingo)

Oh, good evening my hackfriends! I am just chuffed to share a thing with yall: tomorrow we release Guile 2.2.0. Yaaaay!

I know in these days of version number inflation that this seems like a very incremental, point-release kind of a thing, but it's a big deal to me. This is a project I have been working on since soon after the release of Guile 2.0 some 6 years ago. It wasn't always clear that this project would work, but now it's here, going into production.

In that time I have worked on JavaScriptCore and V8 and SpiderMonkey and so I got a feel for what a state-of-the-art programming language implementation looks like. Also in that time I ate and breathed optimizing compilers, and really hit the wall until finally paging in what Fluet and Weeks were saying so many years ago about continuation-passing style and scope, and eventually came through with a solution that was still CPS: CPS soup. At this point Guile's "middle-end" is, I think, totally respectable. The backend targets a quite good virtual machine.

The virtual machine is still a bytecode interpreter for now; native code is a next step. Oddly my journey here has been precisely opposite, in a way, to An incremental approach to compiler construction; incremental, yes, but starting from the other end. But I am very happy with where things are. Guile remains very portable, bootstrappable from C, and the compiler is in a good shape to take us the rest of the way to register allocation and native code generation, and performance is pretty ok, even better than some natively-compiled Schemes.

For a "scripting" language (what does that mean?), I also think that Guile is breaking nice ground by using ELF as its object file format. Very cute. As this seems to be a "Andy mentions things he's proud of" segment, I was also pleased with how we were able to completely remove the stack size restriction.

high fives all around

As is often the case with these things, I got the idea for removing the stack limit after talking with Sam Tobin-Hochstadt from Racket and the PLT group. I admire Racket and its makers very much and look forward to stealing from (er, working with) them in the future.

Of course the ideas for the contification and closure optimization passes are in debt to Matthew Fluet and Stephen Weeks for the former, and Andy Keep and Kent Dybvig for the latter. The intmap/intset representation of CPS soup itself is highly indebted to the late Phil Bagwell, to Rich Hickey, and to Clojure folk; persistent data structures were an amazing revelation to me.

Guile's virtual machine itself was initially heavily inspired by JavaScriptCore's VM. Thanks to WebKit folks for writing so much about the early days of Squirrelfish! As far as the actual optimizations in the compiler itself, I was inspired a lot by V8's Crankshaft in a weird way -- it was my first touch with fixed-point flow analysis. As most of yall know, I didn't study CS, for better and for worse; for worse, because I didn't know a lot of this stuff, and for better, as I had the joy of learning it as I needed it. Since starting with flow analysis, Carl Offner's Notes on graph algorithms used in optimizing compilers was invaluable. I still open it up from time to time.

While I'm high-fiving, large ups to two amazing support teams: firstly to my colleagues at Igalia for supporting me on this. Almost the whole time I've been at Igalia, I've been working on this, for about a day or two a week. Sometimes at work we get to take advantage of a Guile thing, but Igalia's Guile investment mainly pays out in the sense of keeping me happy, keeping me up to date with language implementation techniques, and attracting talent. At work we have a lot of language implementation people, in JS engines obviously but also in other niches like the networking group, and it helps to be able to transfer hackers from Scheme to these domains.

I put in my own time too, of course; but my time isn't really my own either. My wife Kate has been really supportive and understanding of my not-infrequent impulses to just nerd out and hack a thing. She probably won't read this (though maybe?), but it's important to acknowledge that many of us hackers are only able to do our work because of the support that we get from our families.

a digression on the nature of seeking and knowledge

I am jealous of my colleagues in academia sometimes; of course it must be this way, that we are jealous of each other. Greener grass and all that. But when you go through a doctoral program, you know that you push the boundaries of human knowledge. You know because you are acutely aware of the state of recorded knowledge in your field, and you know that your work expands that record. If you stay in academia, you use your honed skills to continue chipping away at the unknown. The papers that this process reifies have a huge impact on the flow of knowledge in the world. As just one example, I've read all of Dybvig's papers, with delight and pleasure and avarice and jealousy, and learned loads from them. (Incidentally, I am given to understand that all of these are proper academic reactions :)

But in my work on Guile I don't actually know that I've expanded knowledge in any way. I don't actually know that anything I did is new and suspect that nothing is. Maybe CPS soup? There have been some similar publications in the last couple years but you never know. Maybe some of the multicore Concurrent ML stuff I haven't written about yet. Really not sure. I am starting to see papers these days that are similar to what I do and I have the feeling that they have a bit more impact than my work because of their medium, and I wonder if I could be putting my work in a more useful form, or orienting it in a more newness-oriented way.

I also don't know how important new knowledge is. Simply being able to practice language implementation at a state-of-the-art level is a valuable skill in itself, and releasing a quality, stable free-software language implementation is valuable to the world. So it's not like I'm negative on where I'm at, but I do feel wonderful talking with folks at academic conferences and wonder how to pull some more of that into my life.

In the meantime, I feel like (my part of) Guile 2.2 is my master work in a way -- a savepoint in my hack career. It's fine work; see A Virtual Machine for Guile and Continuation-Passing Style for some high level documentation, or many of these bloggies for the nitties and the gritties. OKitties!

getting the goods

It's been a joy over the last two or three years to see the growth of Guix, a packaging system written in Guile and inspired by GNU stow and Nix. The laptop I'm writing this on runs GuixSD, and Guix is up to some 5000 packages at this point.

I've always wondered what the right solution for packaging Guile and Guile modules was. At one point I thought that we would have a Guile-specific packaging system, but one with stow-like characteristics. We had problems with C extensions though: how do you build one? Where do you get the compilers? Where do you get the libraries?

Guix solves this in a comprehensive way. From the four or five bootstrap binaries, Guix can download and build the world from source, for any of its supported architectures. The result is a farm of weirdly-named files in /gnu/store, but the transitive closure of a store item works on any distribution of that architecture.

This state of affairs was clear from the Guix binary installation instructions that just have you extract a tarball over your current distro, regardless of what's there. The process of building this weird tarball was always a bit ad-hoc though, geared to Guix's installation needs.

It turns out that we can use the same strategy to distribute reproducible binaries for any package that Guix includes. So if you download this tarball, and extract it as root in /, then it will extract some paths in /gnu/store and also add a /opt/guile-2.2.0. Run Guile as /opt/guile-2.2.0/bin/guile and you have Guile 2.2, before any of your friends! That pack was made using guix pack -C lzip -S /opt/guile-2.2.0=/ guile-next glibc-utf8-locales, at Guix git revision 80a725726d3b3a62c69c9f80d35a898dcea8ad90.

(If you run that Guile, it will complain about not being able to install the locale. Guix, like Scheme, is generally a statically scoped system; but locales are dynamically scoped. That is to say, you have to set GUIX_LOCPATH=/opt/guile-2.2.0/lib/locale in the environment, for locales to work. See the GUIX_LOCPATH docs for the gnarlies.)

Alternately of course you can install Guix and just guix package -i guile-next. Guix itself will migrate to 2.2 over the next week or so.

Welp, that's all for this evening. I'll be relieved to push the release tag and announcements tomorrow. In the meantime, happy hacking, and yes: this blog is served by Guile 2.2! :)

by Andy Wingo at March 15, 2017 10:56 PM

March 07, 2017

Zeeshan AliGDP meets GSoC

(Zeeshan Ali)
Are you a student? Passionate about Open Source? Want your code to run on the next generation of automobiles? You're in luck! The Genivi Development Platform will be participating in Google Summer of Code this summer, and you are welcome to take part. We have collected a bunch of ideas for what would make a good 3-month project for a student, but you're more than welcome to suggest your own. The ideas page also has instructions on how to get started with GDP.

We look forward to your participation!

March 07, 2017 07:08 PM

March 06, 2017

Andy Wingoit's probably spam

(Andy Wingo)

Greetings, peoples. As you probably know, these words are served to you by Tekuti, a blog engine written in Scheme that uses Git as its database.

Part of the reason I wrote this blog software was that from the time when I was using Wordpress, I actually appreciated the comments that I would get. Sometimes nice folks visit this blog and comment with information that I find really interesting, and I thought it would be a shame if I had to disable those entirely.

But allowing users to add things to your site is tricky. There are all kinds of potential security vulnerabilities. I thought about the ones that were important to me, back in 2008 when I wrote Tekuti, and I thought I did a pretty OK job on preventing XSS and designing-out code execution possibilities. When it came to bogus comments though, things worked well enough for the time. Tekuti uses Git as a log-structured database, and so to delete a comment, you just revert the change that added the comment. I added a little security question ("what's your favorite number?"; any number worked) to prevent wordpress spammers from hitting me, and I was good to go.

Sadly, what was good enough in 2008 isn't good enough in 2017. In 2017 alone, some 2000 bogus comments made it through. So I took comments offline and painstakingly went through and separated the wheat from the chaff while pondering what to do next.

an aside

I really wondered why spammers bothered though. I mean, I added the rel="external nofollow" attribute on links, which should prevent search engines from granting relevancy to the spammer's links, so what gives? Could be that all the advice from the mid-2000s regarding nofollow is bogus. But it was definitely the case that while I was adding the attribute to commenter's home page links, I wasn't adding it to links in the comment. Doh! With this fixed, perhaps I will just have to deal with the spammers I have and not even more spammers in the future.

i digress

I started by simply changing my security question to require a number in a certain range. No dice; bogus comments still got through. I changed the range; could it be the numbers they were using were already in range? Again the bogosity continued undaunted.

So I decided to break down and write a bogus comment filter. Luckily, Git gives me a handy corpus of legit and bogus comments: all the comments that remain live are legit, and all that were ever added but are no longer live are bogus. I wrote a simple tokenizer across the comments, extracted feature counts, and fed that into a naive Bayesian classifier. I finally turned it on this morning; fingers crossed!

My trials at home show that if you train the classifier on half the data set (around 5300 bogus comments and 1900 legit comments) and then run it against the other half, I get about 6% false negatives and 1% false positives. The feature extractor interns sequences of 1, 2, and 3 tokens, and doesn't have a lower limit for number of features extracted -- a feature seen only once in bogus comments and never in legit comments is a fairly strong bogosity signal; as you have to make up the denominator in that case, I set it to indicate that such a feature is 99.9% bogus. A corresponding single feature in the legit set without appearance in the bogus set is 99% legit.

Of course with this strong of a bias towards precise features of the training set, if you run the classifier against its own training set, it produces no false positives and only 0.3% false negatives, some of which were simply reverted duplicate comments.

It wasn't straightforward to get these results out of a Bayesian classifier. The "smoothing" factor that you add to both numerator and denominator was tricky, as I mentioned above. Getting a useful tokenization was tricky. And the final trick was even trickier: limiting the significant-feature count when determining bogosity. I hate to cite Paul Graham but I have to do so here -- choosing the N most significant features in the document made the classification much less sensitive to the varying lengths of legit and bogus comments, and less sensitive to inclusions of verbatim texts from other comments.
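
To make the scheme concrete, the scoring step might look something like the sketch below. This is a hypothetical reconstruction in Rust of what I described above, not Tekuti's actual code (which is Scheme anyway), and all the names are invented:

use std::collections::HashMap;

struct Classifier {
    bogus: HashMap<String, u32>, // feature -> count in bogus comments
    legit: HashMap<String, u32>, // feature -> count in legit comments
    bogus_total: u32,
    legit_total: u32,
}

impl Classifier {
    // Per-feature probability of bogosity, with the hand-picked values
    // for features seen on only one side of the training corpus.
    fn bogosity(&self, feature: &str) -> f64 {
        let b = *self.bogus.get(feature).unwrap_or(&0) as f64;
        let l = *self.legit.get(feature).unwrap_or(&0) as f64;
        if b > 0.0 && l == 0.0 {
            0.999 // only ever seen in bogus comments
        } else if b == 0.0 && l > 0.0 {
            0.01 // only ever seen in legit comments
        } else if b == 0.0 && l == 0.0 {
            0.5 // never seen before: no signal either way
        } else {
            let pb = b / self.bogus_total as f64;
            let pl = l / self.legit_total as f64;
            pb / (pb + pl)
        }
    }

    // Combine only the n most significant features, i.e. those whose
    // probability lies furthest from the neutral 0.5, so that long
    // comments don't drown out the strong signals.
    fn is_bogus(&self, features: &[String], n: usize) -> bool {
        let mut probs: Vec<f64> =
            features.iter().map(|f| self.bogosity(f)).collect();
        probs.sort_by(|a, b| {
            (b - 0.5).abs().partial_cmp(&(a - 0.5).abs()).unwrap()
        });
        let (mut p, mut q) = (1.0, 1.0);
        for x in probs.iter().take(n) {
            p *= x;
            q *= 1.0 - x;
        }
        p / (p + q) > 0.5
    }
}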

We'll see I guess. If your comment gets caught by my filters, let me know -- over email or Twitter I guess, since you might not be able to comment! I hope to be able to keep comments open; I've learned a lot from yall over the years.

by Andy Wingo at March 06, 2017 02:16 PM

March 01, 2017

Christian Schaller2016 in review

(Christian Schaller)

I started writing this blog entry at the end of January, but kept delaying publishing it while waiting for some cool updates we are working on. Today I decided that instead of pushing the 2016 review back further, I should just do this as two separate blog entries. So here is my Fedora Workstation 2016 summary :)

We did two major releases of Fedora Workstation, namely 24 and 25, each taking us a step closer to realising our vision for the future of the Linux Desktop. I am really happy that we finally managed to default to Wayland in Fedora Workstation 25. As Jonathan Corbet of LWN so well put it: “That said, it’s worth pointing out that the move to Wayland is a huge transition; we are moving away from a display manager that has been in place since before Linus Torvalds got his first computer”.
Successfully replacing the X11 system that has been in use since 1987 is no small feat, and we have to remember that many tried over the years and failed. So a big thank you to Kristian Høgsberg for his incredible work getting Wayland off the ground and building consensus around it in the community. I am always full of admiration for those who manage to carry these kinds of efforts from their first line of code to a place where a vibrant and dynamic community can form around them.

And while we for sure have some issues left to resolve, I think the launch of Wayland in Fedora Workstation 25 was so strong that we managed to keep and accelerate the momentum needed to leave the orbit of X11 and have Wayland truly take on a life of its own.
We succeeded not just in forming a community around Wayland, but in getting the existing Linux graphics driver community, the major desktop toolkits and, I believe, the community at large to partake in the move, and we needed all three to join us for this transition to have a chance of succeeding. If this had only been about those of us at Red Hat and in the Fedora community who cared and contributed, it would have gone nowhere; this was truly one of those efforts that pulled together almost everyone in the wider Linux community, and it showcased what is possible when such a wide coalition of people gets together. So while for instance we don’t ship an Enlightenment spin of Fedora (interested parties would be encouraged to do so though), we did value and appreciate the work they were doing around Wayland, simply because the bigger the community, the more development and bug fixing you will see on the shared infrastructure.

A part of the Wayland effort was the new input library Peter Hutterer put out, called libinput. That library allowed us to clean up our input stack and also share the input code between X and Wayland. A lot of credit to Peter and Benjamin Tissoires for their work here, as this transition, like the later Wayland transition, succeeded without causing a huge amount of pain for our users.

And this is also our approach for Flatpak, which for us forms a crucial tandem with Wayland for the future of the Linux desktop: to ensure the project is managed in a way that is open and transparent to all, and allows different groups to adapt it to their specific use cases. So far it is looking good, with early adoption and trials from the IVI community, traditional Linux distributions, device makers like Endless and platforms such as Steam. Each of these is using the technologies, or looking to use them, in slightly different ways, but all are still collaborating on pushing the shared technologies forward.

We managed to make some good steps forward in our effort to drain the swamp of Desktop Linux land (the only unfortunate thing here is a certain Trump deciding to cybersquat on the ‘drain the swamp’ mantra) by adding H264 and mp3 support to Fedora Workstation. And while the H264 support still needs some work to cover more profiles (which we unfortunately did not get to in 2016), we have other codec-related work underway which I think will help move the needle on this even further. The work needed on OpenH264 is not forgotten, but Wim Taymans ended up doing a lot more multimedia plumbing work for our container-based future than originally planned. I am looking forward to sharing more details of where his work is going these days though, as it could bring another group of makers into the world of mainstream desktop Linux when it’s ready.

Another piece of swamp draining happened around the Linux Vendor Firmware Service, which went from strength to strength in 2016. We had new vendors sign up throughout the year, and while not all of those efforts are public, I do expect that by the end of 2017 we will have most major hardware vendors offering firmware through the service. And not only system firmware updates: things like Logitech mice and keyboards will also be available.

Of course the firmware update service also has a client part, and GNOME Software truly became a powerhouse for driving change during 2016, being the catalyst not only for the firmware update service, but also for Linux applications providing good metadata in a standardized manner. The AppStream format required by GNOME Software has become the de-facto standard. And speaking of GNOME Software, the distribution upgrade functionality we added in Fedora 24 and improved in Fedora 25 has become pretty flawless. It is always possible to improve of course, but the biggest problem I heard of was a versioning issue caused by us pushing the mp3 decoding support into Fedora at the very last minute, thus not giving 3rd party repositories a reasonable chance to update their packaging to account for it. Lesson learnt for going forward :)

These are of course just a small subset of the things we accomplished in 2016, but I was really happy to see the great reception Fedora 25 got last year, with a lot of major sites giving it stellar reviews and also making it their distribution of the year. The adoption growth curves we are seeing for Fedora Workstation are a great encouragement for the team and help us validate that we are on the right track in setting our development priorities. My hope for 2017 is that even more of you will decide to join our effort and switch to Fedora, making 2017 the year of Fedora Workstation! On that note, the very positive reception of the Fedora Media Writer, which we introduced as the default download for Fedora Workstation 25, was great to see. Being able to have one simple tool to use regardless of which operating system you come to us from simplifies so much, both in terms of communication on our end and in lowering the threshold of adoption on the end user side.

by uraeus at March 01, 2017 05:47 PM

February 27, 2017

GStreamerGStreamer 1.11.2 unstable release (binaries)

(GStreamer)

Pre-built binary images of the 1.11.2 unstable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

February 27, 2017 08:00 AM