July 03, 2015

Andy Wingo: Pfmatch, a packet filtering language embedded in Lua

(Andy Wingo)

Greets, hackers! I just finished implementing a little embedded language in Lua and wanted to share it with you. First, a bit about the language, then some notes on how it works with Lua to reach the high performance targets of Snabb Switch.

the pfmatch language

Pfmatch is a language designed for filtering, classifying, and dispatching network packets in Lua. Pfmatch is built on the well-known pflang packet filtering language, using the fast pflua compiler for LuaJIT.

Here's an example of a simple pfmatch program that just divides up packets depending on whether they are TCP, UDP, or something else:

match {
   tcp => handle_tcp
   udp => handle_udp
   otherwise => handle_other
}

Unlike pflang filters written for such tools as tcpdump, a pfmatch program can dispatch packets to multiple handlers, potentially destructuring them along the way. In contrast, a pflang filter can only say "yes" or "no" on a packet.

Here's a more complicated example that passes all non-IP traffic, drops all IP traffic that is not going to or coming from certain IP addresses, and calls a handler on the rest of the traffic.

match {
   not ip => forward
   ip src 1.2.3.4 => incoming_ip
   ip dst 5.6.7.8 => outgoing_ip
   otherwise => drop
}

In the example above, the handlers after the arrows (forward, incoming_ip, outgoing_ip, and drop) are Lua functions. The part before the arrow (not ip and so on) is a pflang expression. If the pflang expression matches, its handler will be called with two arguments: the packet data and the length. For example, if the not ip pflang expression is true on the packet, the forward handler will be called.

It's also possible for the handler of an expression to be a sub-match:

match {
   not ip => forward
   ip src 1.2.3.4 => {
      tcp => incoming_tcp(&ip[0], &tcp[0])
   udp => incoming_udp(&ip[0], &udp[0])
      otherwise => incoming_ip(&ip[0])
   }
   ip dst 5.6.7.8 => {
      tcp => outgoing_tcp(&ip[0], &tcp[0])
   udp => outgoing_udp(&ip[0], &udp[0])
      otherwise => outgoing_ip(&ip[0])
   }
   otherwise => drop
}

As you can see, the handlers can also take additional arguments, beyond the implicit packet data and length. In the above example, if not ip doesn't match but ip src 1.2.3.4 does, and then tcp matches, the incoming_tcp function will be called with four arguments: the packet data as a uint8_t* pointer, its length in bytes, the offset of byte 0 of the IP header, and the offset of byte 0 of the TCP header. An argument to a handler can be any arithmetic expression of pflang; in this case &ip[0] is actually an extension. More on that later. For language lawyers, check the syntax and semantics over in our source repo.
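
For instance, a handler for the incoming_tcp case might look something like this. This is just a sketch; the port-reading body is hypothetical:

local handlers = {}

-- data is the packet as a uint8_t* FFI pointer, len its length in bytes;
-- ip_off and tcp_off are the header offsets passed via &ip[0] and &tcp[0].
function handlers.incoming_tcp(data, len, ip_off, tcp_off)
   -- For example, read the TCP destination port (two big-endian bytes).
   local dst_port = data[tcp_off + 2] * 256 + data[tcp_off + 3]
   print("incoming TCP packet, dst port", dst_port)
end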

Thanks especially to my colleague Katerina Barone-Adesi for the long back-and-forths about the language design; they really made it better. Fistbump!

pfmatch and lua

The challenge of designing pfmatch is to gain expressiveness, compared to writing filters by hand, while not endangering the performance targets of Pflua and Snabb Switch. These days Snabb is on target to give ASIC-driven network appliances a run for their money, so anything we come up with cannot sacrifice speed.

In practice what this means is: compile, don't interpret. Using the pflua compiler allows us to generalize the good performance that we have gotten on pflang expressions to a multiple-dispatch scenario. It's a pretty straightforward strategy. Naturally though, the interface with Lua is more complex now, so to understand the performance we should understand the interaction with Lua.

How does one make two languages interoperate, anyway? With pflang it's pretty clear: you compile pflang to a Lua function, and call the Lua function to match on packets. It returns true or false. It's a thin interface. Indeed with pflang and pflua you could just match the clauses in order:

not_ip = pf.compile('not ip')
incoming = pf.compile('ip src 1.2.3.4')
outgoing = pf.compile('ip dst 5.6.7.8')

function handle(packet, len)
   if not_ip(packet, len) then return forward(packet, len)
   elseif incoming(packet, len) then return incoming_ip(packet, len)
   elseif outgoing(packet, len) then return outgoing_ip(packet, len)
   else return drop(packet, len) end
end

But not only is this tedious, you don't get easy access to the packet itself, and you're missing out on opportunities for optimization. For example, if the packet fails the not_ip check, we don't need to check if it's an IP packet in the incoming check. Compiling a pfmatch program takes advantage of pflua's optimizer to produce good code for the match expression as a whole.

If this were Scheme I would make the right-hand side of an arrow be an expression and implement pfmatch as a macro; see Racket's match documentation for an example. In Lua or other languages that's harder to do; you would have to parse Lua, and it's not clear which parts of the production as a whole are the host language (Lua) and which are the embedded language (pfmatch).

Instead, I think embedding host language snippets by function name is a fine solution. It seems fairly clear that incoming_ip, for example, is some kind of function. It's easy to parse identifiers in an embedded language, both for humans and for programs, so that takes away a lot of implementation headache and cognitive overhead.

We are left with a few problems: how to map names to functions, what to do about the return value of match expressions, and how to tie it all together in the host language. Again, if this were Scheme then I'd use macros to embed expressions into the pfmatch term, and their names would be scoped into whatever environment the match term was defined. In Lua, the best way to implement a name/value mapping is with a table. So we have:

local handlers = {
   forward = function(data, len)
      ...
   end,
   drop = function(data, len)
      ...
   end,
   incoming_ip = function(data, len)
      ...
   end,
   outgoing_ip = function(data, len)
      ...
   end
}

Then we will pass the handlers table to the matcher function, and the matcher function will call the handlers by name. LuaJIT will mostly take care of the overhead of the table dispatch. We compile the filter like this:

local match = require('pf.match')

local dispatcher = match.compile([[match {
   not ip => forward
   ip src 1.2.3.4 => incoming_ip
   ip dst 5.6.7.8 => outgoing_ip
   otherwise => drop
}]])

To use it, you just invoke the dispatcher with the handlers, data, and length, and the return value is whatever the handler returns. Here let's assume it's a boolean.

function loop(self)
   local i, o = self.input.input, self.output.output
   while not link.empty(i) do
      local pkt = link.receive(i)
      if dispatcher(handlers, pkt.data, pkt.length) then
         link.transmit(o, pkt)
      end
   end
end

Finally, we're ready for an example of a compiled matcher function. Here's what pflua does with the match expression above:

local cast = require("ffi").cast
return function(self,P,length)
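   -- Note: 8 is the IPv4 ethertype 0x0800 read as a little-endian uint16;
   -- 67305985 is 1.2.3.4 and 134678021 is 5.6.7.8 read as little-endian uint32s.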
   if length < 14 then return self.forward(P, length) end
   if cast("uint16_t*", P+12)[0] ~= 8 then return self.forward(P, length) end
   if length < 34 then return self.drop(P, length) end
   if P[23] ~= 6 then return self.drop(P, length) end
   if cast("uint32_t*", P+26)[0] == 67305985 then return self.incoming_ip(P, length) end
   if cast("uint32_t*", P+30)[0] == 134678021 then return self.outgoing_ip(P, length) end
   return self.drop(P, length)
end

The result is a pretty good dispatcher. There are always things to improve, but it's likely that the function above is better than what you would write by hand, and it will continue to get better as pflua improves.

Getting back to what I mentioned earlier, when we write filtering code by hand, we inevitably end up writing interpreters for some kind of filtering language. Network functions are essentially linguistic in nature: static appliances are no good because network topologies change, and people want solutions that reflect their problems. Usually this means embedding an interpreter for some embedded language, for example BPF bytecode or iptables rules. Using pflua and pfmatch expressions, we can instead compile a filter suited directly for the problem at hand -- and while we're at it, we can forget about worrying about pesky offsets, constants, and bit-shifts.

challenges

I'm optimistic about pfmatch or something like it being a success, but there are some challenges too.

One challenge is that pflang is pretty weird. For example, attempting to access ip[100] will abort a filter immediately on a packet that is less than 100 bytes long, not including L2 encapsulation. It's wonky semantics, and in the context of pfmatch, aborting the entire pfmatch program would obviously be the wrong thing. That would abort too much. Instead it should probably just fail the pflang test in which that packet access appears. To this end, in pfmatch we turn those aborts into local expression match failures. However, this leads to an inconsistency with pflang. For example in (ip[100000] == 0 or (1==1)), instead of ip[100000] causing the whole pflang match to fail, it just causes the local test to fail. This leaves us with 1==1, which passes. We abort too little.

This inconsistency is probably a bug. We want people to be able to test clauses with vanilla pflang expressions, and have the result match the pfmatch behavior. Due to limitations in some of pflua's intermediate languages, it's likely to persist for a while. It is the only inconsistency that I know of, though.

Pflang is also underpowered in many ways. It has terrible IPv6 support; for example, tcp[0] only matches IPv4 packets, and at least as implemented in libpcap, most payload access on IPv6 packets does the wrong thing regarding chained extension headers. There is no facility in the language for binding names to intermediate results, there is no linguistic facility for talking about fragmentation, no ability to address IP source and destination addresses in arithmetic expressions by name, and so on. We can solve these in pflua with extensions to the language, but that introduces incompatibilities with pflang.

You might wonder why to stick with pflang, after all of this. If this is you, Juho Snellman wrote a great article on this topic, just for you: What's wrong with pcap filters.

Pflua's optimizer has mostly helped us, but there have been places where it could be more helpful. When compiling just one expression, you can often end up figuring out which branches are dead-ends, which helps the rest of the optimization to proceed. With more than one successful branch, we had to make a few improvements to the optimizer to actually get decent results. We also had to relax one restriction on the optimizer: usually we only permit transformations that make the code smaller. This way we know we're going in the right direction and will eventually terminate. However because of reasons™ we did decide to allow tail calls to be duplicated, so instead of having just one place in the match function that tail-calls a handler, you can end up with multiple calls. I suspect using a tracing compiler will largely make this moot, as control-flow splits effectively lead to trace duplication anyway, and making sure control-flow joins later doesn't effectively counter that. Still, I suspect that the resulting trace shape will rejoin only at the loop head, instead of in some intermediate point, which is probably OK.

future

With all of these concerns, is pfmatch still a win? Yes, probably! We're going to start using it when building Snabb apps, and will see how it goes. We'll probably end up adding a few more pflang extensions before we're done. If it's something you're into, snabb-devel is the place to try it out, and see you on the bug tracker. Happy packet hacking!

by Andy Wingo at July 03, 2015 11:05 AM

June 26, 2015

GStreamer: GStreamer development release binary builds

(GStreamer)

Pre-built binary images of the 1.5.2 development release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

June 26, 2015 10:30 PM

June 24, 2015

GStreamer: GStreamer Core, Plugins, RTSP Server, Python, Editing Services, Validate 1.5.2 development release

(GStreamer)

The GStreamer team is pleased to announce the second release of the unstable 1.5 release series. The 1.5 release series is adding new features on top of the 1.0, 1.2 and 1.4 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework. The unstable 1.5 release series will lead to the stable 1.6 release series in the next weeks, and newly added API can still change until that point.

Binaries for Android, iOS, Mac OS X and Windows will be provided separately during the unstable 1.5 release series.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, or gst-validate, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, or gst-validate.

Check the release announcement mail for details and the release notes above for a list of changes.

June 24, 2015 11:59 PM

June 19, 2015

GStreamer: GStreamer Conference 2015 Announced

(GStreamer)

The GStreamer project is happy to announce that this year's GStreamer Conference will take place on Thursday/Friday 8-9 October 2015 in Dublin, Ireland.

It will be co-hosted with the Embedded Linux Conference Europe and LinuxCon Europe.

You can (soon) find more details about the conference on the GStreamer Conference 2015 web page.

A call for papers will be sent shortly, and registration will also open soon. We will send another mail to gstreamer-announce for both, and will send updates there as well.

Talk slots will be available in varying durations from 20 minutes up to 45 minutes. Whatever you're doing or planning to do with GStreamer, we'd like to hear from you!

We also plan on having another session with short lightning talks / demos / showcase talks for those who just want to show what they've been working on or do a mini-talk instead of a full-length talk.

There will also be a social event again on Thursday evening.

We hope to see you in Dublin!

June 19, 2015 02:00 PM

June 18, 2015

Andy Wingo: arrow functions coming to chrome 45!

(Andy Wingo)

It's been a long time coming, but I just flipped the bit in V8 that will ship arrow functions in Chrome 45! Woo hoo!

You probably know, but arrow functions are a new way to write functions in JavaScript. They look like this:

// Two arguments, body implicitly returned.
(x, y) => x + y

// With just one argument, no parentheses needed.
x => x * 2

// Body can have braces too; in that case use "return".
x => { return x * 2 }

Relative to the other kind of function that is written like function (x) { return x * 2 }, arrow functions don't define this or arguments in their bodies, instead capturing these values from the environment. There are a couple of other minor differences, too, but instead of writing about them here I'll just point to the great article by Jason Orendorff of the SpiderMonkey team.
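
A quick sketch of what the lexical capture of this means in practice (using setInterval just for illustration):

// The arrow function's body sees the Timer instance as `this`, because
// arrow functions capture `this` from the enclosing scope.
function Timer() {
  this.seconds = 0;
  setInterval(() => { this.seconds++; }, 1000);
  // With a plain `function () { this.seconds++; }` callback, `this`
  // would instead be the global object (sloppy mode) or undefined
  // (strict mode), not the Timer.
}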

Arrow functions are part of the JavaScript language standard that was called "ECMAScript 6" or ES6, and I guess you could still call it that. It seems like a silly thing for the committee to do, to throw away all their branding like that, but they decided to rename it ECMAScript 2015, which I'm sure is a link that the pedants are glad I have included. The upshot is that the standard is now final, gold master, etched in stone, which from an implementor's perspective is a relief. You can practically feel the anxiety ebbing away by the happy rate at which commits bubble out of source repositories and into shipping browsers, free from the fear that some spec change will force the hack-stream to change course.

From the V8 side, our arrow function implementation has also been a long time coming. My colleague Adrián Pérez did the first half of the work, and I picked up on the back end of things. It seems like such a small feature and in many ways it is, but still it took a long time. Now I know that my readers are a bunch of nerds and many of you like implementing languages, so you might appreciate these nargish points.

One of the first bits is that arrow functions are hard to parse. Consider: this is a valid JavaScript expression:

(x,y)

It's a "comma expression" that will evaluate x then y and its result will be the result of evaluating y. But add an arrow on after the end and you get not an expression but a formal parameter list:

(x,y)=>x+y

Now you might think, well OK, when you see an arrow, rewind the input stream and parse in "arrow function mode". Indeed that would be fine, but not in combination with some additional ES6 features, optional and destructuring arguments. Optional arguments look like this:

(x=42)=>x

The =42 part is the expression that will be evaluated to give x a value, if the function is called with no arguments. Note that this bit is still under implementation in V8 so you can't try it in your browser. An optional argument initializer is an expression and not a value, so you can also have:

(x=(x)=>42)=>x

Combined, this makes rewinding the token stream a proposition of exponential complexity, which is a no-go for a production JavaScript parser. Parsers are on the hot path for page-load times and no browser vendor wants to introduce a pathological case into their page load.

Instead, V8 does something I hadn't seen before. It keeps an open mind about whether something is a comma expression or a formal parameter list of an arrow function, and only makes a decision when it sees the => (or not). As it parses, V8 records places that it would signal an error for either a parameter list or for an expression, and then when that superimposed wave function collapses it checks that the production is valid, signalling the appropriate error if not. I thought this was a really neat trick, so if you're into that sort of thing, see the expression classifier for the details.

The other thing that's tricky about arrow functions is the this binding. In JavaScript, this is basically a hidden parameter passed to a function when it is called. Calling a function like o.f() passes the value of o to f as its this parameter. If instead f() is called directly, like with no dot before the call, then undefined is passed as this. Also for sloppy-mode functions, if the passed this value isn't an object, then the global object instead is assigned to this. Finally outside a function, this is bound to the global object.
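
In code, assuming strict mode so the global-object fallback doesn't apply:

"use strict";
var o = {
  f: function () { return this; }
};
console.log(o.f() === o);        // true: o is passed as the hidden this parameter
var g = o.f;
console.log(g() === undefined);  // true: no receiver, so this is undefined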

OK, I know all of you know these things. Thing is, you always have a this, and although it's like a variable it's not a valid variable name, and before ES6 nothing could capture its value, because each function has its own this value. Perhaps you see where I'm going with this (ahem) now. Arrow functions introduce a function scope that doesn't have a this value, and that indeed might capture some other scope's this value, forcing it to be context-allocated. Other parts of ES6 can actually force assignment to this, like a super call, and that assignment can actually come from within an arrow function. Zounds! A simple concept, but there was a lot of incidental complexity in V8 around the implementation. Between Adrián and myself it took like three months to fix this usage in V8 to always just go through the (possibly context-allocated) variable, and there are still probably some devtools bugs to find in the upcoming weeks.

Performance-wise, arrow functions are just like functions. They should be just as fast as if you wrote them with function. So use them with joy, use them with abandon, use them judiciously -- however you decide you use them, don't let perf influence your decision one way or the other.

That's about it! Like all of my JS engine work over the past couple years, this hacking was sponsored by fabulous folks over at Bloomberg, so big ups to them. From me and Adrián at Igalia, until next time! We leave you to puzzle out what this bit of JavaScript evaluates to:

(({},{},({},{})=>({},{}))=>(({},{})=>({},{}),{},{}))({},{})

Happy hacking!

by Andy Wingo at June 18, 2015 04:41 PM

June 11, 2015

Jean-François Fortin Tam: The War Against Deadlocks, part 1: The story of our new thread-safe mixing elements implementation

Let me tell you of a story that was lost and forgotten amidst Pitivi’s development battlegrounds last fall, a manuscript that I recovered from a Moldy Tome in a stony field. According to my historical data, the original author was a certain “Dorian Leger”, a French messenger that went missing from the vicinity of Paris.

Moldy Tome

The Moldy Tome as I found it

I am taking the liberty of altering this manuscript fairly substantially to clarify some parts while restoring its intent and style according to the historical context. It will serve as the first part of an epic tale (the second part is yet to be written; it will come in the next blog post, though it will probably have a more “modern” writing style) about our war against deadlocks, vile creatures that have been threatening the stability of our application for much too long. Technically, we’ve always been at war with Eastasia… er, Deadlocks; you can see that even in the noble title of our 0.13.2 release, from a time when a different squad of maintainers roamed this land.

Previous maintainers fighting the Fomors

Previous maintainers fighting the Fomors, in the 0.8-0.10 GSt Era.

Without further ado, here is my transcription of the report:

Paris, le vingt-huit septembre, MMXIV

Dear supporters of the Video Editing Liberation Front,

Over the last month and a half, we have made major strides debugging and rewriting important backend code that Pitivi depends on. At the edge of the land of Pitivi, we are approaching the 0.94 milestone, which we plan on liberating in the coming weeks. I have been discussing with sieur Duponchelle to enquire about a particular piece of work the Company has been preparing for that purpose. He said, “We have torn out a large chunk of bug-ridden code in GStreamer and replaced it with a brand new videomixing element that we can finally show with pride and confidence. It will be a tremendous help in our battle against the Deadlocks; hopefully, it will give us stable and bug-free seeking in the timeline at last.”

Indeed, I have heard tales of previous Pitivi versions consistently crashing when seeking in a section of cross-faded (overlapped) clips. In other words, when we tried to select a frame that contained a cross-fade from one clip to another, Pitivi would freeze up and need to be put to the sword. Needless to say, this bug was killing not only the user experience, but also the morale of our troops, and needed to be dealt with as swiftly and efficiently as possible.

The technical problem behind this nuisance was a powerful piece of equipment in the GStreamer artillery: the GstElement videomixer. This contraption was trying to deal with threads other elements were throwing at it, which was by design extremely complex and error-prone—to the point where some have said it to be the work of the Devil itself.

Ming Dynasty eruptor proto-cannon

When we inspected the machine, we found the diagram above. Transcription of the odd scriptures in that diagram leads to the following interpretation of how it operated:

“To make this machine worketh, thou wilt receive buffers from all them sinkpads in different threads. Therefore, thou wilt wait for all thy pads to get a buffer to decide to mix & push the result on thy srcpad; hence thou shall be pushing buffers from the thread on which thee hath received thy last buffer. Eke, make sure not to stand in front of the machine when operating it.” — Dante, son of Sparda

Multithreading, if you recall your scholarship with the monks of Shaolin, is a difficult art to master. It allows running multimedia processing tasks in the background and enables several tasks to be executed simultaneously. A multi-threaded approach is essential to us, but it also requires tedious management of the variables shared by different threads (these variables usually describe audio data and video pixels, in the case of the GstElement videomixer machine). As simultaneous threads often work on the same variables, the backend developer, proficient in the ancient C language, needs to ensure these threads do not simultaneously edit a variable, and must carefully manage how threads give each other the signal to edit a variable.

shaolin tiger style

In the case of the strange machine that was causing those problems, we destroyed it with fire and rebuilt it with simplicity and harmony in mind. I cannot tell for sure, but I have been told that over ten thousand lines of ancient code were rewritten through the exquisite art of multithreading kung fu. The new videomixer machine now has the srcpad running its own thread, and we aggregate and push buffers on the srcpad from that thread. This technique makes us much stronger against the Deadlocks.

As you have certainly seen for yourselves, previous Pitivi versions—particularly due to the mixing elements in GStreamer—were plagued with bugs causing threads to wait for each other indefinitely. To make this easier to imagine, let’s take a modern analogy: the previous videomixer implementation looked like a city full of cars at stop-sign intersections, waiting for each other to go, causing endless traffic jams behind them. The good news is, after rewriting over 10,000 lines of code, the stop signs were replaced by a much simpler and more reliable system in 0.94, which means our videomixing element is now thread-friendly and ‘bug-free’. This required a complete rework of our mixing stack (by writing a new base class to replace collectpads2). It was quite an involved process.

We’re quite happy with what we have achieved there, but the Deadlocks are not so easily vanquished, and the story doesn’t end there. The rest of the manuscript is fairly short and consisted mostly of predictions for events that have now occurred since then, which I will be covering in the next blog post when I find more time, as it requires further analysis and expansion.


Thank you for reading, commenting and sharing! This blog post is part of a series of articles tracking progress made with work related to the 2014 Pitivi fundraiser. Researching and writing quality articles takes a lot of time, so please be patient and enjoy the ride! (◠‿◠)
  1. An update from the 2014 summer battlefront
  2. The 0.94 release
  3. The War Against Deadlocks, part 1: The story of our new thread-safe mixing elements reimplementation
  4. The War Against Deadlocks, part 2: GNonLin is dead, long live NLE
  5. The GTK+ timeline and sink
  6. The 0.95 release
  7. Measuring quality/reliability through time (clarifying what gst-validate is)
  8. Our all-in-one binaries building infrastructure, and why it matters
  9. Samples, “scenario” files and you: how you can help us reproduce (almost) any bug very easily
  10. The 1.0 release and closure of the fundraiser

by nekohayo at June 11, 2015 11:54 PM

June 07, 2015

GStreamer: GStreamer Core and Plugins 1.5.1 development release

(GStreamer)

The GStreamer team is pleased to announce the first release of the unstable 1.5 release series. The 1.5 release series is adding new features on top of the 1.0, 1.2 and 1.4 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework. The unstable 1.5 release series will lead to the stable 1.6 release series in the next weeks, and newly added API can still change until that point.

Binaries for Android, iOS, Mac OS X and Windows will be provided separately during the unstable 1.5 release series.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server.

Check the release announcement mail for details and the release notes above for a list of changes.

Also available are binaries for Android, iOS, Mac OS X and Windows.

June 07, 2015 12:00 PM

May 31, 2015

Sebastian Pölsterl: Using IPython for parallel computing on an MPI cluster using SLURM

I assume that you are already familiar with using IPython.parallel; otherwise, have a look at the documentation. Just one point of caution: if you move code that you want to run in parallel into a package or module, you might stumble upon the problem that Python cannot find the function you imported. The problem is that IPython.parallel only looks in the global namespace. The solution is using @interactive from IPython.parallel, as described in this Stack Overflow post.
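
A minimal sketch of that workaround; the module and function names here are made up:

# mymodule.py
from IPython.parallel import interactive

@interactive
def double(x):
    # @interactive makes the function run in the engines' global
    # (interactive) namespace, so IPython.parallel can locate it
    # even though it is defined inside a module.
    return 2 * x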

Although IPython.parallel comes with built-in support for distributing work using MPI, it wants to create the MPI tasks itself. However, compute clusters usually come with a job scheduler like SLURM that manages all resources. Consequently, the scheduler determines how many resources you have access to. In the case of SLURM, you have to define the number of tasks you want to process in parallel and the maximum time your job will require when adding your job to the queue.

#!/bin/bash
#SBATCH -J ipython-parallel-test
#SBATCH --ntasks=112
#SBATCH --time=00:10:00

The above script gives the job a name, requests resources for 112 tasks, and sets the maximum required time to 10 minutes.

Normally, you would use ipcluster start -n 112, but since we are not allowed to create MPI tasks ourselves, we have to start the individual pieces manually via ipcontroller and ipengine. The controller provides a single point of contact that the engines connect to; the engines take commands and execute them.

profile=job_${SLURM_JOB_ID}_$(hostname)
 
echo "Creating profile ${profile}"
ipython profile create ${profile}
 
echo "Launching controller"
ipcontroller --ip="*" --profile=${profile} --log-to-file &
sleep 10
 
echo "Launching engines"
srun ipengine --profile=${profile} --location=$(hostname) --log-to-file &
sleep 45

First of all, we create a new IPython profile, which will contain log files and temporary files that are necessary to establish the communication between controller and engines. To avoid clashes with other jobs, the name of the profile contains the job's ID and the hostname of the machine it is executed on. This will create a folder named profile_job_XYZ_hostname in the ~/.ipython folder.

Next, we start the controller and instruct it to listen on all available interfaces, use the newly created profile, and write output to log files residing in the profile's directory. Note that this command is executed only on a single node, thus we only have a single controller per job.

Then we create the engines, one for each task, and instruct them to connect to the correct controller. Explicitly specifying the location of the controller is necessary if engines are spread across multiple physical machines and machines have multiple Ethernet interfaces. Without this option, engines running on other machines are likely to fail to connect to the controller because they might look for it at the wrong location (usually localhost). You can easily find out whether this is the case by looking at the ipengine log files in the ~/.ipython/profile_job_XYZ_hostname/log directory.

Finally, we can start our Python script that uses IPython.parallel to distribute work across multiple nodes.

echo "Launching job"
python ipython-parallel-test.py --profile ${profile}

To make things more interesting, I created an actual script that approximates the number Pi in parallel.

import argparse
from IPython.parallel import Client
import numpy as np
import os
 
 
def compute_pi(n_samples):
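        # Monte Carlo estimate: the fraction of random points in the unit
        # square that fall inside the quarter circle approaches pi/4.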
        s = 0
        for i in range(n_samples):
                x = random()
                y = random()
                if x * x + y * y <= 1:
                        s += 1
        return 4. * s / n_samples
 
 
def main(profile):
        rc = Client(profile=profile)
        views = rc[:]
        with views.sync_imports():
                from random import random
        
        results = views.apply_sync(compute_pi, int(1e9))
        my_pi = np.sum(results) / len(results)
        filename = "result-job-{0}.txt".format(os.environ["SLURM_JOB_ID"])
        with open(filename, "w") as fp:
                fp.write("%.20f\n" % my_pi)
 
 
if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument("-p", "--profile", required=True,
                help="Name of IPython profile to use")
 
        args = parser.parse_args()
 
        main(args.profile)

I ran this on the Linux cluster of the Leibniz Supercomputing Centre. Using 112 tasks, it took a little more than 8 minutes, and the result I got was 3.14159637160714266813. You can download the SLURM script and Python code from above here.

by sebp at May 31, 2015 01:28 PM

May 22, 2015

Bastien Nocera: iio-sensor-proxy 1.0 is out!

(Bastien Nocera) Modern (and some less modern) laptops and tablets have a lot of builtin sensors: accelerometer for screen positioning, ambient light sensors to adjust the screen brightness, compass for navigation, proximity sensors to turn off the screen when next to your ear, etc.

Enabling

We've supported accelerometers in GNOME/Linux for a number of years, following work on the WeTab. The accelerometer appeared as an input device, and sent kernel events when the orientation of the screen changed.

Recent devices, especially Windows 8 compatible devices, instead export a HID device, which, under Linux, is handled through the IIO subsystem. So the first version of iio-sensor-proxy took readings from the IIO sub-system and emulated the WeTab's accelerometer: a few too many levels of indirection.

The 1.0 version of the daemon implements a D-Bus interface, which means we can support more than accelerometers. The D-Bus API, this time, is modelled after the Android and iOS APIs.

Enjoying

Accelerometers will work in GNOME 3.18 as well as they used to, once a few fixes have been merged[1]. If you need support for older versions of GNOME, you can try using version 0.1 of the proxy.


Orientation lock in action


As we've added ambient light sensor support in the 1.0 release, it's time to put into practice the best practices mentioned in Owen's post about battery usage. We already had code like that in gnome-power-manager nearly 10 years ago, but it really didn't work very well.

The major problem at the time was that ambient light sensor readings weren't in any particular unit (values had different meanings for different vendors) and the user felt that they were fighting against the computer for control of the backlight.

Richard fixed that, though, adapting work he did on the ColorHug ALS sensor: the brightness is now completely in the user's control and adapts to the user's tastes. This means that we can implement the simplest of UIs for its configuration.

Power saving in action

This will be available in the upcoming GNOME 3.17.2 development release.

Looking ahead

For future versions, we'll want to export the raw accelerometer readings so that applications, including games, can make use of them, which might bring up security issues. SDL, Firefox, and WebKit could all do with being adapted in the near future.

We're also looking at adding compass support (thanks Elad!), which Geoclue will then export to applications, so that location and heading data is collected through a single API.

Richard and Benjamin Tissoires, of fixing input devices fame, are currently working on making the ColorHug-ALS compatible with Windows 8, meaning it would work out of the box with iio-sensor-proxy.

Links

We're currently using GitHub for bug and code tracking. Releases are mirrored on freedesktop.org, as GitHub is known to mangle filenames. API documentation is available on developer.gnome.org.

[1]: gnome-settings-daemon, gnome-shell, and systemd will need patches

by Bastien Nocera (noreply@blogger.com) at May 22, 2015 06:31 PM

May 20, 2015

Arun Raghavan: GNOME Asia 2015

I was in Depok, Indonesia last week to speak at GNOME Asia 2015. It was a great experience — the organisers did a fantastic job and as a bonus, the venue was incredibly pretty!

View from our room

View from our room

My talk was about the GNOME audio stack, and my original intention was to talk a bit about the APIs, how to use them, and how to choose which to use. After the first day, though, I felt like a more high-level view of the pieces would be more useful to the audience, so I adjusted the focus a bit. My slides are up here.

Nirbheek and I then spent a couple of days going down to Yogyakarta to cycle around, visit some temples, and sip some fine hipster coffee.

All in all, it was a week well spent. I’d like to thank the GNOME Foundation for helping me get to the conference!

Sponsored by GNOME!

Sponsored by GNOME!

by Arun at May 20, 2015 08:08 AM

May 14, 2015

Sebastian Dröge: PTP network clock support in GStreamer

(Sebastian Dröge)

In the last days I was working at Centricular on adding PTP clock support to GStreamer. This is now mostly done, and the results of this work are public but not yet merged into the GStreamer code base. This will need some further testing and code review, see the related bug report here.

You can find the current version of the code here in my freedesktop.org GIT repository. See at the very bottom for some further hints at how you can run it.

So what does that mean, how does it relate to GStreamer?

Precision Time Protocol

PTP is the Precision Time Protocol, a network protocol standardized by the IEEE (IEEE 1588:2008) to synchronize the clocks between different devices in a network. It’s similar to the better-known Network Time Protocol (NTP, IETF RFC 5905), which is probably used by millions of computers out there to automatically set the local clock. Unlike NTP, PTP promises to give much more accurate results, down to microsecond (or even nanosecond, with PTP-aware network hardware) precision inside appropriate networks. PTP is part of a few broadcasting and professional media standards, like AES67, RAVENNA, AVB, SMPTE ST 2059-2 and others, for inter-device synchronization.

PTP comes in 3 different versions, the old PTPv1 (IEEE1588-2002), PTPv2 (IEEE1588-2008) and IEEE 802.1AS-2011. I’ve implemented PTPv2 via UDPv4 for now, but this work can be extended to other variants later.

GStreamer network synchronization support

So what does that mean for GStreamer? We are now able to synchronize to a PTP clock in the network, which allows multiple devices to accurately synchronize media to the same clock. This is useful in all scenarios where you want to play the same media on different devices, and want them all to be completely synchronized. You can probably imagine quite a few use cases for this yourself now, especially in the context of the “Internet of Things” but also for more normal things like video walls or just having multiple screens display the same thing in the same room.

This was already possible previously with the GStreamer network clock, but that clock implements a custom protocol that only other GStreamer applications can understand currently. See for example here, here or here. With the PTP clock we now get another network clock that speaks a standardized protocol and can interoperate with other software and hardware.

Performance, WiFi and other unreliable networks

When running the code, you will probably notice that PTP works very well in controlled and reliable networks (2-20 microseconds accuracy is what I got here). But it’s not that accurate in wireless networks or in generally unreliable networks. It seems like in those networks the custom GStreamer network clock protocol currently works more reliably, partially by design.

Future

As a next step, at Centricular we’re going to look at implementing support for RFC 7273 in GStreamer, which allows signalling media clocks for RTP. This is part of e.g. AES67 and RAVENNA and would allow multiple RTP receivers to be perfectly synchronized against a PTP clock without any further configuration. And just for completeness, we’re probably going to release an NTP-based GStreamer clock in the near future too.

Running the code

If you want to test my code, you can run it for example against PTPd. And if you want to test the accuracy of the clock, you can measure it with the ptp-clock-reflector (or here, instructions in the README) that I wrote for testing. The latter allows you to measure the accuracy, and in a local wired network I got around 2-20 microseconds accuracy. A GStreamer example application can be found here, which just prints the local and remote PTP clock times. Other than that you can use it just like any other clock on any GStreamer pipeline you can imagine.
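
For the curious, here is a rough sketch of what using the clock from application code might look like. It assumes the API from this work (gst_ptp_init and gst_ptp_clock_new); names and signatures could still change before the code is merged:

#include <gst/gst.h>
#include <gst/net/gstptpclock.h>

int
main (int argc, char **argv)
{
  GstClock *clock;

  gst_init (&argc, &argv);

  /* Initialize the PTP subsystem, listening on all network interfaces */
  if (!gst_ptp_init (GST_PTP_CLOCK_ID_NONE, NULL))
    return 1;

  /* Create a clock that follows PTP domain 0 and wait until it has
   * synchronized to the network */
  clock = gst_ptp_clock_new ("ptp-clock", 0);
  gst_clock_wait_for_sync (clock, GST_CLOCK_TIME_NONE);

  g_print ("PTP time: %" GST_TIME_FORMAT "\n",
      GST_TIME_ARGS (gst_clock_get_time (clock)));

  gst_object_unref (clock);
  return 0;
}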

by slomo at May 14, 2015 05:44 PM

May 07, 2015

Jean-François Fortin Tam: President’s Report — The State of the GNOME Foundation

As I hinted in my retrospective in February, 2014 has been crazy busy on a personal level. Let’s now take a look at 2014-2015 from a GNOME perspective.

When I offered my candidacy for the GNOME Foundation‘s Board of Directors in May last year, I knew that there would be plenty of issues to tackle if elected. As I was elected president afterwards, I was aware that I was getting into a demanding role that would not only test my resolve but also make use of my ability to set a clear direction and keep us moving forward through tough times. But even if someone tries to describe what’s involved in all this, it remains difficult to truly grasp the amount of work involved before you’ve experienced it yourself.

For one thing, I can say that running a branding & management consulting business at the same time as you’re steering an established public charity like the GNOME Foundation is definitely not easy.

2015-03-cal-blurred

Pictured: my calendar during the month of March

Throughout the year, I went through moments of great joy and periods of deep exhaustion where I cursed Firefox’s bug 60455 — working everyday until 1-2 AM (and waking up 5-6 hours later), for months on end, to get things done. Since 2015, my GTG todo list has consistently been at 4x my normal “healthy” quota. For example, in March, I was at 190 actionable tasks and a total of 520 tasks. Whew! So, in the name of sanity, I had to slow down some of my business activities and withdrew almost all of my involvement in the Pitivi project this term (I’ll be writing a news update blog post soon, I promise!).

I did not compromise on my involvement with the GNOME Foundation because I felt a huge responsibility towards my teammates and towards the Foundation Membership who elected us. Most of the board members, in addition to their daily work, underwent significant personal challenges during the year: relocating, career changes, family matters, all sorts of things that can affect one’s life. And yet, with the limited bandwidth we had, the Board soldiered on and accomplished many feats. I consider myself lucky to have had such a competent and deeply caring team of people to work through one of the busiest years GNOME has had yet!

2014-2015 GNOME Foundation board

What also keeps me motivated is the incredible strength of our community, the technical excellence of our platform and the fundamental need for a GNOME “desktop” (or GNOME OS) to exist. More than ever, we need Free and affordable computing for everyone. If proprietary vendors, DRM, the industry shift towards “renting” (rather than owning) software and the Snowden revelations taught us anything over the years, it’s that we need to be the truly free system that people can trust for all their computing needs, online and offline. Many have their heads in the clouds, but we need to keep our feet on the ground and be the bridge between the sky and earth—the safe base where people will come back to.

The Space Elevator, by Dusty Crosley

The Space Elevator, by Dusty Crosley

For that reason, I’m pretty excited by our friends at Endless who are shipping a radically different desktop computer running GNOME and a set of applications that will run offline, designed to make the lives of millions (billions!) of people easier in the developing world. I’m proud of our little cousin, elementary, for shipping a new version of their OS—even as an established project with lots of momentum, we can still learn a lot from what they’re doing, and we certainly appreciate their involvement in our shared technologies. Fedora Workstation, with its refined focus, is something else I’m pretty happy about. With sandboxing, OSTree and Builder in the works, I’m looking forward to GNOME OS becoming a reality. We need something rock solid and for which we can sculpt the user experience from the ground up, something which also serves as a reference and entrypoint for new contributors willing to create applications for the exciting GNOME ecosystem.

We’ve made major strides towards creating a stable and refined platform over the past few years. We have our work cut out for us in a number of areas and I look forward to us tackling them as a community. For example, one thing I’m passionate about is having a “bulletproof” OS that can handle the most demanding creative workloads, without the user needing to worry about the system’s resource usage. I should be able to have Firefox (or Web/Epiphany) running at the same time as GIMP, Inkscape and Pitivi without an exabyte of RAM or having the kernel/graphics subsystem go unresponsive due to one application hoarding resources. I know we can do better in this space. With our unparalleled ability to oversee changes through the whole stack and upcoming technologies like containers & sandboxing, we have the potential to be the most advanced OS in the world—we just need to seize this opportunity.

There are also new fields of computing that we are poised to explore as a free desktop: virtual reality—bringing a new meaning to the term “virtual desktops”—is certainly the next big step in “office computing” (including productivity and creative work, entertainment, etc.—not just gaming). We should investigate VR as the next big evolution of the desktop. Imagine getting rid of the limitations imposed by computer multi-head monitor frames…

ghost in the shell VR

We should tackle these things one step at a time, together. It takes many small efforts to steer a ship this big, and the Foundation is there to support the community every step of the way.

Here is a snapshot of what the Foundation’s Board of Directors were up to this year:

  • Dealing with over 3700 emails
  • Held 25+ regular board meetings, on the phone or in person
    • In addition to those, we held a few “special meetings” for topics like adboard outreach and ED search to drain the swamp. Therefore, in practice, we have been meeting more often than the already fast-paced bi-weekly meetings schedule.
  • Exchanged over 24,000 lines of IRC discussion within the board
  • Resolved the cash-flow problem (a.k.a. financial/success crisis) that occurred in the spring of 2014. We collected on every single outstanding invoice for OPW and will be announcing more about this soon.
  • Dealt with two very serious complaints brought to our attention — one of them is not fully resolved yet, but we’re working on it.
  • Represented GNOME at various conferences (GUADEC, SFD, GSoC Mentors Summit, OSCON, FOSDEM, LGM, GNOME.Asia, LinuxCon North America, LinuxCon Europe, linux.conf.au and probably a bunch of others I’m forgetting).
  • Negotiated with Groupon for six months before the trademark opposition filing deadline. As we reached the deadline and could wait no longer, we prepared and launched a public fundraiser and awareness campaign. This initiative worked above all expectations, with more than 100K USD raised within a day and Groupon immediately capitulating upon seeing the incredible public support we were able to muster. We got coverage in a number of media outlets, including the World Trademark Review magazine. We hope that our experience stands for the proposition that companies must respect free software communities, and we’re already seeing our situation held up as an example to support that.
  • Reached out to current (and past) advisory board members on the phone or in person.
  • Sought new sponsorship opportunities
  • Set up a kanban system to keep track of the “big picture” of long-running projects and action items.
  • Started the hunt for an Executive Director, including forming a search & hiring committee
  • Reviewed & approved budgets and reimbursements for various events
  • Reviewed & approved various trademark use requests
  • Started work on prospecting new sources of funding for the GNOME sysadmin role
  • Provided advice on fundraising for the Telder font
  • Signed two legal agreements with the Software Freedom Conservancy for the transfer of Outreachy — more news on this later
  • Administered OPW (including legal and financial aspects), until the migration to Outreachy under the SFC was completed
  • Worked on various aspects of codes of conduct
  • Initiated work on a Privacy policy
  • Provided support for GNOME conferences, including GUADEC, GNOME.Asia, the Boston Summit and the West Coast Summit
  • Signed a deal with the WHS for handling funds in Europe — more news on this later
  • Various ongoing financial and legal tasks
  • Transferred the ownership of various assets (including domain names such as gnome.gr)
  • Responded to various press or events organization inquiries, phone calls, etc.
  • Apologized to people for not moving fast enough on some matters ;)

I can tell you, like anyone who has worked on a board of directors without an Executive Director for the entire term, that I have developed a tremendous amount of respect and patience towards the work done by each volunteer and team in the GNOME community. There is so much that needs to be done to keep the GNOME Project running, it would not be possible without your help. Thank you, everyone!

by nekohayo at May 07, 2015 11:37 PM

April 27, 2015

Jan Schmidt: New gst-rpicamsrc features

(Jan Schmidt)

I’ve pushed some new changes to my Raspberry Pi camera GStreamer wrapper, at https://github.com/thaytan/gst-rpicamsrc/

These bring the GStreamer element up to date with new features added to raspivid since I first started the project, such as adding text annotations to the video, support for the 2nd camera on the compute module, intra-refresh and others.

You can now dynamically update any of the properties, where the firmware supports it. So you can implement digital zoom by adjusting the region-of-interest (roi) properties on the fly, or update the annotation, or change video effects and colour balance, for example.

The timestamps produced are now based on the internal STC of the Raspberry Pi, so the audio video sync is tighter. Although it was never terrible, it’s now more correct and slightly less jittery.

The one major feature I haven’t enabled as yet is stereoscopic handling. Stereoscopic capture requires 2 cameras attached to a Raspberry Pi Compute Module, so at the moment I have no way to test it works.

I’m also working on GStreamer stereoscopic handling in general (which is coming along). I look forward to releasing some of that code soon.

 

by thaytan at April 27, 2015 02:43 PM

April 05, 2015

GStreamer: Outreachy Internship Opportunity

(GStreamer)

GStreamer has secured a spot in the May-August round of Outreachy (formerly OPW). The program aims to help people from groups underrepresented in free and open source software get involved by offering focused internship opportunities with a number of free software organizations twice a year.

The current round of outreachy internship opportunities is open to women (cis and trans), trans men, genderqueer people, and all participants of the Ascend Project regardless of gender. The organization plans to expand the program to more participants from underrepresented backgrounds in the future. You can find more information about Outreachy here.

GStreamer application instructions and a list of mentored projects (you can always suggest your own) can be found at the GStreamer-Outreachy landing page. If you are interested in applying for an internship position with us, please take a look at the project ideas and get in touch by subscribing to our development list and sending an email about your selected project idea. Please include [Outreachy 2015] in the subject so we can easily spot it.

GStreamer's participation in this round of the program is being sponsored by Samsung's Open Source Group. The deadline for applications is April 10.

April 05, 2015 06:00 PM

April 02, 2015

Bastien Nocera: JdLL 2015

(Bastien Nocera) Presentation and conferencing

Last week-end, in the Salle des Rancy in Lyon, GNOME folks (Fred Peters, Mathieu Bridon and myself) set up our booth at the top of the stairs, the space graciously offered by Ubuntu-FR and Fedora being a tad bit small. The JdLL were starting.

We gave away a few GNOME 3.14 Live and install DVDs (more on that later), discussed much-loved features, and hated bugs, and how to report them. A very pleasant experience all-in-all.



On Sunday afternoon, I did a small presentation about GNOME's 15 years. Talking about the upheaval, dragging kernel drivers and OS components kicking and screaming to work as their APIs say they should, presenting GNOME 3.16 new features and teasing about upcoming GNOME 3.18 ones.

During the Q&A, we had a few folks more than interested in support for tablets and convertible devices (such as the Microsoft Surface, and Asus T100). Hopefully, we'll be able to make the OS support good enough for people to be able to use any Linux distribution on those.

Sideshow with the Events box

Due to scheduling errors on my part, we ended up with the "v1" events box for our booth. I made a few changes to the box before we used it:

  • Removed the 17" screen, and replaced it with a 21" widescreen one with speakers builtin. This is useful when we can't setup the projector because of the lack of walls.
  • Upgraded machine to 1GB of RAM, thanks to my hoarding of old parts.
  • Bought a French keyboard and removed the German one (with missing keys), cleaned up the UK one (which still uses IR wireless).
  • Threw away GNOME 3.0 CDs (but kept the sleeves that don't mention the minor version). You'll need to take a sharpie to the small print on the back of the sleeve if you don't fill it with an OpenSUSE CD (we used Fedora 21 DVDs during this event).
  • Triaged the batteries. Office managers, get this cheap tester!
  • The machine's Wi-Fi was unstable, causing hardlocks (please test again if you use a newer version of the kernel/distributions). We tried to get onto the conference network through the wireless router, and installed DD-WRT on it as the vendor firmware didn't allow that.
  • The Nokia N810 and N800 tablets will be going to kernel developers who are working on Nokia's old Linux devices and upstreaming drivers.
The events box is still in Lyon, until I receive some replacement hardware.

The machine is 7 years-old (nearly 8!) and only had 512MB of RAM, after the 1GB upgrade, the machine was usable, and many people were impressed by the speed of GNOME on a legacy machine like that (probably more so than a brand new one stuttering because of a driver bug, for example).

This makes you wonder what the use for "lightweight" desktop environments is, when a lot of the features are either punted to helpers that GNOME doesn't need or not implemented at all (old CPU and no 3D driver is pretty much the only use case for those).

I'll be putting a small SSD into the demo machine, to give it another speed boost. We'll also be needing a new padlock, after an emergency metal saw attack was necessary on Sunday morning. Five different folks tried to open the lock with the code read off my email, to no avail. Did we accidentally change the combination? We'll never know.

New project, ish

For demo machines, especially newly installed ones, you'll need some content to demo applications. This is my first attempt at uniting GNOME's demo content for release notes screenshots, with some additional content that's free to re-distribute. The repository will eventually move to gnome.org, obviously.

Thanks

The new keyboard and mouse, monitor, padlock, and SSD (and my time) were graciously sponsored by Red Hat.

by Bastien Nocera (noreply@blogger.com) at April 02, 2015 01:58 PM

March 29, 2015

Sebastian PölsterlMPI-based Nested Cross-Validation for scikit-learn

If you are working with machine learning, at some point you have to choose hyper-parameters for your model of choice and do cross-validation to estimate how well the model generalizes to unseen data. Usually, you want to avoid over-fitting on your data when selecting hyper-parameters, to get a less biased estimate of the model's true performance. Therefore, the data you do hyper-parameter search on has to be independent from the data you use to assess a model's performance. If you want to know what happens if you perform both tasks on the same data, have a look at the chapter The Wrong and Right Way to Do Cross-validation in the excellent book The Elements of Statistical Learning.

For instance, scikit-learn's Support Vector Regression class has at least two hyper-parameters, the penalty weight C and which kernel to use. Depending on the kernel, additional hyper-parameters are to be considered. Traditionally, people do an exhaustive grid search over a pre-defined set of values for each parameter and choose the setting that performed best. In fact, that is exactly what sklearn.grid_search.GridSearchCV does. In the end, what you get is the average score across the hold-out data with the best parameters. However, you don't want to report that number, because you essentially cheated: repeatedly using the hold-out data with different parameter settings to evaluate your model's performance is over-fitting too.

It is important that the numbers you report in the end were retrieved from data you only used once to measure performance. To avoid the pitfalls of GridSearchCV, you essentially have to nest GridSearchCV within another cross-validation such as StratifiedKFold. That way, the grid search only uses the training data of the outer cross-validation loop and results are reported on the test set, which was not used for the grid search.
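To make the nesting concrete, here is a minimal serial sketch of that idea with plain scikit-learn (no MPI), using the sklearn.grid_search and sklearn.cross_validation modules current at the time of writing; the dataset and parameter values are placeholders of my choosing, not taken from any particular experiment:

from sklearn.cross_validation import StratifiedKFold, cross_val_score
from sklearn.datasets import load_iris
from sklearn.grid_search import GridSearchCV
from sklearn.svm import SVC

data = load_iris()
X, y = data.data, data.target

# inner loop: the grid search only ever sees the training part
# of each outer fold
inner_grid = GridSearchCV(SVC(),
                          param_grid={'C': [0.1, 1, 10, 100],
                                      'gamma': [0.01, 0.1, 1]},
                          cv=3)

# outer loop: each test fold is used exactly once, to score the
# model that the inner grid search selected on the training folds
outer_cv = StratifiedKFold(y, n_folds=10)
scores = cross_val_score(inner_grid, X, y, cv=outer_cv)
print(scores.mean(), scores.std())

Because cross_val_score clones and re-fits the GridSearchCV estimator on every outer training set, the reported scores come from data that never influenced the hyper-parameter selection.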

Obviously, this can become computationally very demanding: e.g., if you do 10-fold cross-validation with 3-fold cross-validation for the grid search, you need to train a total of 30 models for each configuration of parameters. Luckily, inner and outer cross-validation can be easily parallelized. GridSearchCV can do this with the n_jobs parameter. However, for large-scale analysis you want to use a cluster to process data in parallel.

This is where the Message Passing Interface (MPI) comes in. It is a standardized protocol for parallel computing. In MPI terms, all processes are organized in groups, which are managed by a communicator, and each MPI process gets an ID, or rank. Usually, the process with rank zero is used as the master that distributes the work and collects the results.
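Just to illustrate those mechanics (this is a toy example, not the actual grid search implementation described below), a minimal mpi4py program that hands one work item to each rank could look like this:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # master: prepare one (dummy) work item per process
    work = list(range(comm.Get_size()))
else:
    work = None

# scatter hands each rank one item; gather collects results at rank 0
item = comm.scatter(work, root=0)
result = item ** 2
results = comm.gather(result, root=0)

if rank == 0:
    print(results)  # with 4 processes: [0, 1, 4, 9]

Started with, say, mpiexec -n 4 python demo.py (demo.py being whatever you saved it as), rank 0 distributes the items and collects the squared results.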

I implemented nested grid search for scikit-learn classes that distributes work using MPI. I'm not an expert in MPI, so there might be more efficient solutions, but it gets the job done for me. Using it is very similar to GridSearchCV:

from mpi4py import MPI
import numpy
from sklearn.datasets import load_boston
from sklearn.svm import SVR
from grid_search import NestedGridSearchCV

data = load_boston()
X = data['data']
y = data['target']

estimator = SVR(max_iter=1000, tol=1e-5)

# exhaustive grid: powers of two for C and gamma, and two kernels
param_grid = {'C': 2. ** numpy.arange(-5, 15, 2),
              'gamma': 2. ** numpy.arange(3, -15, -2),
              'kernel': ['poly', 'rbf']}

# 5-fold outer cross-validation, 3-fold inner grid search
nested_cv = NestedGridSearchCV(estimator, param_grid, 'mean_absolute_error',
                               cv=5, inner_cv=3)
nested_cv.fit(X, y)

# only the master process (rank 0) holds the collected results
if MPI.COMM_WORLD.Get_rank() == 0:
    for i, scores in enumerate(nested_cv.grid_scores_):
        scores.to_csv('grid-scores-%d.csv' % (i + 1), index=False)

    print(nested_cv.best_params_)

To run this example, execute mpiexec python example.py; MPI will distribute the work among all available processors. The final result is stored in the best_params_ attribute, which is a pandas data frame that contains the selected hyper-parameters, the average performance across all inner cross-validation folds (score (Validation)), and the performance on the outer testing fold (score (Test)).

fold   score (Validation)        C     gamma   kernel   score (Test)
   1            -7.252490      0.5  0.000122      rbf      -4.178257
   2            -5.662221    128.0  0.000122      rbf      -5.445915
   3            -5.582780     32.0  0.000122      rbf      -7.066123
   4            -6.306561      0.5  0.000122      rbf      -6.059503
   5            -6.174779    128.0  0.000122      rbf      -6.606218

Complete results of the grid search are stored in the grid_scores_ attribute, which is a list of data frames, one for each outer cross-validation fold.

The code is available at https://github.com/sebp/scikit-learn-mpi-grid-search; it depends on mpi4py, numpy, pandas, and scikit-learn.

Note that I have only tried this out with Python 3.4. For further details, please check out the inline documentation.

by sebp at March 29, 2015 12:43 PM

March 25, 2015

Bastien NoceraGNOME 3.16 is out!

(Bastien Nocera) Did you see?

It will obviously be in Fedora 22 Beta very shortly.

What happened since 3.14? Quite a bit, and a number of unfinished projects will hopefully come to fruition in the coming months.

Hardware support

After quite a bit of back and forth, automatic rotation for tablets will not be included directly in systemd/udev, but instead in a separate D-Bus daemon. The daemon has support for other sensor types, Ambient Light Sensors (ColorHug ALS amongst others) being the first ones. I hope we have compass support soon too.

Support for the Onda v975w's touchscreen and accelerometer is now upstream. Work is ongoing for the Wi-Fi driver.

I've started some work on supporting the much hated Adaptive keyboard on the X1 Carbon 2nd generation.

Technical debt

In the last cycle, I've worked on triaging gnome-screensaver, gnome-shell and gdk-pixbuf bugs.

The first got merged into the second, the second got plenty of outdated bugs closed, and priorities re-evaluated as a result.

I wrangled old patches and cleaned up gdk-pixbuf. We still have architectural problems in the library for huge images, but at least we're at a point where we know what the problems are, instead of them being buried in Bugzilla.

Foundation building

A couple of projects got started that haven't reached maturity yet. I'm pretty happy that we're able to use gnome-books (part of gnome-documents) today to read comic books. ePub support is coming!



Grilo saw plenty of activity. The oft-requested "properties" page in Totem is closer than ever, and so is series grouping.

In December, Allan and I met with the ABRT team, and we've landed some changes we discussed there, including a simple "Report bugs" toggle in the Privacy settings, with a link to the OS' privacy policy. The gnome-abrt application had a facelift, but we got somewhat stuck on technical problems, which should get solved in the next cycle. The notifications were also streamlined and simplified.



I'm a fan

Of the new overlay scrollbars, and the new gnome-shell notification handling. And I'm cheering on the new app in 3.16, GNOME Calendar.

There's plenty more new and interesting stuff in the release, but I would just be duplicating much of the GNOME 3.16 release notes.

by Bastien Nocera (noreply@blogger.com) at March 25, 2015 04:23 PM

March 20, 2015

Bastien Nocera"GNOME à 15 ans" aux JdLL de Lyon

(Bastien Nocera)


Next weekend, I will be giving a small presentation on GNOME's fifteen years at the JdLL.

If the delivery gods are merciful, GNOME should also have a presence in the associations' village.

by Bastien Nocera (noreply@blogger.com) at March 20, 2015 05:25 PM

February 26, 2015

Bastien NoceraAnother fake flash story

(Bastien Nocera) I recently purchased a 64GB mini SD card to slot into my laptop and/or tablet, keeping media separate from my home directory, which is pretty full of kernel sources.

This Samsung card looked fast enough and, at 25€ including shipping, seemed good enough value.


Hmm, no mention of the SD card size?

The packaging looked rather bare, with no mention of the card's size. I opened up the packaging and looked over the card.

Made in Taiwan?

What made it weirder is that it says "made in Taiwan", rather than "Made in Korea" or "Made in China/PRC". Samsung apparently makes some cards in Taiwan, I've learnt, but I didn't know that before getting suspicious.

After modifying gnome-multiwriter's fake flash checker, I tested the card, and sure enough, it's an 8GB card, with its firmware modified to show up as 67GB (67GB!). The device (identified through the serial number) is apparently well-known in swindler realms.

Buyer beware, do not buy from "carte sd" on Amazon.fr, and always check for fake flash memory using F3 or h2testw, until udisks gets support for this.

Amazon were prompt in reimbursing me, but the Comité national anti-contrefaçon and Samsung were completely uninterested in pursuing this further.

In short:

  • Test the storage hardware you receive
  • Don't buy hardware from Damien Racaud from Chaumont, the person behind the "carte sd" seller account

by Bastien Nocera (noreply@blogger.com) at February 26, 2015 10:57 AM

February 23, 2015

Christian SchallerReliable BIOS updates in Fedora

(Christian Schaller)

Some years ago I bought myself a new laptop, deleted the Windows partition and installed Fedora on the system, only to later realize that the system had a bug that required a BIOS update to fix, and that the only tool for doing such updates was available for Windows only. And while some tools and methods have been available from a subset of vendors, BIOS updates on Linux have always been somewhat of a hit-and-miss situation. Well, luckily it seems that we will finally get a proper solution to this problem.
Peter Jones, who is Red Hat's representative to the UEFI working group and who is working on making sure we have everything needed to support this on Linux, approached me some time ago to let me know of the latest incoming update to the UEFI standard, which provides a mechanism for doing BIOS updates. This means that any system that supports UEFI 2.5 will, in theory, be one where we can initiate the BIOS update from Linux. Systems supporting this version of the UEFI specs are expected to become available over the course of this year, and if you are lucky your hardware vendor might even provide a BIOS update bringing UEFI 2.5 support to your existing hardware, although you would of course need to do that one BIOS update the old way.

So with Peter's help we got hold of some prototype hardware from our friends at Intel which already has UEFI 2.5 support. This hardware is currently in the hands of Richard Hughes. Richard will be working on incorporating the use of this functionality into GNOME Software, so that you can do any needed BIOS updates through GNOME Software along with all your other software update needs.

Peter and Richard will, as part of this, be working to define a specification/guideline for hardware vendors on how they can make their BIOS updates available in a manner we can consume and automatically check for updates. We will try to align ourselves with the requirements from Microsoft in this area, to allow vendors to either use the exact same package for both Windows and Linux or at least need only small changes to it. We can hopefully get this specification up on freedesktop.org for wider consumption once it's done.

I am also already speaking with a couple of hardware vendors to see if we can pilot this functionality with them, to both encourage them to support UEFI 2.5 as quickly as possible and also work with them to figure out the finer details of how to make the updates available in an easily consumable fashion.

Our hope here is that eventually you can buy almost any hardware and know that if you ever need a BIOS update, you can just fire up Software and it will tell you what BIOS updates, if any, are available for your hardware, and then let you download and install them. For people running Fedora servers, we have had some initial discussions about doing BIOS updates through Cockpit, in addition of course to the command line tools that Peter is writing for this.

I mentioned in an earlier blog post that one of our goals with the Fedora Workstation is to drain the swamp in terms of fixing the real issues that make using a Linux desktop challenging; well, this is another piece of that puzzle, and I am really glad we had Peter working with the UEFI standards group to ensure the final specification was useful for Linux users too.

Anyway, as soon as I have some data on concrete hardware that will support this, I will make sure to let you know.

by uraeus at February 23, 2015 04:17 PM

February 20, 2015

Jean-François Fortin Tam2014 in review

I haven’t blogged much in recent months, so this will be a pretty short and boring post; it’s meant as a pulse check anyway. All in all, 2014 has been a pretty intense year:

  • Started my own company (providing branding & management consulting services)
    • Got various branding, design, marketing and research contracts. Also, management consulting assignments.
    • Had to solve a multi-month stalemate with a Research Ethics committee. Twisting Freud’s words a little: I can heartily recommend the Ethics Committee to anyone.
    • Some clients needed me to do peripheral “IT work”, so I provided sizable infrastructure & data management improvements, saved mission-critical systems from certain death, rescued kittens, etc.
  • Joined the board of directors of the GNOME Foundation to help with a non-trivial set of tasks (fighting off Groupon trying to steamroll the GNOME trademark, biz dev, ED search, and lots of legal/financial/issue handling/tough decisions)
  • Attended GUADEC 2014
  • Went to defend a (simple) case in court (not GNOME :) and won. A big relief after months of angst.
  • Attended the GSoC Mentors Summit 10-years ultimate reunion limited collectors edition, to represent GNOME and Pitivi.
  • Moved to downtown Montréal in the middle of December.
    • Great for business!
    • Not great for the sleep cycle (living in a construction site means you get hammering and drilling starting at 6-7 AM). Also, got a bit of a shining moment when the fire alarm went off at -27°C and tons of water from the 8th floor started pouring down and out from the lifts on the floors below. Reminds me of what happened to Bronzemurder (it can be said that this is a metaphor for software development in general).
    • Had to split off a bunch of public utilities and personal infrastructure (including my offsite backup system and the ATSC+VoIP setup I made for 2-3 houses in the family – that blog post is in French but it seems to have gone viral, for some reason).
    • I can now happily host GNOME (and other FLOSS) contributors and friends when they’re in town!
  • Fixed the economy (nope, I’m just kidding on that one)

My accountants/tax folks are going to hate me.

With all that going on, I had to withdraw somewhat from hands-on involvement in Pitivi. Nonetheless, I’m hoping to collect information and write a blog post update soonish.

Here’s hoping for a great 2015! I will surely see y’all at LGM and GUADEC 2015.

by nekohayo at February 20, 2015 05:39 PM

February 18, 2015

Arun RaghavanReviewing moved files with git

This might be a well-known trick already, but just in case it’s not…

Reviewing a patch can be a bit painful when a file has been changed and moved or renamed in one go (and there can be perfectly valid reasons for doing this). A nice thing about git is that you can reference files in an arbitrary tree while using git diff, so reviewing such changes can become easier if you do something like this:

$ git am 0001-the-thing-I-need-to-review.patch
$ git diff HEAD^:old/path/to/file.c new/path/to/file.c

This just references file.c at its old path, which is available in the commit before HEAD, and compares it to the file at the new path from the patch you just applied.

Of course, you can also use this to diff a file at some arbitrary point in the past, or in some arbitrary branch, with the same file at the current HEAD or any other point.
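For instance, with a hypothetical tag v1.0 and file src/main.c, comparing the version from that release with the one at the current HEAD looks like this:

$ git diff v1.0:src/main.c HEAD:src/main.c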

Hopefully this is helpful to someone out there!

Update: As Alex Elsayed points out in the comments, git diff -M/-C can be used to similar effect. The above example, for example, could be written as:

$ git am 0001-the-thing-I-need-to-review.patch
$ git show -C

by Arun at February 18, 2015 08:29 AM

January 26, 2015

Christian SchallerThanks for all the applications

(Christian Schaller)

Jobs at Red Hat
So I got a LOT of responses to my blog post about the open positions we have here at Red Hat working on Fedora and the Desktop. In fact, I got so many that it will probably take a bit of time before we can work through them all, so you might have to wait a little before getting a response from us. Anyway, thank you to everyone who sent me their CV; much appreciated, and I'm looking forward to working with those of you we end up hiring!

Builder campaign closes in 13 hours
I want to make one last pitch for everyone to contribute to the Builder crowdfunding campaign. It has just passed 47 000 USD as I write this, which means we just need another 3000 USD to reach the graphical debugger stretch goal. Don’t miss out on this opportunity to help this exciting open source project!

by uraeus at January 26, 2015 06:55 PM

January 21, 2015

Christian SchallerWant to join our innovative development team doing cool open source software?

(Christian Schaller)

So Red Hat are currently looking to hire into the various teams building and supporting efforts such as the Fedora Workstation, the Red Hat Enterprise Linux Workstation, and of course Fedora and RHEL in general. We are looking at hiring around 6-7 people to help move these and other Red Hat efforts forward. We are open to candidates from any country where Red Hat has a presence, but for a subset of the positions, relocating to our main engineering offices in Brno, Czech Republic or Westford, Massachusetts, USA will either be a requirement, or candidates interested in doing so will be given preference. We are looking for a mix of experienced and junior candidates, so regardless of whether you are fresh out of school or have been around for a while, this might be for you.

So instead of providing a list of job descriptions, what I want to do here is list various skills and backgrounds we want, and then we will adjust the exact roles of the candidates we end up hiring depending on the mix of skills we find. So this might be for you if you are a software developer and have one or more of these skills or backgrounds:

* Able to program in C
* Able to program in Ruby
* Able to program in Javascript
* Able to program in Assembly
* Able to program in Python
* Experience with Linux Kernel development
* Experience with GTK+
* Experience with Wayland
* Experience with x.org
* Experience with developing for PPC architecture
* Experience with compiler optimisations
* Experience with llvm-pipe
* Experience with SPICE
* Experience with developing software like Virtualbox, VNC, RDP or similar
* Experience with building web services
* Experience with OpenGL development
* Experience with release engineering
* Experience with Project Atomic
* Experience with graphics driver enablement
* Experience with other PC hardware enablement
* Experience with enterprise software management tools like Satellite or ManageIQ
* Experience with accessibility software
* Experience with RPM packaging
* Experience with Fedora
* Experience with Red Hat Enterprise Linux
* Experience with GNOME

It should be clear from the list above that we are not just looking for people with a background in desktop development this time around; two of the positions, for instance, will mostly be dealing with Linux kernel development. We are looking for people here who can help us make sure the latest and greatest laptops on the market work great with Fedora and RHEL, be that on the graphics side or in terms of general hardware enablement. These jobs will put you right in the middle of the action in terms of defining the future of the 3 Fedora variants, especially the Workstation; defining the future of Red Hat's Enterprise Linux Workstation products; and pushing the Linux platform in general forward.

If you are interested in talking with us about whether we can find a place for you at Red Hat as part of this hiring round, please send me your CV and some words about yourself, and I will make sure to put you in contact with our recruiters. And if you are unsure about what kind of things we work on here, I suggest taking a look at my latest blog post about our Fedora Workstation 22 efforts to see a small sample of some of the things we are currently working on.

You can reach me at cschalle(at)redhat(dot)com.

by uraeus at January 21, 2015 06:37 PM

January 19, 2015

Christian SchallerPlanning for Fedora Workstation 22

(Christian Schaller)

So Fedora Workstation 21 is done and out, and I am extremely pleased to see the positive reception and great reviews. But we are not resting on our laurels here and are already busy planning for the Fedora Workstation 22 release. As many of you might know, Fedora Workstation 22 is going to come up relatively fast, so we only have about 6 more weeks of development on it before the feature freezes start to kick in. Luckily we have a relatively long list of items that we started working on during the Fedora Workstation 21 cycle that are nearing completion and thus should make the next release. We are of course also working on bigger long-term developments, of which you should maybe see the first outline in Fedora 22, but not the final version. I thought it would be nice to summarize some of the bigger items we expect to land and link to the relevant blogs and announcements for each one.

Wayland
So first out is an update on our work on Wayland, as I know that is something a lot of people are curious about. We are continuing to make great strides forward and recently hired Jonas Ådahl to the team, who many might recognize as an active Wayland and libinput developer. He will be spearheading our overall Wayland effort as we approach the finish line. All in all things are looking good; we got a lot of the basic plumbing in place for Fedora Workstation 21, so most work these days is focused on polish and cleanups. One of the bigger items is the migration to libinput, a library we decided to create to be able to share input device handling between X and Wayland and thus make the transition smoother and lower our workload during the transition period. libinput itself is getting very close to feature-complete, and they are even working on some new features for it now, taking it beyond what was in X. Peter Hutterer recently released version 0.8, and we expect to have 1.0 out and in use for both X.org and Wayland in time for Fedora Workstation 22.
In parallel we are also working on porting the needed bits in GNOME over to use libinput and remove any lingering X dependencies, like the GNOME Control Center which should also be ready for Fedora Workstation 22.

Another major change related to Wayland in Fedora Workstation 22 is that we will switch the default backends in GTK+ and SDL over to using Wayland. Currently in Fedora Workstation 21 applications are actually running on top of XWayland, but in Fedora Workstation 22 at least GTK+ and SDL applications will default to Wayland when run under the Wayland session.
The Wayland SDL backend has been around for quite a while, but Jonas Ådahl plans on spending some time smoothing out the last rough edges. In fact, for SDL applications we hope we can actually provide a noticeable performance improvement over X in some cases (not because OpenGL will be faster of course, but because we might be able to be smarter about handling different resolutions between desktop and game), but we have to wait and see if that pans out or if we have to settle for performance parity with X. We are also looking at getting the login session to use Wayland by default. All in all this should take us a huge step forward towards making using Wayland feel real.

As it looks now, Wayland should be quite close to what you would define as feature-complete for Fedora Workstation 22, but one thing that is going to take longer to reach maturity is support for binary drivers, especially the NVidia ones. This of course is a task that mostly falls on NVidia for natural reasons, but we are trying to help out, with Adam Jackson working on making sure Mesa works with their proposed EGLStreams and OpenGL dispatcher proposals. So during the course of the coming year we will likely have a situation where you will be able to have a production-ready Wayland session if you are running any of the open source drivers, but running Wayland on top of the NVidia binary driver is most likely to only really be possible towards the end of the year. That said, this is a guesstimate from our side as to how quickly the heavy lifting will happen, and how quickly it will be released by NVidia for public consumption is of course all relying on internal plans and resources at NVidia and not something we control.

Battery life
One thing we know, being developers ourselves and from speaking with developers about their operating system of choice, is that battery life is among the top 5 reasons for the choices people make about their hardware and software. Due to this, Owen Taylor has been investigating for some time now what solutions exist today, what other operating systems are doing, and what approaches we can take to improve battery life, because a common complaint we hear from a lot of people is that they don't feel they get great battery life when running Linux on their laptops currently. Some people are able to solve this using powertop, but we feel there is a lot of room for automatically giving our users better battery life, beyond manual tweaking with powertop.

Improving battery life is a complex issue in many ways, including figuring out how to measure battery life. I guess everyone has seen laptops advertised with X number of hours of battery life, but it is our impression that those numbers tend to be quite bogus even when running the bundled operating system. In some testing we did, we concluded that the worst offenders' numbers could only be true if you left your laptop idle in the corner with the screen blacked out. So gnome-battery-bench will help us achieve a couple of things: it should generate comparable battery lifetime numbers, which should both help our users choose the hardware that gives the best battery life under Linux, and let us as developers keep track of how changes affect battery life so that we can catch regressions, for instance. It also lets us verify the effect various kernel tunables or ambient light detection schemes have on battery life in a better way than we can with existing tools. We also hope to use this to work with vendors to improve the battery life of their hardware when running Fedora or RHEL. Anyway, I suggest reading Owen Taylor's blog for some more details of his work on improving battery life.

One important effort we want to undertake here, which might not all make it for Fedora Workstation 22, is taking advantage of the ambient light detectors in many modern laptops. One of the biggest battery drains in your system is the screen brightness setting, and by using the ambient light detection hardware we hope to be able to put in place some intelligent behavior for different situations. This is a hard problem, though, and it was attempted in GNOME before, but the end result back then was that people felt they were "fighting" GNOME over their laptop brightness settings, so in the end it was dropped completely; we need to be careful not to repeat that outcome.

Application bundles
Another major effort that is not going to be ready for Fedora Workstation 22, but of which we might have a preview, is application bundles. Matthias Clasen recently sent out an email to the Fedora Desktop mailing list outlining the state of the application bundles work. This is a continuation of the Sandboxed Applications in GNOME proposal from Lennart Poettering. The effort is being spearheaded by Alexander Larsson, and the goal is to build the infrastructure needed to do sandboxed desktop applications efficiently. There is a wiki page up already detailing Sandboxed Apps, and there are some test applications already available. For instance, you can grab an application bundle of Builder, the cool new IDE project from Christian Hergert. (Hint: make sure to support his Builder crowdfunding effort if you have not already.) Once this effort matures it will revolutionize how desktop applications are built and distributed. It should make life easier for application developers, as these bundled applications are designed to be distribution-agnostic, and the sandboxing aspect should help improve security. The transition should also put application developers directly in charge of the update cycle of their applications, enabling them to better support their users.

3rd Party Application handling
So the ever resourceful Richard Hughes has been working on adding support for handling 3rd party applications in GNOME Software. He outlined this effort in a recent blog post about GNOME Software.

While the end goal here is to offer 3rd party application bundles as described in the section above, the feature also has a lot of near-term advantages. Over the course of the last years we have moved from a model where you use your browser to search for software online, to users expecting to find all software available for a system through its app store. With this 3rd party application support available in GNOME Software, we can start working to make that expectation a reality in Fedora too. We took great strides forward in Fedora Workstation 21 by having metadata available for most of the standard applications packaged in Fedora, but there are also a lot of popular applications and other things out there that people tend to install and use which we, for various reasons, are not interested in or able to ship in Fedora. The reasons for this range from licensing issues, to packaging issues, to simply resource issues. With Richard's work we will be able to make such software discoverable in Fedora, yet make a clear distinction between the software we have vetted and checked and the software you get from 3rd parties.

How to deal with 3rd party software has been a long and ongoing discussion in the Fedora community, and there are a lot of practical and principal details to deal with, but hopefully with this infrastructure in place it will be a lot easier to navigate those issues as people have something concrete to relate to instead of just abstract ideas and concepts.

One challenge we have to figure out, for instance, is that on one side we don't want 3rd party software offered in Fedora to be some form of endorsement or sign of being somehow vetted by the Fedora Project on an ongoing basis, yet on the other side the list will most likely need to be based on some form of editorial process, for instance to protect both Red Hat and Fedora from potential legal threats. I plan on sending an initial proposal to the Workstation Working Group soon for how this can work, and once we have hashed out the details there we will need to bring the Working Group's proposal to the wider Fedora community, as this also affects our Cloud and Server offerings.

File Manager
A lot of people these days use Google Drive, be that personally or because their company has a corporate Google Apps account. So to make life easier for our users, we are making sure that Nautilus is able to treat your Google Drive as just another drive on the system, letting you drag and drop files off or onto it. We also dedicated some effort to cleaning up and modernizing the file manager in general, with Carlos Soriano blogging about his efforts there just before Christmas. All in all I think these are improvements that should improve the life of our developer and sysadmin target audience, but of course they are also very useful improvements for the general Linux-using public.

Qt Theming
One of the things we had to postpone for Fedora Workstation 21 was the Adwaita theme for Qt applications. We are expecting it to hit Fedora Workstation 22, though, and you can get the theme to install and test from Martin Briza's Copr repository. The end goal here is that whether you run a pure Qt application like Skype or Scribus, or a KDE application like Krita or Amarok, you get an Adwaita look and feel to the application. Of course desktop integration isn't just about a theme, there is a reason the GNOME HIG exists, but this should be an improvement over the current situation. The theme currently targets Qt4, but Qt5 is of course also on the roadmap for a later release.

Further terminal improvements
As I mentioned in an earlier blog entry about Fedora Workstation, we realize that the terminal is the most important application for many developers and sysadmins. So we are also hoping to land some more of the terminal improvements we have been working on since Fedora Workstation 21. Notifications for long-running jobs are maybe the thing I know a lot of developers are excited about getting their hands on. They will let you, for instance, start a long compile in a terminal and know that you will be notified once it has completed, instead of having to manually check in from time to time.

More development tools
In my opinion the best IDE for Python development currently is PyCharm. And not only is it the best from a functionality standpoint, they also decided to release an open source version last year. That said, we have been struggling a bit with the packaging of PyCharm, and interestingly enough it is one of those applications I think will benefit greatly from the application bundle work we are doing, but in the meantime we at least do have a Copr of PyCharm available. It is still an open question, but we might make this Copr one of our test cases for the 3rd party application support in GNOME Software mentioned earlier. Anyway, if you are a Python developer I strongly recommend taking a look at it. Personally I looked at various Python IDEs over the years, but always ended up just going back to Gedit; PyCharm was the first time I felt that an application actually offered me useful functionality beyond what a text editor like Gedit does. In recent versions they also deal well with the introspection-based Python bindings for GTK3, which was a great boon for me.

We are also looking at improving the development story around using Vagrant for Fedora and RHEL development; more details on that at a later point.

ABRT improvements
The ABRT tool has become a crucial development tool for us over the last couple of years. The Fedora Retrace server is one of our main tools for prioritizing which bugs to look at first, and a crucial part of our goal of making Fedora a solid distribution. That said, especially in its early days, ABRT has had its share of detractors and people being a bit frustrated with it, so Bastien Nocera and Allan Day have been working with the ABRT team to both integrate it further with the desktop, for instance ensuring that it follows your desktop-wide privacy settings, and to make sure that the user experience of submitting a retrace report is as smooth, intuitive, and unobtrusive as possible; for instance, you don't want ABRT to choke your system while trying to generate a stack trace for us. The Fedora Workstation Tasklist contains links to Bugzilla and GitHub so you can track their progress.

Still a lot to do!
So making our vision for the Fedora Workstation come true of course takes a lot of effort from a lot of people, and we are really lucky to be part of such a great community where so much cool stuff is happening all the time. The Builder effort from Christian Hergert that I talked about earlier is one of them, but there are so many other things happening too. So if you want to get involved, take a look at our tasklist and see if there is anything that interests you, or for that matter if there is something you think should be worked on but isn't on the list yet. Then come join us either in #fedora-workstation on the freenode IRC network or on the fedora-desktop mailing list.

by uraeus at January 19, 2015 04:05 PM

January 05, 2015

Christian SchallerTime has come to support some important projects!

(Christian Schaller)

If you are reading this blog entry, it is very likely that you are a direct beneficiary of open source and free software. Like myself, you have probably been able to get hold of, use, and tinker with software that in the old world of closed source dominance would altogether have cost you maybe ten thousand dollars or more. So with the spirit of the Yuletide season fresh in mind, it is time to open your wallet and support some important open source fundraising campaigns.

The first one is Builder, an effort by the unstoppable Christian Hergert to create a truly powerful and modern IDE for GNOME. Christian has already made huge strides forward with his project since quitting his day job to start it, and helping fund him across the finish line would be greatly beneficial to us all. I also think it would make a wonderful addition to the Fedora Workstation effort, so this is an easy way for you to help us move that effort forward too. So head over to the fundraiser webpage, or start by viewing the great fundraiser video below:

The second effort I want to highlight is the still ongoing fundraiser for the PiTiVi video editor. Since they started that effort they have raised 22190 USD of the 35 000 USD they need to get PiTiVi to a level where they are confident to make a 1.0 release. And I think we all agree that having a top-notch video editor available, especially one that uses GStreamer and thus helps improve our general multimedia story, is very important. This effort also has a nice introduction video if you want to know more:

I have personally contributed money to both these efforts and I hope you will too! Both projects are crucial for the long-term health of the ecosystem, and both are run by credible teams with the right skills to succeed. So for those of us out of school and in paying jobs, setting aside for example 100 USD to help these two efforts should be an easy choice; the value we will get back easily dwarfs that amount.

by uraeus at January 05, 2015 03:34 PM

January 03, 2015

Thomas Vander Stichele2 months in

(Thomas Vander Stichele)

Today is my two-month monthiversary at my new job. I haven’t had time so far to sit back, reflect, and let people know, but now, while packing boxes for our upcoming move downtown, I welcome the distraction.

I dove into the black hole. I joined the borg collective. I’m now working for the little search engine that could.

I sure had my reservations while contemplating this choice. This is the first job I’ve had that I had to interview for – and quite a bit, I might add (though I have to admit that curiosity about the interviewing process is what made me go for the interviews in the first place – I wasn’t even considering a different job at that time). My first job, a four month high school math teaching stint right after I graduated, was suggested to me by an ex-girlfriend, and I was immediately accepted after talking to the headmaster (that job is still a fond memory for many reasons). For my first real job, I informally chatted over dinner with one of the four founders, and then I started working for them without knowing if they were going to pay me. They ended up doing so by the end of the month, and that was that. The next job was offered to me over IRC, and from that Fluendo and Flumotion were born. None of these were through a standard job interview, and when I interviewed at Google I had much more experience on the other side of the interviewing table.

From a bunch of small startups to a company the scale of Google is a big step up, so that was my main reservation. Am I going to be able to adapt to a big company’s way of working? On the other hand, I reasoned, I don’t really know what it’s like to work for a big company, and clearly Google is one of the best of those to work for. I’d rather try out working for a big company while I’m still considered relatively young job-market-wise, so I can rack up some experience with both sides of this coin during my professionally mobile years.

But I’m not going to lie either – seeing that giant curious machine from the inside, learning how they do things, being allowed to pierce the veil and peek behind the curtain – there was a curiosity here that was waiting to be satisfied. Does a company like this have all the big problems solved already? How do they handle things I’ve had to learn on the fly without anyone else to learn from? I was hiring and leading a small group of engineers – how does a company that big handle that on an industrial scale? How does a search query really work? How many machines are involved?

And Google is delivering in spades on that front. From the very first day, there’s an openness and a sharing of information that I did not expect. (This also explains why I’ve always felt that people who joined Google basically disappeared into a black hole – in return for this openness, you are encouraged to swear yourself to secrecy towards the outside world. I’m surprised that that can work as an approach, but it seems to.) By day two we did our first commit (obviously nothing that goes to production, but still). In my first week I found the way to the elusive (to me at least) rooftop terrace by searching through internal documentation. The view was totally worth it.

So far, in my first two months, I’ve only had good surprises. I think that’s normal – even the noogler training itself tells you about the happiness curve, and how positive and excited you feel the first few months. It was easy to make fun of some of the perks from an outside perspective, but what you couldn’t tell from that outside perspective is how these perks are just manifestations of common engineering sense on a company level. You get excellent free lunches so that you go eat with your team mates or run into colleagues and discuss things, without losing brain power on deciding where to go eat (I remember the spreadsheet we had in Barcelona for a while for bike lunch once a week) or losing too much time doing so (in Barcelona, all of the options in the office building were totally shit. If you cared about food it was not uncommon to be out of the office area for ninety minutes or more). You get snacks and drinks so that you know that’s taken care of for you and you don’t have to worry about getting any and leave your workplace for them. There are hammocks and nap pods so you can take a nap and be refreshed in the afternoon. You get massage points for massages because a healthy body makes for a healthy mind. You get a health plan where the good options get subsidized because Google takes that same data-driven approach to their HR approach and figured out how much they save by not having sick employees. None of these perks are altruistic as such, but there is also no pretense of them being so. They are just good business sense – keep your employees healthy, productive, focused on their work, and provide the best possible environment to do their best work in. I don’t think I will ever make fun of free food perks again given that the food is this good, and possibly the favorite part of my day is the smoothie I pick up from the cafe on the way in every morning. It’s silly, it’s small, and they probably only do it so that I get enough vitamins to not get the flu in winter and miss work, but it works wonders on me and my morning mood.

I think the bottom line here is that you get treated as a responsible adult by default in this company. I remember silly discussions we had at Flumotion about developer productivity. Of course, that was just a breakdown of a conversation that inevitably stooped to the level of measuring hours worked as a measurement of developer productivity, simply because that’s the end point of any conversation on that spirals out of control. Counting hours worked was the only thing that both sides of that conversation understood as a concept, and paying for hours worked was the only thing that both sides agreed on as a basic rule. But I still considered it a major personal fault to have let the conversation back then get to that point; it was simply too late by then to steer it back in the right direction. At Google? There is no discussion about hours worked, work schedule, expected productivity in terms of hours, or any of that. People get treated like responsible adults, are involved in their short-, mid- and long-term planning, feel responsible for their objectives, and allocate their time accordingly. I’ve come in really early and I’ve come in late (by some personal definition of “on time” that, ever since my second job 15 years ago, I was lucky enough to define as ’10 AM’). I’ve left early on some days and stayed late on more days. I’ve seen people go home early, and I’ve seen people stay late on a Friday night so they could launch a benchmark that was going to run all weekend so there’d be useful data on Monday. I asked my manager one time if I should let him know if I get in later because of a doctor’s visit, and he told me he didn’t need to know, but it helps if I put it on the calendar in case people wanted to have a meeting with me at that hour.

And you know what? It works. Getting this amount of respect by default, and seeing a standard to live up to set all around you – it just makes me want to work even harder to be worthy of that respect. I never had any trouble motivating myself to do work, but now I feel an additional external motivation, one this company has managed to create and maintain over the fifteen+ years they’ve been in business. I think that’s an amazing achievement.

So far, so good, fingers crossed, touch wood and all that. It’s quite a change from what came before, and it’s going to be quite the ride. But I’m ready for it.

(On a side note – the only time my habit of wearing two different shoes was ever considered a no-no for a job was for my previous job – the dysfunctional one where they still owe me money, among other stunts they pulled. I think I can now empirically elevate my shoe habit to a litmus test for a decent job, and I should have listened to my gut on the last one. Live and learn!)

by Thomas at January 03, 2015 10:54 PM

December 22, 2014

Sebastian DrögeImproved GStreamer support for Blackmagic Decklink cards

(Sebastian Dröge)

In the last weeks I started to work on improving the GStreamer support for the Blackmagic Decklink cards. Decklink is Blackmagic’s product line for HDMI, SDI, etc. capture and playback cards, with drivers being available for Linux, Windows and Mac OS X. And on all platforms the same API is provided to access the devices.

GStreamer has had support for Decklink since some time in 2011; the initial plugin was written by David Schleef and seemed to work well enough for quite some time. But over the years people used GStreamer in more dynamic and complex pipelines, and we got a lot of reports recently about the plugin not working properly in such situations. Mostly there were problems with time and synchronization handling, but there were also other minor issues caused by the elements not using the source/sink base classes (GstBaseSrc and GstBaseSink). The latter was not easily possible because one device provides audio and video input/output at the same time, so the elements would intuitively need two source or sink pads. And as a side effect this also made it very difficult to use the decklink elements in a standard playback pipeline with playbin.

The rewritten plugin now has separate elements for the audio and video parts of a device, giving 4 different elements in the end (decklinkaudiosrc, decklinkvideosrc, decklinkaudiosink and decklinkvideosink). These are now based on the corresponding base classes, work inside playbin (including pausing and seeking) and also handle synchronization properly. The clock of the hardware is now exposed to the pipeline as a GstClock, and even if the pipeline chooses a different clock, the elements will slave their internal clock to the master clock of the pipeline and make sure that synchronization is kept even if the pipeline clock and the Decklink hardware clock are drifting apart. Thanks to GStreamer’s extensive synchronization model and all the functionality that already exists in the GstClock class, this was much easier than it sounds.

The only downside of this approach is that it is now necessary to always have the video and audio element of one device inside the same pipeline, and also to keep their states managed by the pipeline, or at least make sure that they both go to PLAYING state together. Also, the audio elements won’t work without a corresponding video element, which is a limitation of the hardware. Video-only use works without problems, though.
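As a rough illustration of that requirement, here is a minimal Python sketch that puts the video and audio elements of one device into a single pipeline; the exact launch line and the device-number property are assumptions on my part for illustration, not a tested setup:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst

Gst.init(None)

# the audio and video elements of the same device live in one
# pipeline, so the pipeline manages their states and clock together
# (device-number=0 assumed to select the first card)
pipeline = Gst.parse_launch(
    'decklinkvideosrc device-number=0 ! videoconvert ! autovideosink '
    'decklinkaudiosrc device-number=0 ! audioconvert ! autoaudiosink')

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()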

All the code is available from gst-plugins-bad GIT master.

And now?

So what’s next? Testing, a lot of testing! Especially with different hardware, on different platforms and in all kinds of situations. If you have one of these cards, please test the latest code that is available in gst-plugins-bad GIT master. Please report any bugs you notice in Bugzilla, ideally with information about your hardware and platform and including a GStreamer debug log with GST_DEBUG=decklink*:6.

Apart from that many features are still missing, for example all the different configurations that are available via an API should be exposed on the elements, or support for more audio channels and more video modes. If you need any of these features and want to help, feel free to write patches and provide them in Bugzilla.

Any help would be highly appreciated!

by slomo at December 22, 2014 01:39 PM

December 20, 2014

GStreamerGStreamer Core, Plugins and RTSP server 1.4.5 stable release

(GStreamer)

The GStreamer team is pleased to announce a bugfix release of the stable 1.4 release series. The 1.4 series adds new features on top of the 1.2 series and is part of the API- and ABI-stable 1.x release series of the GStreamer multimedia framework. The 1.4.x bugfix releases only contain important bugfixes compared to 1.4.0.

Binaries for Android, iOS, Mac OS X and Windows are provided by the GStreamer project for this release. The Android binaries are now built with the r10c NDK and as such binary compatible again with all NDK and Android releases. Additionally now binaries for Android ARMv7 and Android X86 are provided. This binary release features the first 1.4 releases of GNonLin and the GStreamer Editing Services.

The 1.x series is a stable series targeted at end users. It is not API or ABI compatible with the 0.10.x series. It can, however, be installed in parallel with the 0.10.x series and will not affect an existing 0.10.x installation.

The stable 1.4.x release series is API and ABI compatible with 1.0.x and any other 1.x release series in the future. Compared to 1.0.x it contains some new features and more intrusive changes that were considered too risky as a bugfix.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, or gst-rtsp-server, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, or gst-libav, or gst-rtsp-server.

Check the release announcement mail for details and the release notes above for a list of changes.

Also available are binaries for Android, iOS, Mac OS X and Windows.

December 20, 2014 04:00 PM

December 17, 2014

GStreamerOrc 0.4.23 bug-fix release

(GStreamer)

The GStreamer team announces another maintenance bug-fix release of liborc, the Optimized Inner Loop Runtime Compiler. Main changes since the previous release:

  • various improvements to the NEON backend to bring it closer to the SSE backend
  • add support for setting a custom backup function
  • preserve NEON/VFP registers across subroutines
  • fix 64 bit parameter loading on big-endian systems
  • improved implementations for various opcodes
  • various improvements and fixes to constants handling
  • avoid some undefined operations on signed integers
  • prefer user-specific directories over global ones for intermediate files, to prevent name collisions

Direct tarball download: orc-0.4.23.

December 17, 2014 10:30 AM