January 10, 2020

Guillaume Desmottes: Rust/GStreamer paid internship at Collabora

Collabora is offering various paid internship positions for 2020. We have a nice range of very cool projects involving kernel work, Panfrost, Monado, etc.

I'll be mentoring a GStreamer project aiming to write a Chromecast sink element in Rust. It would be a great addition to GStreamer and would give the student a chance to learn not only about our favorite multimedia framework but also about bindings between C GObject code and Rust.

So if you're interested, don't hesitate to apply, or contact me if you have any questions.

by Guillaume Desmottes at January 10, 2020 10:48 AM

January 08, 2020

Víctor Jáquez: GStreamer-VAAPI 1.16 and libva 2.6 in Debian

Debian has migrated libva 2.6 into testing. This release includes a pull request that changes how the drivers are selected to be loaded and used. As the pull request mentions:

libva will try to load iHD firstly, if it failed. then it will load i965.

Also, Debian testing has imported that iHD driver in two flavors: intel-media-driver and intel-media-driver-non-free. So basically the iHD driver is now the main VAAPI driver for Intel platforms, though it only supports newer chips; older ones still require i965-va-driver.

Sadly, the current stable GStreamer-VAAPI does not include the iHD driver in its driver white list. This poses a problem for users who have installed either of the intel-media-driver packages, because, by default, that driver is ignored and the VAAPI GStreamer elements won’t be registered.

There are three temporary workarounds (mutually exclusive) for those users (updated), with a quick sanity check sketched right after the list:

  1. Uninstall intel-media-driver* and install (or keep) the old i965-va-driver-shaders/i965-va-driver.
  2. Export LIBVA_DRIVER_NAME=i965 by default in your session. Normally this is done by adding the export to your $HOME/.profile file. This environment variable forces libva to load the i965 driver.
  3. And finally, export GST_VAAPI_ALL_DRIVERS=1 by default in your session. This is not advised since many applications, such as Epiphany, might fail.
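
Whichever workaround you pick, you can verify that the VAAPI elements got registered by querying the GStreamer registry. A minimal sketch in Python with PyGObject (vaapih264dec is just one of the gstreamer-vaapi elements; the environment variables must be set before GStreamer loads its plugins):

import os

# Workaround 2: force libva to load the i965 driver
# (or set GST_VAAPI_ALL_DRIVERS=1 here instead, for workaround 3)
os.environ.setdefault("LIBVA_DRIVER_NAME", "i965")

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# If the driver was accepted, gstreamer-vaapi elements show up in the registry
print("vaapih264dec registered:", Gst.ElementFactory.find("vaapih264dec") is not None)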

We prefer not to include iHD in the stable white list because most of the work on supporting that driver happened after the 1.16 release.

In the GStreamer-VAAPI master branch (actively in development) we have merged iHD into the white list, since the Intel team has been working a lot to make it work. It will, though, only be released with GStreamer 1.18.

by vjaquez at January 08, 2020 04:36 PM

December 21, 2019

Sebastian Pölsterl: scikit-survival 0.11 featuring Random Survival Forests released

Today, I released a new version of scikit-survival which includes an implementation of Random Survival Forests. Like its popular counterparts for classification and regression, a Random Survival Forest is an ensemble of tree-based learners. A Random Survival Forest ensures that individual trees are de-correlated by 1) building each tree on a different bootstrap sample of the original training data, and 2) at each node, only evaluating the split criterion for a randomly selected subset of features and thresholds. Predictions are formed by aggregating the predictions of the individual trees in the ensemble.

For a full list of changes in scikit-survival 0.11, please see the release notes.

The latest version can be downloaded via conda or pip. Pre-built conda packages are available for Linux, OSX and Windows via

 conda install -c sebp scikit-survival

Alternatively, scikit-survival can be installed from source via pip:

 pip install -U scikit-survival

Using Random Survival Forests

To demonstrate Random Survival Forest, I’m going to use data from the German Breast Cancer Study Group (GBSG-2) on the treatment of node-positive breast cancer patients. It contains data on 686 women and 8 prognostic factors:

  1. age,
  2. estrogen receptor (estrec),
  3. whether or not a hormonal therapy was administered (horTh),
  4. menopausal status (menostat),
  5. number of positive lymph nodes (pnodes),
  6. progesterone receptor (progrec),
  7. tumor size (tsize),
  8. tumor grade (tgrade).

The goal is to predict recurrence-free survival time.

The code to reproduce the results below is available in this notebook.
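
For completeness, the snippets below assume imports along these lines (reconstructed here since they are not shown in the post; the random seed value is my assumption, any fixed value gives comparable results):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder

from sksurv.datasets import load_gbsg2
from sksurv.ensemble import RandomSurvivalForest
from sksurv.preprocessing import OneHotEncoder

# assumed seed so that the train/test split and the forest are reproducible
random_state = 20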

First, we need to load the data and transform it into numeric values.

X, y = load_gbsg2()
grade_str = X.loc[:, "tgrade"].astype(object).values[:, np.newaxis]
grade_num = OrdinalEncoder(categories=[["I", "II", "III"]]).fit_transform(grade_str)
X_no_grade = X.drop("tgrade", axis=1)
Xt = OneHotEncoder().fit_transform(X_no_grade)
Xt = np.column_stack((Xt.values, grade_num))
feature_names = X_no_grade.columns.tolist() + ["tgrade"]

Next, the data is split into 75% for training and 25% for testing so we can determine how well our model generalizes.

X_train, X_test, y_train, y_test = train_test_split(
    Xt, y, test_size=0.25, random_state=random_state)

Training

Several split criteria have been proposed in the past, but the most widespread one is based on the log-rank test, which you probably know from comparing survival curves among two or more groups. Using the training data, we fit a Random Survival Forest comprising 1000 trees.

rsf = RandomSurvivalForest(n_estimators=1000,
                           min_samples_split=10,
                           min_samples_leaf=15,
                           max_features="sqrt",
                           n_jobs=-1,
                           random_state=random_state)
rsf.fit(X_train, y_train)

We can check how well the model performs by evaluating it on the test data.

rsf.score(X_test, y_test)

This gives a concordance index of 0.68, which is a good value and matches the results reported in the Random Survival Forests paper.
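
For reference, score computes Harrell’s concordance index from the predicted risk scores; roughly the same value can be obtained directly (a sketch; the "cens" and "time" field names of the structured array returned by load_gbsg2 are assumed):

from sksurv.metrics import concordance_index_censored

risk_scores = rsf.predict(X_test)  # higher score means higher predicted risk

# returns (cindex, concordant, discordant, tied_risk, tied_time)
cindex = concordance_index_censored(y_test["cens"], y_test["time"], risk_scores)[0]
print(round(cindex, 2))  # ~0.68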

Predicting

For prediction, a sample is dropped down each tree in the forest until it reaches a terminal node. The data in each terminal node is used to non-parametrically estimate the survival and cumulative hazard function using the Kaplan-Meier and Nelson-Aalen estimator, respectively. In addition, a risk score can be computed that represents the expected number of events for one particular terminal node. The ensemble prediction is simply the average across all trees in the forest.

Let’s first select a couple of patients from the test data according to the number of positive lymph nodes and age.

a = np.empty(X_test.shape[0], dtype=[("age", float), ("pnodes", float)])
a["age"] = X_test[:, 0]
a["pnodes"] = X_test[:, 4]
sort_idx = np.argsort(a, order=["pnodes", "age"])
X_test_sel = pd.DataFrame(
    X_test[np.concatenate((sort_idx[:3], sort_idx[-3:]))],
    columns=feature_names)
   age  estrec  horTh  menostat  pnodes  progrec  tsize  tgrade
0  33.0     0.0    0.0       0.0     1.0     26.0   35.0     2.0
1  34.0    37.0    0.0       0.0     1.0      0.0   40.0     2.0
2  36.0    14.0    0.0       0.0     1.0     76.0   36.0     1.0
3  65.0    64.0    0.0       1.0    26.0      2.0   70.0     2.0
4  80.0    59.0    0.0       1.0    30.0      0.0   39.0     1.0
5  72.0  1091.0    1.0       1.0    36.0      2.0   34.0     2.0

The predicted risk scores indicate that risk for the last three patients is quite a bit higher than that of the first three patients.

pd.Series(rsf.predict(X_test_sel))
0     91.477609
1    102.897552
2     75.883786
3    170.502092
4    171.210066
5    148.691835
dtype: float64

We can have a more detailed insight by considering the predicted survival function. It shows that the biggest difference occurs roughly within the first 750 days.

surv = rsf.predict_survival_function(X_test_sel)
for i, s in enumerate(surv):
    plt.step(rsf.event_times_, s, where="post", label=str(i))
plt.ylabel("Survival probability")
plt.xlabel("Time in days")
plt.grid(True)
plt.legend()

Alternatively, we can also plot the predicted cumulative hazard function.

surv = rsf.predict_cumulative_hazard_function(X_test_sel)
for i, s in enumerate(surv):
    plt.step(rsf.event_times_, s, where="post", label=str(i))
plt.ylabel("Cumulative hazard")
plt.xlabel("Time in days")
plt.grid(True)
plt.legend()

Permutation-based Feature Importance

The implementation is based on scikit-learn’s Random Forest implementation and inherits many features, such as building trees in parallel. What’s currently missing is feature importances via the feature_importances_ attribute. This is due to the way scikit-learn’s implementation computes importances. It relies on a measure of impurity for each child node, and defines importance as the amount of decrease in impurity due to a split. For traditional regression, impurity would be measured by the variance, but for survival analysis there is no per-node impurity measure due to censoring. Instead, one could use the magnitude of the log-rank test statistic as an importance measure, but scikit-learn’s implementation doesn’t seem to allow this.

Fortunately, this is not a big concern though, as scikit-learn’s definition of feature importance is non-standard and differs from what Leo Breiman proposed in the original Random Forest paper. Instead, we can use permutation to estimate feature importance, which is preferred over scikit-learn’s definition. This is implemented in the ELI5 library, which is fully compatible with scikit-survival.

import eli5
from eli5.sklearn import PermutationImportance
perm = PermutationImportance(rsf, n_iter=15, random_state=random_state)
perm.fit(X_test, y_test)
eli5.show_weights(perm, feature_names=feature_names)
Weight Feature
0.0676 ± 0.0229 pnodes
0.0206 ± 0.0139 age
0.0177 ± 0.0468 progrec
0.0086 ± 0.0098 horTh
0.0032 ± 0.0198 tsize
0.0032 ± 0.0060 tgrade
-0.0007 ± 0.0018 menostat
-0.0063 ± 0.0207 estrec

The result shows that the number of positive lymph nodes (pnodes) is by far the most important feature. If its relationship to survival time is removed (by random shuffling), the concordance index on the test data drops on average by 0.0676 points. Again, this agrees with the results from the original Random Survival Forests paper.

December 21, 2019 04:46 PM

December 18, 2019

GStreamer: GStreamer Rust bindings 0.15.0 release

(GStreamer)

A new version of the GStreamer Rust bindings, 0.15.0, was released.

As usual this release follows the latest gtk-rs release, and a new version of the GStreamer plugins written in Rust was also released.

This new version features a lot of newly bound API for creating subclasses of various GStreamer types: GstPreset, GstTagSetter, GstClock, GstSystemClock, GstAudioSink, GstAudioSrc, GstDevice, GstDeviceProvider, GstAudioDecoder and GstAudioEncoder.

In addition to that, a lot of bugfixes and further API improvements have happened over the last few months that should make development of GStreamer applications or plugins in Rust as convenient as possible.

A new release of the GStreamer Rust plugins will follow in the next few days.

Details can be found in the release notes for gstreamer-rs and gstreamer-rs-sys.

The code and documentation for the bindings is available on the freedesktop.org GitLab, as well as on crates.io.

If you find any bugs, missing features or other issues please report them in GitLab.

December 18, 2019 05:00 PM

December 17, 2019

Bastien Nocera: GMemoryMonitor (low-memory-monitor, 2nd phase)

(Bastien Nocera)

TL;DR

Use GMemoryMonitor in glib 2.63.3 and newer in your applications to lower overall memory usage, and detect low memory conditions.

low-memory-monitor

To start with, let's come back to low-memory-monitor, announced at the end of August.

It's not really a “low memory monitor”. I know, the name is deceiving, but it actually monitors memory pressure stalls: how hard it is for the kernel to allocate memory when applications need it. The more memory pressure there is, the longer the kernel takes to allocate memory, usually because it needs to move memory around to make room for a big allocation, when an application starts up for example, or prepares an in-memory buffer for saving.
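
For the curious, the pressure stall information the daemon builds on is exposed by the kernel under /proc/pressure/memory (assuming a kernel with PSI enabled); a rough sketch of reading it from Python:

# Each line looks like: "some avg10=0.00 avg60=0.00 avg300=0.00 total=0"
with open("/proc/pressure/memory") as f:
    for line in f:
        kind, *fields = line.split()
        stats = dict(field.split("=", 1) for field in fields)
        print(kind, "- stalled", stats["avg10"], "% of the time (10s average)")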

It is not a daemon that will kill programs on low memory. It's not a user-space out-of-memory killer, and does not take those policy decisions. It can however be configured to ask the kernel to do that. The kernel doesn't really know what it's doing though, and user-space isn't helping either, so best disable that for now...

As listed in low-memory-monitor's README (and in the announcement post), there were a number of similar projects around, but none that offered everything we needed, e.g.:
  • Has a D-Bus interface to propagate low memory conditions
  • Requires Linux 5.2's kernel memory pressure stalls information (Android's lowmemorykiller daemon has loads of code to get the same information from the kernel for older versions, and it really is quite a lot of code)
  • Written in a compiled language to save on startup/memory usage costs (around 500 lines of C code, as counted by sloccount)
  • Built-in policy, based upon values used in Android and Endless OS
GMemoryMonitor

Next up, in our effort to limit memory usage, we'll need some help from applications. That's where GMemoryMonitor comes in. It's simple enough: listen to the low-memory-warning signal and, when you receive it, free some image thumbnails or index caches, or dump some data to disk.

The signal also gives you a “warning level”, with 255 being when low-memory-monitor would trigger the kernel's OOM killer, and lower values indicating different levels of “try to be a good citizen”.
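
To give an idea of what this looks like from the application side, here is a minimal PyGObject sketch (assuming a GLib new enough to expose GMemoryMonitor, i.e. 2.63.3 or later):

import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio, GLib

def on_low_memory_warning(monitor, level):
    # level is a Gio.MemoryMonitorWarningLevel; the higher, the more severe
    print("Low memory warning, level:", int(level))
    # free thumbnails, drop index caches, flush data to disk, etc.

monitor = Gio.MemoryMonitor.dup_default()
monitor.connect("low-memory-warning", on_low_memory_warning)

GLib.MainLoop().run()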

The more astute amongst you will have noticed that low-memory-monitor runs as root, on the system bus, and will wonder how those new fangled (5 years old today!) sandboxed applications would receive those signals. Fear not! Support for a portal version of GMemoryMonitor landed in xdg-desktop-portal on the same day as in glib. Everything is tied together with installed tests that use the real xdg-desktop-portal to test the portal and unsandboxed versions.

How about an OOM killer?

By using memory pressure stall information, we receive information about the state of the kernel before getting into swapping that'd cause the machine to become unusable. This also means that, as our threshold for keeping everything ticking is low, if we were to kill high memory consumers, we'd get a butter smooth desktop, but, based on my personal experience, your browser and your mail client would take it in turns disappearing from your desktop in a way that you wouldn't even notice.

We'll definitely need to think about our next step in application state management, and changing our running applications paradigm.

Distributions should definitely disable the OOM killer for now, and possibly try their hands at upstreaming some systemd OOMPolicy and OOMScoreAdjust options for system daemons.

Conclusion

Creating low-memory-monitor was easy enough; getting everything else in place was decidedly more complicated. In addition to requiring changes to glib, xdg-desktop-portal and python-dbusmock, it also required a lot of work on the glib CI to save me from having to write integration tests in C that would have required a lot of scaffolding. So thanks to all involved, in particular Philip Withnall for his patience reviewing my changes.

by Bastien Nocera (noreply@blogger.com) at December 17, 2019 11:53 PM

December 15, 2019

Gustavo Orrillo: Special collections’ design process

The visualization we developed for the Network of Libraries of the Bank of the Republic of Colombia is a web tool that allows users to create their own search paths through the special documents and collections available in the libraries. This project took around 6 months of work, from initial research and sketching to the final product that is currently in use. It gave us a unique opportunity to apply novel frameworks and technologies for web development such as p5.js to make the rich cultural heritage deposited at the Network of Libraries more easily accessible to a wide range of users, from occasional visitors to expert researchers. This post shows some of the visual materials and concepts that inspired the tool, as well as design sketches and prototypes.

Early UI sketch

Background

The Network of Libraries of the Bank of the Republic is the depository of more than thirty-five historical archives that constitute a primary source for the reconstruction of the history of Colombia. Researchers and historians can consult most of these archives in the Room of Rare Books and Manuscripts of the Luis Ángel Arango Library; however, some of these archives are available at other cultural centers of the Bank of the Republic around the country.

When we started working on this project, several of the materials were available in digital form through a web portal where the information was organized in various thematic groupings and by document types. Analysis of this portal revealed the following features:

  • Most of the data had already been digitized
  • A fraction of the contents were available through a “pre-made” timeline visualization implemented with an existing javascript library.
  • Another part of the contents were available through different interfaces (list, maps, etc)
  • There was little connection between all the contents, each topic had to be visualized within its own separate page

The major aims of the interactive visualization to be developed were the following:

  • to give more holistic access to the contents of the site
  • to emphasize relationships between separate themes and types of materials, and facilitate finding contents and understanding the context of these contents

Time-based data

As the special documents and collections had a strong temporal dimension, and timelines were used in the original version of the website, we started exploring timeline-based visualizations of the data. The slideshow below contains some previous projects we considered in our research:

A timeline of history - http://histography.io/ World Digital Library Timelines - https://www.wdl.org/en/timelines/ Interactive Blog Calendar - https://eagereyes.org/blog-calendar Summit on the Summit Annual Reports - https://fathom.info/reports

Visualizations using the timeline metaphor.

Two issues with using timelines to visualize the collections from the Network of Libraries were that the data is sparsely populated, and that some items could cover a very wide time range, for example 50 years or even more. So the problem became: how do we handle such a large interval properly in a traditional timeline? Because of this issue, we looked into more dynamic approaches. The interactive timelines in the New Cooper Hewitt Experience at the Smithsonian Cooper Hewitt Museum in New York:

and the Timeline of Modern Art at Tate Modern in London:

are great examples where a museum’s collections form a flowing stream from which visitors can select and manipulate the items they find interesting.

Search processes

Another concept that we considered early on was that of serendipitous search, as suggested by the following picture of a library user browsing the shelves:

Browsing the shelves

To us, this idea of serendipitous search resonated with Psychogeography, the term coined in the 1950s by the Situationists and denoting the “exploration of urban environments that emphasizes playfulness and drifting”. Can we transfer this practice into the visualization of cultural materials to create some kind of “Librageography” or “Bibliogeografía”, which prioritizes playful search? Some visual inspiration related to these concepts:

In particular, the psychogeographic concept of drifting led to the idea of a visual exploration where each user constructs their own map of the data by wandering through the connections between the data elements, defined by the common tags shared by the collections. The following early sketch offers a glimpse of these schemes:

First hand-drawn sketch

While a drifting navigation through the collections could provide an engaging and playful experience for occasional users, the visualization should also offer tools for a more directed search that advanced users may need when researching the data or looking for specific information. A first mockup of the web viewer incorporates such tools:

First hand-drawn sketch

Refining the design

Once we identified an initial visual metaphor and navigation mechanism, it was the time to start iterating over the early sketches while discussing the progress with the team from the Virtual Library of the Bank of the Republic, who was in charge of the project.

The prototype supported mobile phones from the beginning, as it was very important that the web visualizer was accessible on both desktop and mobile:

Mobile prototype

In parallel to the design of the main visualization screen, we also started sketching the intro screen. This is also very important, as the intro screen is the entry point to the visualization, and it may dissuade users, especially newcomers, from staying on the page and exploring the data:

Sketches of intro screen

We developed a working prototype based on those initial designs and sketches, which allowed us to test the basic user flows and iterate the designs to obtain feedback quickly:

Once we agreed on the overall modes of introduction, presentation and interaction, we started refining the visual appearance of the prototype through color, textures, text, animation, and the mutual relationships of all the elements:

Work on the UI was also ongoing at this stage:

After a few additional rounds of design iterations, the color palette and UI improved significantly, and at that stage we were approaching the final version, with only a few minor tweaks left:

Responsive design

With a great majority of users navigating the web on their phones, a big priority for us was to ensure that the viewer worked well on mobile browsers. In the end, we were able to keep the functionality and appearance consistent across desktop, iOS, and Android, while dealing with the differences in screen size and interaction modalities (i.e.: touch vs mouse):

Conclusions

The Virtual Library of the Bank of the Republic released the new Special Documents and Collections portal, including the interactive viewer, in November of 2019. We are very satisfied with the design process described here as well as with the final product. We believe that we created a useful tool for cultural promotion and research in the humanities, and we hope to gain insight into user engagement with the viewer based on the analytics collected by the portal.

by Andres Colubri at December 15, 2019 04:00 PM

December 13, 2019

Bastien Nocera: Dual-GPU support follow-up: NVIDIA driver support

(Bastien Nocera)

If you remember, back in 2016, I did the work to get a “Launch on Discrete GPU” menu item added to applications in gnome-shell.

This cycle I worked on adding support for the NVIDIA proprietary driver, so that the menu item shows up, and the right environment variables are used to launch applications on that device.

Tested with another unsupported device...


Behind the scenes

There were a number of problems with the old detection code in switcheroo-control:
- it required the graphics card to use vga_switcheroo in the kernel, which the NVIDIA driver didn't do
- it could support more than 2 GPUs
- and it didn't really know which GPU was going to be the “main” one

And, on top of all that, gnome-shell expected the Mesa OpenGL stack to be used, so it only knew the right environment variables to do that, and only for one secondary GPU.

So we've extended switcheroo-control and its API to do all this.

(As a side note, commenters asked me about the KDE support, and how it would integrate, and it turns out that KDE's code just checks for the presence of a file in /sys, which is only present when vga_switcheroo is used. So I would encourage KDE to adopt the switcheroo-control D-Bus API for this)
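
For illustration, querying switcheroo-control over the system bus from Python could look roughly like this (the well-known name, object path and the HasDualGpu property are written from memory here, so double-check them against the switcheroo-control documentation):

from gi.repository import Gio

proxy = Gio.DBusProxy.new_for_bus_sync(
    Gio.BusType.SYSTEM,
    Gio.DBusProxyFlags.NONE,
    None,
    "net.hadess.SwitcherooControl",   # well-known bus name (assumed)
    "/net/hadess/SwitcherooControl",  # object path (assumed)
    "net.hadess.SwitcherooControl",   # interface name (assumed)
    None)

has_dual_gpu = proxy.get_cached_property("HasDualGpu")
print("Dual GPU:", has_dual_gpu.unpack() if has_dual_gpu is not None else "unknown")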

Closing

All this will be available in Fedora 32, using GNOME 3.36 and switcheroo-control 2.0. We might backport this to Fedora 31 after it's been tested, and if there is enough interest.

by Bastien Nocera (noreply@blogger.com) at December 13, 2019 04:15 PM

December 08, 2019

Phil Normand: HTML overlays with GstWPE, the demo

(Phil Normand)

Once again this year I attended the GStreamer conference and just before that, Embedded Linux conference Europe which took place in Lyon (France). Both events were a good opportunity to demo one of the use-cases I have in mind for GstWPE, HTML overlays!

As we, at Igalia, usually have a …

by Philippe Normand at December 08, 2019 02:00 PM

December 03, 2019

GStreamer: GStreamer 1.16.2 stable bug fix release

(GStreamer)

The GStreamer team is pleased to announce the second bug fix release in the stable 1.16 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.16.x.

See /releases/1.16/ for the details.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Download tarballs directly here: gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

December 03, 2019 08:00 PM

November 13, 2019

Sebastian Dröge: The GTK Rust bindings are not ready yet? Yes they are!

(Sebastian Dröge)

When talking to various people at conferences and elsewhere over the last year, a recurring topic was that they believed that the GTK Rust bindings are not ready for use yet.

I don’t know where that perception comes from but if it was true, there wouldn’t have been applications like Fractal, Podcasts or Shortwave using GTK from Rust, or I wouldn’t be able to do a workshop about desktop application development in Rust with GTK and GStreamer at the Linux Application Summit in Barcelona this Friday (code can be found here already) or earlier this year at GUADEC.

One reason I sometimes hear is that there is no support for creating subclasses of GTK types in Rust yet. While that used to be true, it is not anymore. But even more important: unless you want to create your own special widgets, you don’t need that. Many examples and tutorials in other languages make use of inheritance/subclassing for the applications’ architecture, but that’s because it is the idiomatic pattern in those languages. However, in Rust other patterns are more idiomatic, and even for those examples and tutorials in other languages it wouldn’t be the one and only option for designing applications.

Almost everything is included in the bindings at this point, so seriously consider writing your next GTK UI application in Rust. While some minor features are still missing from the bindings, none of those should prevent you from successfully writing your application.

And if something is actually missing for your use-case or something is not working as expected, please let us know. We’d be happy to make your life easier!

P.S.

Some people are already experimenting with new UI development patterns on top of the GTK Rust bindings. So if you want to try developing a UI application but want to try something different than the usual signal/callback spaghetti code, also take a look at those.

by slomo at November 13, 2019 03:02 PM

November 06, 2019

GStreamer: GStreamer Conference 2019 talk recordings online

(GStreamer)

Thanks to our partners at Ubicast the recordings of this year's GStreamer Conference talks are now available online.

You can view or download the GStreamer Conference 2019 videos here.

Talks:

Lightning Talks:

November 06, 2019 02:00 PM

November 01, 2019

Jean-François Fortin Tam: Survey: making Getting Things GNOME sustainable as a productivity app for public good

Now that you’ve been introduced to the overall concept of Getting Things Done with the video in my previous blog post, let me show you the secret weapon of chaos warriors who want to follow that methodology with a digital tool they can truly own.

Your secret weapon: “Getting Things GNOME”

“Getting Things GNOME” is a native GNOME desktop application that is entirely free and open-source (licensed under the GPL) and runs locally on your computer.

Here’s what it looks like on my computer (and yes, I do have over 600 actionable tasks at all times lately):

GTG 0.3.1 showing my personal task list

Getting Things GNOME is one of the most well-made productivity apps of the decade. It has:

  • A mature codebase with almost a decade of work that has been put into it
  • A nice GTK interface (GTK2 in the stable version, GTK3 in the development version), with a flexible free-form text editor for handling inline @tags, and extended descriptions/notes below the title
  • The ability to quickly defer tasks (ex: “do it tomorrow”, “do it next week”, “do it next month”, etc.)
  • Natural language parsing (you can tag things while you type the title, and you can use dates such as “Tomorrow”, “Thursday” in addition to the standard “YYYY-MM-DD”)
  • “Work view” mode for displaying only “actionable” tasks so you can focus on your work
  • The notion of sub-tasks and dependencies
  • Tags, which can also be made hierarchical (ex: “@phone” can be a child of “@work”) and can have colors and icons
  • Search and “saved searches”
  • Plugins!
  • Works offline, owned by you with no restrictions, no licensing B.S., software “by the people for the people”

This is what the Git (development) version of GTG’s UI currently looks like, showing my personal tasks (it needs some improvement, but it’s already impressive):

My tasks with the “kusanagi” tag, as shown by the development version’s GTK3 UI.

Here are some more (old) screenshots:

Historical context

Unfortunately, the previous maintainers never developed a business model to sustain development, and, after a long period of development hell, abandoned the project as they moved on to get various jobs.

This is understandable and I can’t fault them… however, I still need a working and maintained application that will continue working throughout the years as our technological landscape evolves (the Linux platform is pretty ruthless on that front). Otherwise, you risk having your Linux distribution suddenly stop packaging the app you dearly depend on.

If anyone was wondering if all Free and Open-Source software can continue as a side-project forever, this is a prime case study. This is what happens when software goes unmaintained.

Meanwhile, in the world of proprietary software and software-as-a-service, countless people are paying a big one-time, monthly or yearly licensing fee to use to-do software applications that they don’t own, and for which they can’t be 110% certain that the software respects their digital rights. Depending on these raises fundamental questions when it’s something as long-lived and all-encompassing as a personal productivity system: Can you trust that cloud service? Or that this proprietary app isn’t profiling what’s going on in your OS, or uploading your data if it sees certain key words? That the licensing fee won’t increase? That the whole thing won’t change when the parent company becomes part of a merger, acquisition or bankruptcy? That the app will keep functioning throughout the years as you upgrade your operating systems, that it will work on more than one or two “authorized” computers?

All these questions are relevant when you’re depending on proprietary software. And when you’re depending on an online/cloud service, you’re always at the mercy of that service eventually shutting down (or being acquired, merged, etc.). There are numerous examples of that happening, including this, this, this, this and that, to name only a few.

This is not The End?

It doesn’t have to be that way.

What if we could bring “GTG” back from the dead, finish the job, and sustain it forever more so that we can all benefit from it?

There are two ways this can be done (which are not mutually exclusive, I might add):

  • From the existing userbase, reform a completely new team of software developers that will be willing to come together and “finish the job” to get a new release out of the door. I have obtained administration rights to the GitHub repository so I can, in practice, play “project manager” and help volunteers get on board. We could have a status analysis and brainstorming session to establish a roadmap with the shortest path to releasability, and beyond, and then work together for the months and years to come to accomplish our goals
  • Someone with enough freedom and time (it could be me, it could be someone else, it could be multiple people) gets paid to be the day-to-day maintainer(s), the one(s) doing the majority of the core “boring” work to get everything back into shape and mentoring the part-time contributors who contribute opportunistic (“drive-by”) patches and merge requests.

The two options are viable, but require enough public interest to happen. This is why I’m making a survey, linked below, to evaluate the potential for such an initiative. Are you interested in GTG coming back in full force? Whatever your answer, let me know through this 5-minute survey (November 24th update: survey is now closed, thanks for participating!). Your input was very much appreciated:

  • It is much more efficient to run this survey (and share results here) than to spend months preparing a campaign only to find out that there is or isn’t enough demand.
  • It also lets me raise awareness and hopefully assemble a team of new contributors (because you don’t just make them appear out of thin air); if enough volunteers show up (with a lot of time and passion to share) we can get started without much delay, get together and create momentum.

Help me evaluate how we can bring it back to life, with this 5-minute survey

The survey can be found here (November 24th update: the survey is now closed, thanks for participating!)

Filling this survey should take 4-6 minutes at most.

Let’s be clear: I’m not doing this for myself (just grabbing a proprietary app package is much easier and would let me move on to MUCH more lucrative opportunities), I would be doing this for the greater public good, because it breaks my heart to think that GTG would die when it’s such a great piece of software.

There is no sane FLOSS native desktop alternative for Linux users, and open-source software should be worth more money than proprietary software, not less: you are getting better value out of it, with an implicit guarantee that the software respects your rights and privacy, and that it will remain available forever as long as there is someone on the planet willing to maintain it.

On the other hand, spending time creating software costs money; the alternative is not caring and pursuing a lucrative career, so the software remains unmaintained and everybody loses. So I need to know that nursing GTG back to health would be worth the effort, that the application would be used by many (not just a handful) of people around the world. I seek “meaningful” work.

Help me determine if this is worth my (or anyone’s) time by filling the survey today, and please share it with those around you, and elsewhere on the interwebs. Thanks!

The post Survey: making Getting Things GNOME sustainable as a productivity app for public good appeared first on The Open Sourcerer.

by Jeff at November 01, 2019 12:30 PM

October 30, 2019

Jean-François Fortin Tam: The goldsmith and the chaos warrior: a typology of workers

As I’ve spent a number of years working for various organizations, big and small, with different types of collaborators and staffers, I’ve devised a simple typology of workers that can help explain the various levels of success, self-organization, productivity and stress of those workers, depending on whether there is a fit between their work type and their work processes. This is one of the many typologies I use to describe human behavior, and I haven’t spent years and a Ph.D. thesis devising this; these are just some down-to-earth reflections I’ve had. Without much further ado, here’s what I’ve come up with so far.


The first type of worker is what I call the “chaos warrior”: this includes the busy managers, professional event organizers, executives, deal-with-everything assistants, researchers, freelancers or contractors.

In my view, “chaos warriors” are the types of workers who—from a systemic point of view—have to deal with constantly changing environments and demands, time-based deadlines, dependencies on other people or materials, multiple parallel projects, etc.

  • Chaos warriors, in their natural state, very rarely have the luxury of single-threaded work and interruption-free environments (though those certainly would be welcome, and chaos warriors sometimes have to naturally retreat to external “think spaces” to get foundational work done).
  • Chaos warriors don’t necessarily enjoy the chaos (some of them hate it, some of them crave it), but it’s part of the system they find themselves in, so they have to structure their workflow around it—or risk incompetence or burning out really fast. They have to become “organized” chaos warriors, otherwise they’re just chaos “victims”.
  • The in-between state, the somewhat-organized-but-not-zen chaos worker that many freelancers experience, is what I call, “Calm Like a Bomb”.

Note that the chaos I am referring to is cognitive, not physical; firefighters, paramedics and ER nurses, the police and military, are “emergency” workers, not warriors of cognitive “chaos”. They are beyond the scope of what I’m covering here (and what’s coming in my next blog posts). They don’t need a “productivity system” to sort through cognitive overload, they deal with whatever comes forth as best as they can in any given situation.

The ultimate embodiment of a chaos warrior is the nameless heroine in the 4th DaiCon event opening animation from 1983: you can’t get a better representation of triumph amidst chaos than that!

The second category of workers is what I call the “goldsmith”—that is, people with a very specific role, who work in a single regular “employment” type of job, often with set hours, and possibly on-site (in an office/warehouse/shop/etc.).

This may include most office workers, public servants, software design & development folks who make a sharp separation between work and personal life, construction subcontractors working as part of a big real estate project, waiters and bartenders, technicians, retail sales & logistics, etc. I’m vastly simplifying and generalizing of course, but here I sketch the picture of someone who comes in in the morning, looks at the task list/assignments/inbox, works on that throughout the day, and then leaves their work life behind to enjoy their personal life; then the process resets on the next day.

  • “Pure” goldsmiths do not track work items outside of the workplace, and usually do not need to track personal items while at work (or aren’t allowed to). As such, in both settings, their mind is focused and clear. You arrive at context A, you work on context A’s items that are in front of you. You arrive at context B, you relax or deal with whatever has come up in your home “as it happens”. Arguably, from this standpoint of work-life separation, you could put some “emergency workers” in this category.
  • The goldsmith may have a simpler life, which is kind of a luxury, really: they can more easily have a tranquil mind, without the cognitive weight of hundreds of pending items and complex dependency chains governing their tasks. You do the job, you move on.
  • When they are asked to “produce” output in their area of expertise, those are often the type of workers that would benefit from a quiet, interruption-free work environment. There’s a reason why Joel Spolsky designed the FogCreek offices to allow developers to close the door and work in peace, instead of the chaos of open-space offices (that’s a story for another day).

Some specialized “creative” goldsmiths have a hard time separating work from personal life; even when they are home, they can’t help but think about potential creative solutions to the challenges they’re trying to solve at work. In that case, those may be “chaos warriors” in disguise.

In my view, personal productivity methodologies like GTD cater first and foremost to the “organized chaos warriors”, rather than the goldsmiths, who may have little use for all-encompassing cognitive techniques, or who may have tools that already structure their work for them.

Notably, in some industries like IT or the Free & Open-Source software sub-industry, we have done a pretty good job at externalizing (for better or for worse) the software developers and designers’ todo list as “bug/issue trackers”, and their assignments may often be linear and fairly predictable, allowing them to be “goldsmiths”. Most of the time, a software developer or designer, in their core duties, is going to deal with “whatever is in the issue list” (or kanban board), particularly in a team setting, and as such probably doesn’t feel the need to have a dedicated personal todo list, which might be considered duplication of information and management overhead. Input goes in (requirements, bug reports, feature requests), output goes out (a new feature, design, or fix). There are some exceptions to this generalization however:

  • When your issue tracker (bug inventory) is not actively managed (triaged, organized, regularly pruned), you eventually end up declaring “bugtracker bankruptcy”, or, like Benjamin Otte once said to me on IRC, “Whoever catches me first on IRC in the morning, wins.”
  • Some goldsmiths may have more complicated lives than just their job duties and might be interested by this approach nonetheless.
  • Sometimes, shared/public/open bug trackers are a tyranny on the mind, much like a popular email inbox: the demands are so numerous and complex (or unstructured) that they are not only externally imposed goals, they become imposed chaos—in which case the goldsmith may find that they need to extract a personal subset of the items from the “firehose” into a remixed, personalized, digestible task list for themselves.

Do you recognize yourself in one of these categories I’ve come up with? Or do you fit into some other category I might not have thought about? Did you find this essay interesting? Let me know in the comments!

If you’re a chaos warrior, or you fit any of the goldsmith’s “exceptions” (or if you’re interested in the field of personal productivity in general), you’ll probably be interested in reading my next article on (re)building the best free & open-source “GTD” application out there (but before that, if you haven’t read it already, check out my previous article on “getting things done”).

The post The goldsmith and the chaos warrior: a typology of workers appeared first on The Open Sourcerer.

by Jeff at October 30, 2019 12:30 PM

October 28, 2019

Jean-François Fortin Tam: A secret to productivity for busy individuals with chaotic contexts

Over the years, some people have asked me how I manage so many projects—short and long—without forgetting anything, without breaking promises and commitments, all while looking like a zen buddha. A few observers also remarked (often in mockery) that I tend to take a note of everything, that I document an outrageous amount of seemingly mundane details in my professional and personal life.

In the battlefield of the modern world’s incessant demands and boundless opportunities, I survive “seemingly effortlessly” by being methodology-intensive and adhering to a particular cognitive philosophy that acknowledges the limitations of the human brain—and works around them. This way of life lets me keep track of the big picture without being paralyzed by the daunting nature of a project:

(How do you climb a colossus? One hair at a time!)

I do procrastinate sometimes (and spend a long time juggling and incubating ideas), but this is not the same thing as being paralyzed by the anxiety of a restless mind, which is what, I posit, plagues many knowledge workers today.

  • My procrastination is often the result of the context and timing I find myself in: prioritizing other emergencies, or having no energy left for certain types of tasks, needing to find some long blocks of uninterrupted time to focus on a complex task, or simply needing additional tools or information.
  • When I’m fed up with “incubating” a task long enough and need to force the creation of “focused time”, I can always shut down my email and chat clients and blast out a State of Trance. That sometimes helps me get into a state of flow.

But surely, listening to Armin van Buuren’s playlists is not the primary gateway to productivity, is it? There has to be more to why I virtually don’t experience stress and anxiety!

So here’s a little secret I’ve been harboring for the last ten years: I’m a hardcore GTD practitioner. It basically governs my life.

In geek terms, that means I’m a cyborg who outsources part of his brain to an external memory system so that he can have all his CPU and RAM available for focused processing.

I could write an unbearably long blog post on the matter and try to explain GTD without forgetting anything, but I thought I’d make a quick overview video instead, which summarizes the core concept for you:

(By the way, hi, I have resurrected my YouTube channel! Feel free to subscribe as that will encourage me to publish more content)

David Allen’s GTD is certainly the single most transformative personal productivity book, bar none, that you should read, if you care about efficiency and stress reduction. It changed my life: it helped me throughout my university studies, through the jobs, through the Free & Open-Source software projects, and the personal day-to-day (things get a lot more complicated when you juggle career with projects, finances, home improvement, continuous learning, and a fat corgi dog).

Pictured: the fat corgi. That fluffy guy doesn’t care about personal productivity.

This methodology and philosophy works best for those I would call “organized chaos warriors”. Check out my two complementary blog posts: my typology of workers (where I define the chaos warriors) and the presentation of my favorite Free and Open-Source tool for Getting Things Done.


P.s.: I linked to the 2001 edition of the book (which is the one I had read) because, according to some reviewers’ comments on the 2015 remake, the 2nd edition is much longer for no real gain other than mentioning digital tools rather than a paper-based workflow.

The post A secret to productivity for busy individuals with chaotic contexts appeared first on The Open Sourcerer.

by Jeff at October 28, 2019 12:30 PM

October 24, 2019

Jean-François Fortin Tam: Understanding the Rothschild vs GNOME case in 12 minutes

What’s the deal with the Rothschild vs GNOME Shotwell patent litigation case that the GNOME Foundation must defend against, and why does it matter for protecting the Free & Open-Source software community at large? Here’s my personal attempt at explaining the matter with a short video.

Please note that this video represents my personal opinions; I am not a lawyer nor a representative of the GNOME Foundation, etc. That said, please feel free to share far and wide 🙂

The post Understanding the Rothschild vs GNOME case in 12 minutes appeared first on The Open Sourcerer.

by Jeff at October 24, 2019 06:00 PM

Xabier Rodríguez Calvar: VCR to WebM with GStreamer and hardware encoding

Many years ago my family bought a Panasonic VHS video camera and we recorded quite a lot of things: holidays, some local shows, etc. I even got paid 5000 pesetas (30€, more than 20 years ago) a couple of times to record weddings in an amateur way. Since my father passed away less than a year ago, I have wanted to convert those VHS tapes into something that can survive better, technologically speaking.

For the job I bought a USB 2.0 dongle and connected it to a VHS VCR through a SCART to RCA cable.

The dongle creates a V4L2 device for video and is detected by Pulseaudio for audio. As I want to see what I am converting live, I need to tee both audio and video to the corresponding sinks, while the other branch goes to the encoders, muxer and filesink. The command line for that would be:

gst-launch-1.0 matroskamux name=mux ! filesink location=/tmp/test.webm \
v4l2src device=/dev/video2 norm=255 io-mode=mmap ! queue ! vaapipostproc ! tee name=video_t ! \
queue ! vaapivp9enc rate-control=4 bitrate=1536 ! mux.video_0 \
video_t. ! queue ! xvimagesink \
pulsesrc device=alsa_input.usb-MACROSIL_AV_TO_USB2.0-02.analog-stereo ! 'audio/x-raw,rate=48000,channels=2' ! tee name=audio_t ! \
queue ! pulsesink \
audio_t. ! queue ! vorbisenc ! mux.audio_0

As you can see, I convert to WebM with VP9 and Vorbis. Something interesting is passing norm=255 to the v4l2src element so it captures PAL, and rate-control=4 (VBR) to the vaapivp9enc element; otherwise it uses CQP by default and the file size ends up being huge.
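
For what it’s worth, the same description can also be driven from Python through Gst.parse_launch, which makes it easy to send an EOS on Ctrl+C so the muxer finalizes the WebM file cleanly. A rough sketch (the device names are specific to my setup):

import signal

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Same pipeline description as the gst-launch-1.0 command above
pipeline = Gst.parse_launch("""
    matroskamux name=mux ! filesink location=/tmp/test.webm
    v4l2src device=/dev/video2 norm=255 io-mode=mmap ! queue ! vaapipostproc ! tee name=video_t !
    queue ! vaapivp9enc rate-control=4 bitrate=1536 ! mux.video_0
    video_t. ! queue ! xvimagesink
    pulsesrc device=alsa_input.usb-MACROSIL_AV_TO_USB2.0-02.analog-stereo !
    audio/x-raw,rate=48000,channels=2 ! tee name=audio_t !
    queue ! pulsesink
    audio_t. ! queue ! vorbisenc ! mux.audio_0
""")

loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::eos", lambda b, m: loop.quit())
bus.connect("message::error", lambda b, m: (print(m.parse_error()), loop.quit()))

# On Ctrl+C, send EOS so matroskamux can finish writing the file before we stop
GLib.unix_signal_add(GLib.PRIORITY_DEFAULT, signal.SIGINT,
                     lambda *args: pipeline.send_event(Gst.Event.new_eos()))

pipeline.set_state(Gst.State.PLAYING)
loop.run()
pipeline.set_state(Gst.State.NULL)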

You can see the pipeline, which is beautiful, here:

As you can see, we’re using vaapivp9enc here, which is hardware enabled. Having this pipeline running on my computer was consuming more or less 20% of CPU, with the CPU absolutely relaxed, leaving me the necessary computing power for my daily work. This would not be possible without GStreamer and the GStreamer VAAPI plugins, unlike with other solutions whose instructions you can find online.

If for some reason you can’t find vaapivp9enc in Debian, you should know there are a couple of packages for the Intel drivers and that the one you should install is intel-media-va-driver. Thanks go to my colleague at Igalia Víctor Jáquez, who maintains gstreamer-vaapi and helped me solve this problem.

My workflow for this was converting all tapes into WebM and then cutting them into the relevant pieces with PiTiVi, running GStreamer Editing Services, both co-maintained by my colleague at Igalia, Thibault Saunier.

by calvaris at October 24, 2019 10:27 AM

October 14, 2019

GStreamer: GStreamer Conference 2019: Full Schedule, Talks Abstracts and Speakers Biographies now available

(GStreamer)

The GStreamer Conference team is pleased to announce that the full conference schedule including talk abstracts and speaker biographies is now available for this year's lineup of talks and speakers, covering again an exciting range of topics!

The GStreamer Conference 2019 will take place on 31 October - 1 November 2019 in Lyon, France just after the Embedded Linux Conference Europe (ELCE).

Details about the conference and how to register can be found on the conference website.

This year's topics and speakers:

Lightning Talks:

  • Raising the Importance of the V4L2 plugin and Challenges
    Nicolas Dufresne, Collabora
  • WebKit-powered HTML overlays in your pipeline with GstWPE
    Philippe Normand, Igalia
  • Detect a metal can using GStreamer/OpenFoodFacts
    Stéphane Cerveau, Collabora
  • A new GStreamer RTSP Server
    Sebastian Dröge, Centricular
  • A brand new documentation infrastructure for the GStreamer framework
    Thibault Saunier, Igalia
  • GStreamer on Windows: Everything New
    Nirbheek Chauhan, Centricular
  • An Improved Latency Tracer
    Nicolas Dufresne, Collabora
  • Using Bots to Improve the Gitlab Workflow
    Jordan Petridis, Centricular
  • GNOME Radio
    Ole Aamot, GNOME
  • SCTE-35 support in GStreamer
    Edward Hervey, Centricular
  • Closed captions, AFD, BAR
    Aaron Boxer, Collabora
  • ...and more to come
  • ...
  • Submit your lightning talk now!

Many thanks to our sponsors, Collabora, Pexip, Igalia, Fluendo, Centricular, Facebook and Zeiss, without whom the conference would not be possible in this form. And to Ubicast who will be recording the talks again.

Considering becoming a sponsor? Please check out our sponsor brief.

We hope to see you all in Lyon in October! Don't forget to register!

October 14, 2019 06:00 PM

October 08, 2019

Andy Wingo: thoughts on rms and gnu

(Andy Wingo)

Yesterday, a collective of GNU maintainers publicly posted a statement advocating collective decision-making in the GNU project. I would like to expand on what that statement means to me and why I signed on.

For many years now, I have not considered Richard Stallman (RMS) to be the head of the GNU project. Yes, he created GNU, speaking it into existence via prophetic narrative and via code; yes, he inspired many people, myself included, to make the vision of a GNU system into a reality; and yes, he should be recognized for these things. But accomplishing difficult and important tasks for GNU in the past does not grant RMS perpetual sovereignty over GNU in the future.

ontological considerations

More on the motivations for the non serviam in a minute. But first, a meta-point: the GNU project does not exist, at least not in the sense that many people think it does. It is not a legal entity. It is not a charity. You cannot give money to the GNU project. Besides the manifesto, GNU has no by-laws or constitution or founding document.

One could describe GNU as a set of software packages that have been designated by RMS as forming part, in some way, of GNU. But this artifact-centered description does not capture movement: software does not, by itself, change the world; it lacks agency. It is the people that maintain, grow, adapt, and build the software that are the heart of the GNU project -- the maintainers of and contributors to the GNU packages. They are the GNU of whom I speak and of whom I form a part.

wasted youth

Richard Stallman describes himself as the leader of the GNU project -- the "chief GNUisance", he calls it -- but this position only exists in any real sense by consent of the people that make GNU. So what is he doing with this role? Does he deserve it? Should we consent?

To me it has been clear for many years that to a first approximation, the answer is that RMS does nothing for GNU. RMS does not write software. He does not design software, or systems. He does hold a role of accepting new projects into GNU; there, his primary criterion is not "does this make a better GNU system"; it is, rather, "does the new project meet the minimum requirements".

By itself, this seems to me to be a failure of leadership for a software project like GNU. But unfortunately when RMS's role in GNU isn't neglect, more often than not it's negative. RMS's interventions are generally conservative -- to assert authority over the workings of the GNU project, to preserve ways of operating that he sees as important. See for example the whole glibc abortion joke debacle as an example of how RMS acts, when he chooses to do so.

Which, fair enough, right? I can hear you saying it. RMS started GNU so RMS decides what it is and what it can be. But I don't accept that. GNU is about practical software freedom, not about RMS. GNU has long outgrown any individual contributor. I don't think RMS has the legitimacy to tell this group of largely volunteers what we should build or how we should organize ourselves. Or rather, he can say what he thinks, but he has no dominion over GNU; he does not have majority sweat equity in the project. If RMS actually wants the project to outlive him -- something that by his actions is not clear -- the best thing that he could do for GNU is to stop pretending to run things, to instead declare victory and retire to an emeritus role.

Note, however, that my personal perspective here is not a consensus position of the GNU project. There are many (most?) GNU developers that still consider RMS to be GNU's rightful leader. I think they are mistaken, but I do not repudiate them for this reason; we can work together while differing on this and other matters. I simply state that I, personally, do not serve RMS.

selective attrition

Though the "voluntary servitude" questions are at the heart of the recent joint statement, I think we all recognize that attempts at self-organization in GNU face a grave difficulty, even if RMS decided to retire tomorrow, in the way that GNU maintainers have selected themselves.

The great tragedy of RMS's tenure in the supposedly universalist FSF and GNU projects is that he behaves in a way that is particularly alienating to women. It doesn't take a genius to conclude that if you're personally driving away potential collaborators, that's a bad thing for the organization, and actively harmful to the organization's goals: software freedom is a cause that is explicitly for everyone.

We already know that software development in people's free time skews towards privilege: not everyone has the ability to devote many hours per week to what is for many people a hobby, and it follows of course that those that have more privilege in society will be more able to establish a position in the movement. And then on top of these limitations on contributors coming in, we additionally have this negative effect of a toxic culture pushing people out.

The result, sadly, is that a significant proportion of those that have stuck with GNU don't see any problems with RMS. The cause of software freedom has always run against the grain of capitalism so GNU people are used to being a bit contrarian, but it has also had the unfortunate effect of creating a cult of personality and a with-us-or-against-us mentality. For some, only a traitor would criticise the GNU project. It's laughable but it's a thing; I prefer to ignore these perspectives.

Finally, it must be said that there are a few GNU people for whom it's important to check if the microphone is on before making a joke about rape culture. (Incidentally, RMS had nothing to say on that issue; how useless.)

So I honestly am not sure if GNU as a whole effectively has the demos to make good decisions. Neglect and selective attrition have gravely weakened the project. But I stand by the principles and practice of software freedom, and by my fellow GNU maintainers who are unwilling to accept the status quo, and I consider attempts to reduce GNU to founder-loyalty to be mistaken and without legitimacy.

where we're at

Given this divided state regarding RMS, the only conclusion I can make is that for the foreseeable future, GNU is not likely to have a formal leadership. There will be affinity groups working in different ways. It's not ideal, but the differences are real and cannot be papered over. Perhaps in the medium term, GNU maintainers can reach enough consensus to establish a formal collective decision-making process; here's hoping.

In the meantime, as always, happy hacking, and: no gods! No masters! No chief!!!

by Andy Wingo at October 08, 2019 03:34 PM

October 07, 2019

Christian SchallerGStreamer Conference 2019 (including GStreamer and PipeWire hackfests)

(Christian Schaller)

GStreamer Conference 2019 banner

GStreamer Conference 2019 in Lyon France


So the GStreamer Conference 2019 is approaching, to be held in Lyon, France between 31st October and 1st November 2019. This year is special as it marks the GStreamer project’s 20th year of existence. I still remember seeing the announcement of GStreamer 0.0.9 which Erik Walthinsen sent to the GNOME announce mailing list. Back then I felt that multimedia support was one of the big gaps in the Linux operating system that needed filling (no, XAnim was nice for its time, but it was not a long term solution :) and GStreamer seemed like the perfect project to fill it. So I joined the GStreamer IRC channel determined to try to help the project succeed however I could. A little over a year later we all met for the first time at GUADEC in Copenhagen, even posing for this exciting team photo.

GStreamer Team at GUADEC Copenhagen in 2001 (we all looked slightly younger and fresher back then.)


Anyway, 20 years later there will be a talk and presentation by GStreamer co-founder Wim Taymans (wearing blue shirt and black pants in the picture above) at the GStreamer Conference commemorating 20 years of GStreamer, detailing how the project went from an idealistic spare-time effort to the multimedia industry juggernaut it is today.

Of course the conference is not going to be focused on the past, as there is a long line-up of great talks covering modern streaming with DASH, HDR support in GStreamer, the latest developments around GStreamer and Rust, virtual reality, Vulkan and more. Actually, on the ‘and more’ topic, Wim Taymans will also do a presentation on PipeWire, the next generation audio and video server, at the GStreamer Conference this year, hopefully demoing some of the great improvements in things like our pro-audio Jack emulation support.
So if you haven’t already, make your way to the GStreamer Conference 2019 website and register for the 10th annual GStreamer Conference!

For those going, be aware that there will also be a joint GStreamer fall hackfest and PipeWire hackfest in the two days following the GStreamer Conference. So be sure to sign up for those if interested. They will be co-located, with participants flowing freely between the two events.

by uraeus at October 07, 2019 03:57 PM

September 23, 2019

GStreamerGStreamer 1.16.1 stable bug fix release

(GStreamer)

The GStreamer team is pleased to announce the first bug fix release in the stable 1.16 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.16.x.

See /releases/1.16/ for the details.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Download tarballs directly here: gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

September 23, 2019 11:00 PM

Christian SchallerFedora Workstation 31 – Whats new

(Christian Schaller)

We are laboring on getting Fedora Workstation 31 out the door next month, with the beta release having been made available last week. So here are some of the highlights of this upcoming release which I and the team hope you will enjoy. Many of these items I already covered in my June blogpost about Fedora Workstation 31, so if you read that one consider this one a status update, as there will be some repeats.

Wayland improvements
Fedora has been leading the migration to Wayland since day one and we are not planning to stop. XWayland-on-demand has been a big effort this cycle, with contributions from a lot of people and companies. The goal is to only need XWayland for legacy X applications, not have it started and running all the time, as that is a waste of system resources; having core functionality still depend on X under Wayland also makes the system more fragile. One piece of this was the systemd user session patches that were originally written by Iain Lane from Canonical. They had been lingering for a bit, so Benjamin Berg took those patches on for this cycle and helped shepherd them over the finish line and get them merged upstream. This work wasn’t a hard requirement for XWayland-on-demand, but it makes it a lot easier to do different things under X and Wayland, which in turn makes moving towards XWayland-on-demand a little simpler to implement. That work will also allow us (in future releases) to do things like only start services under GNOME that are actually needed for your hardware, so for instance if you don’t have a bluetooth adapter in your computer there is no reason to run the bits of GNOME dealing with bluetooth. So expect further resource savings coming from this work over time.

Carlos Garnacho then spent time going through GNOME Shell removing any lingering X dependencies while Olivier Fourdan worked on cleaning up the control center. This work has mostly landed, but it is hidden behind an experimental flag (gsettings set org.gnome.mutter experimental-features "[...,'autostart-xwayland']") in Fedora 31 as we need to mature it a bit more before it’s ready for primetime. But we hope and expect to have it running by default in Fedora Workstation 32.

One example of something that was still requiring X and that is now gone is the keyboard and mouse accessibility features in GNOME 3, which Olivier Fourdan re-implemented and improved for this release. So if anyone out there reading this relies on the hover click accessibility feature, it is actually a lot nicer in Fedora Workstation 31. As seen in the screenshot below you now have this nice little pie animation filling up as it prepares to click, which is a huge improvement over how it used to work.

Click on hover

Click on hover in action

Another item we feel is an important part of reducing the need for XWayland is having Firefox running natively on Wayland. Martin Stransky and Jan Horak have been working tirelessly on trying to ensure Firefox works well on Wayland, and in the Fedora 31 Beta it is running on Wayland by default. However, there are a few bugs discovered that Martin and Jan are trying hard to fix at the moment so we can keep this default for the GA release, but if they miss the deadline we will ship the X backend version in F31 and then move to the Wayland version later on.

In Fedora Workstation 31 Wayland is still disabled by default if you use the Nvidia binary driver. The reason for this is the lack of acceleration under XWayland, meaning that any application depending on GLX, like a lot of games, will just get software GL rendering with the binary NVidia driver. This isn’t something we can resolve on our own, Nvidia has to do the work since it’s their closed source driver, but we have been discussing it regularly with them and we have now been told that they are looking at the work Adam Jackson did some time ago which was specifically aimed at helping them bring their X.org driver to XWayland. We don’t have a timeline yet, but it is being actively looked at and hopefully a proper date can be provided soon. I am actually running Fedora Workstation 31 using the NVidia driver myself at the moment on this laptop, and for those interested in helping dogfood this setup, in preparation for hopefully being able to enable Wayland on NVidia in Fedora Workstation 32, it is a fairly simple thing to do. Under /usr/lib/udev/rules.d/ you will find a file called 61-gdm.rules; just edit that file and comment out (#) the line that reads ‘DRIVER=="nvidia", RUN+="/usr/libexec/gdm-disable-wayland"’ and you will revert to a standard setup where your default session is a Wayland session, but with an X.org session available as a fallback. The more people that run this and report issues the better, as it helps us make this rock solid before releasing it upon the world.

Atomic kernel modesetting
Jonas Ådahl has been hard at work this cycle on adding support for atomic mode setting. This work is not done, but the first parts of it have landed, and it has major long term advantages for us. I asked Jonas to provide a short description of the work and what it will eventually achieve, as I don't think we have articulated that anywhere else yet:

There are two ways for a display server to control the configuration and content of monitors – the old classic Kernel Mode Setting (classic KMS), and the newer atomic Kernel Mode Setting (atomic KMS). The main difference between these two modes of operation is that with atomic KMS, the display server posts transactions containing KMS configuration that are then processed atomically by the kernel, while when using classic KMS, the display server posts configurations command by command, where each monitor is configured by posting multiple commands. The benefits of atomic KMS are for example that the display server will know up front whether a configuration is valid (e.g. enough memory bandwidth), or that the display server can configure multiple aspects of the hardware atomically.

During the cycle leading up to Fedora Workstation 31 the foundations for how mutter (the window manager powering GNOME Shell) can make use of the new atomic KMS API was put in place. What was done was to introduce an internal transactional API for configuring monitors. This will eventually allow us to have much more control over how more advanced monitor features are utilized. For example it will be possible to place client windows directly in hardware overlay planes, meaning we can more often completely bypass full frame compositing when only the content of a single window changes. Another example for what this enables us to do is with color management; we will be able to do seamless switching between managing window color profiles using OpenGL and for instance gamma ramps. Yet another example of what this work opens the door for is framebuffer modifiers, which will among other things potentially result in higher performance with very high resolution monitors.
Finally, an important aspect of the work done related to the new internal KMS API is that it aims to be thread safe, meaning eventually it will be possible to put KMS processing completely in a separate thread. This means that together with e.g. moving input device processing to its own thread it will be possible to get very short latency between mouse movement and the cursor being moved on screen.

QtGNOME improvements
Jan Grulich has continued improving the QtGNOME module to make sure Qt apps integrate as well as possible into Fedora Workstation. His latest updates ensure that the theming keeps up to date with the latest upstream changes in Adwaita, that we have a fully working dark theme, that accessibility theming works and that it works with Flatpaks. Below is a screenshot showing Okular running, allowing you to see how the QtGNOME module affects the look and feel of Qt applications.

Firmware improvements
The LVFS firmware service keeps going from strength to strength. Richard Hughes presented on it during the Open Firmware Conference recently and was approached by a lot of vendors afterwards, both thanking him and Red Hat for the effort and asking about getting more of their hardware supported. New vendors are coming onboard at a rapid pace; for instance Acer joined recently and are planning to support more of their hardware on the LVFS going forward. It is also worth mentioning the GNOME Firmware tool that can now be downloaded from Flathub and which works great on Fedora Workstation 31.

OpenH264 Greatly Improved
The much improved version of OpenH264 will be available soon for Fedora users. This new version adds support for the High and Advanced profiles of H264, which is what most videos found online or produced by your camera would be using. This means you can add H264 playback support to your Fedora Workstation without having to search online for 3rd party repositories like you have had to do up to now. We are also trying to ensure this will eventually be usable by Firefox for video playback. This was work we partnered with Endless and Cisco on, hiring the multimedia experts at Centricular to do it, so it is another great example of cross-company collaboration bringing improved functionality to the community.

Fedora Toolbox
Debarshi Ray has been working on many small improvements and better robustness for Fedora Toolbox going into Fedora Workstation 31. Fedora Toolbox, for those not aware of it yet, is our tool to make doing development using pet containers simple and convenient, providing ease of use features on top of traditional container tools and integration with GNOME Terminal and the GNOME Shell. The version shipping in F31 will be the last shell script based one, as once Fedora Workstation 31 is out we will be going all in on rewriting Fedora Toolbox in Go, in preparation for future development and expansion. I strongly recommend trying it out as it will help open your eyes to the possibilities that using pet containers for development gives you. For instance you can easily set up a RHEL based pet container on your Fedora system to do development work that is meant to be deployed on a RHEL system, or grab our special AI/ML development container for easy access to TensorFlow and similar tools.

Improved Classic mode
Another notable change in this release is the updates to GNOME Classic mode. GNOME Classic mode is a set of extensions to GNOME 3 that makes it look and behave a lot more like GNOME 2, which still has many fans out there. With this release we collected feedback from a group of Classic mode users and tried to improve the experience further, mostly by removing some remaining GNOME 3’isms that didn’t really fit the GNOME Classic user experience, like the overview and the hot corner. The session manager is now also easily accessible in the bottom corner. The theming also got cleaned up a little to remove the last bit of the ‘black’ GNOME 3 theming. That said, I think it is important to remember that this is still GNOME 3 in the end; we are really just showcasing the power of extensions to tweak the user experience in quite fundamental ways here.

GNOME Classic improved

Improved GNOME Classic mode


Better support for non-English users
Fedora Workstation is used all over the globe, but we have not been happy about how our support for picking languages other than English has worked so far. The thing is that if you choose one or more languages at install time, things tended to just work fine, but if you wanted to add a new language afterwards it required jumping onto the command line and figuring out how to install the needed langpacks. In Fedora Workstation 31 Sundeep Anand has worked hard to improve this, so if you choose a new language in the GNOME Control Center in Fedora Workstation 31, the required langpacks should be installed automatically for you.

Fleet Commander
Fleet Commander 0.14.1 is out just in time for Fedora Workstation 31. Fleet Commander is a tool for doing large scale deployments of Fedora and RHEL workstations, allowing you to set system wide profiles. So for instance if you have a GNOME Shell extension everyone in your organization or a specific team inside your organization should have enabled, you can deploy a profile with Fleet Commander ensuring that extension is enabled for those users. Basically any setting within GNOME can be set using this, including network configuration options. There is also support for Firefox and LibreOffice settings in Fleet Commander. The big feature addition of 0.14.1 is that Fleet Commander can now be used with Active Directory, which means that even if your company or university uses Active Directory for their user management, you can now deploy Fedora and RHEL profiles without needing FreeIPA. Fleet Commander is pretty much finished at this point, at least as far as any piece of software can ever be finished. Oliver Gutierrez Suarez is currently working on finishing up some last bits of Firefox support, but we don’t have any major Fleet Commander items on his todo list after that, so if you have been waiting to test it out there are no new major features you need to wait for anymore; it is all there. If you are doing large scale Linux desktop deployments I definitely recommend checking out Fleet Commander; you will find that it makes Fedora a great choice for such deployments.

Pipewire
We are not doing a lot of changes to Pipewire for Fedora Workstation 31, mostly some bugfixes and minor improvements to the video infrastructure it already provides in Fedora 30 for Flatpaks and web browsers. We are planning major changes for Fedora Workstation 32 though, where we in fact plan to ship Pipewire as a tech preview for both Jack and PulseAudio users. The way it will work is that the system will still default to PulseAudio, but we will provide either a script or a UI option to switch over to Pipewire (and back again). There is also a plan to have a core set of ProAudio applications available as Flatpaks for Fedora Workstation 32, tested and verified to work perfectly with Pipewire; the apps currently planned to be included are Ardour, Carla, a2jmidid, Hydrogen, Qtractor and Patroneo, but if interested contributors join the effort we could have even more. Then for Fedora Workstation 33 the idea is to ship with Pipewire as the default audio handler, but with some way for users to switch back to PulseAudio if they have a need, not unlike how the setup currently is with Wayland and X.org in Fedora. Wim Taymans will also be attending the Sonoj conference in Cologne, Germany at the end of October to discuss Pipewire with many members of the Linux ProAudio community and hopefully help prepare them for a future where Fedora Workstation is the perfect home for ProAudio users and developers.

Sysprof
Christian Hergert spent some cycles this round on improving the Sysprof tool, as it was becoming clear that to keep improving GNOME Shell and general desktop performance going forward we need better data and the ability to find the bottlenecks. Tools like Sysprof often end up being the unsung heroes of the system, but as we continue improving overall GNOME performance and resource usage over the next few years the revamped Sysprof tool will be a big part of that story.

Sysprof

Much improved Sysprof tool

Silverblue
A lot of the items we work on are part of our vision around Silverblue, a Linux desktop OS built on the idea of an immutable core image. We often mention the theoretical advantages that such a setup with an immutable OS brings, but as I upgraded from F30 to the F31 beta on my RPM based laptop (I have a separate machine where I run Silverblue) I actually hit the exact kind of issue that Silverblue can help us and our users avoid. What happened was that after my upgrade I suddenly had no Wayland session anymore, just the fallback X.org session. After quite a bit of fault searching I discovered that the reason for this was that I had been testing Valve’s ACO shader compiler on F30. These packages had a newer version number than the F31 packages and thus were not overwritten as part of the upgrade. Unfortunately the EGL package that came as part of that repository did not work well on F31 and thus the Wayland session failed. Once I did a distro sync and forced all packages to be the actual F31 versions things worked correctly, but it did illustrate the challenges with systems where different parts of the core can and will get updated at different times. With a single well tested core OS image these kinds of problems will not happen. That said, being able to test such things as ACO is valuable and useful, and luckily OSTree and Silverblue do offer functionality for installing such things in a clean and non-damaging way through what is known as package layering. When you install new packages like that using package layering they will only last until your next reboot; after you reboot you’re back to a clean, original-state system. Of course if you really want to keep some experimental packages around there are other things you can do too, like overriding, but for simple testing like I did with ACO, package layering will provide you with a simple and safe way to do that.

We realize that Silverblue is a major change in how a Linux distro is ‘supposed’ to work, so we are taking our time with it to ensure we do it right and that we have made sure applications and tools work in a way that functions well on an immutable OS. So if you are interested I do recommend that you grab the Fedora 31 Silverblue image and give it a spin, but we are still working on polishing the experience so don’t expect it to be a seamless experience at this point in time. Of course as things like Flatpaks, Fedora Toolbox and a host of smaller issues get improved upon we do believe this will be such an overall improvement over an ‘old fashioned’ linux distro that you will be asking yourself why the Linux world didn’t do this years ago.

Improved performance
A lot of work has gone into improving the general performance of GNOME 3.34. The GNOME Shell team has been very active and is a great example of a large number of developers working together from different backgrounds. So this release features a lot of great performance work by Daniel van Vugt from Canonical and by Georges Stavracas from Endless, for instance. The Red Hat team has focused on providing patch review and feedback and on working on bigger long term changes and enablers, like Christian Hergert’s work on Sysprof, Jonas Ådahl’s work on atomic mode setting and Benjamin Berg’s work on systemd user session support. All in all I think you will find that Fedora Workstation 31 with GNOME 3.34 provides a faster and smoother experience, an experience we will continue to build upon going forward as some of these long term efforts start paying off.

Sonic Boom

Performance is better than ever

Summary
So this has been a roundup of some of the core items you should look forward to in Fedora Workstation 31. There are other items coming in this release too, like the Miracast GNOME Network Display application that Benjamin Berg has written, more Fedora Flatpaks available than ever before, and more. We also have a lot of interesting items coming up in Fedora Workstation 32, like Bastien Nocera’s work on improving low memory handling. So stay tuned.

by uraeus at September 23, 2019 05:45 PM

September 02, 2019

Sebastian Pölsterlscikit-survival 0.10 released

This release of scikit-survival adds two features that are standard in most software for survival analysis, but were missing so far:

  1. CoxPHSurvivalAnalysis now has a ties parameter that allows you to choose between Breslow’s and Efron’s likelihood for handling tied event times. Previously, only Breslow’s likelihood was implemented and it remains the default. If you have many tied event times in your data, you can now select Efron’s likelihood with ties="efron" to get better estimates of the model’s coefficients (see the short example after this list).
  2. A compare_survival function has been added. It can be used to assess whether survival functions across 2 or more groups differ.
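
As a quick illustration of the ties option from the first item above, here is a minimal sketch that fits a Cox model with Efron’s likelihood on the bundled Veterans’ Administration Lung Cancer data. The choice of dataset and the OneHotEncoder preprocessing step are illustrative assumptions on my part, not a prescribed workflow.

from sksurv.datasets import load_veterans_lung_cancer
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.preprocessing import OneHotEncoder

data_x, data_y = load_veterans_lung_cancer()
X = OneHotEncoder().fit_transform(data_x)  # encode categorical features numerically

# ties="efron" selects Efron's likelihood; omit it to keep the default Breslow likelihood.
estimator = CoxPHSurvivalAnalysis(ties="efron")
estimator.fit(X, data_y)
print(estimator.coef_)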

To illustrate the use of compare_survival, let’s consider the Veterans’ Administration Lung Cancer Trial. Here, we are considering the Celltype feature and we want to know whether the tumor type impacts survival. We can visualize the survival function for each subgroup using the Kaplan-Meier estimator.

import matplotlib.pyplot as plt

from sksurv.datasets import load_veterans_lung_cancer
from sksurv.nonparametric import kaplan_meier_estimator

data_x, data_y = load_veterans_lung_cancer()
group_indicator = data_x.loc[:, "Celltype"]
groups = group_indicator.unique()

for group in groups:
    group_y = data_y[group_indicator == group]
    time, surv_prob = kaplan_meier_estimator(
        group_y["Status"],
        group_y["Survival_in_days"])
    plt.step(time, surv_prob, where="post",
             label="Celltype = {}".format(group))

plt.xlabel("time $t$")
plt.ylabel("est. probability of survival")
plt.ylim(0, 1)
plt.grid(True)
plt.legend()
Kaplan-Meier estimates of survival function.

The figure indicates that patients with adenocarcinoma (green line) do not survive beyond 200 days, whereas patients with squamous cell lung cancer (blue line) can survive several years. We can determine whether this difference is indeed statistically significant by performing a non-parametric log-rank test. It groups patients according to cell type and compares the estimated group-specific hazard rate with the pooled hazard rate. Under the null hypothesis, the hazard rate of groups is equal for all time points. The alternative hypothesis is that the hazard rate of at least one group differs from the others at some time.

from sksurv.compare import compare_survival

chisq, pvalue, stats, covar = compare_survival(
    data_y, group_indicator, return_stats=True)

The resulting test statistic is $\chi^2 = 25.40$, which corresponds to a highly significant P-value of $1.3\cdot{10}^{-5}$. In addition, because we specified return_stats=True, we can look at the group-specific statistics.

           counts  observed  expected  statistic
group
adeno          27        26     15.69      10.31
large          27        26     34.55      -8.55
smallcell      48        45     30.10      14.90
squamous       35        31     47.65     -16.65

The column counts lists the size of each group and is followed by the number of observed and expected events. The last column statistic is the difference between the observed and expected number of events from which the overall $\chi^2$ statistic is computed.

Download

The latest version of scikit-survival can be obtained via conda or pip. Pre-built conda packages are available for Linux, OSX and Windows:

 conda install -c sebp scikit-survival

Alternatively, you can install it from source via pip:

 pip install -U scikit-survival

September 02, 2019 04:06 PM

August 21, 2019

Bastien Noceralow-memory-monitor: new project announcement

(Bastien Nocera) I'll soon be flying to Greece for GUADEC but wanted to mention one of the things I worked on the past couple of weeks: the low-memory-monitor project is off the ground, though not production-ready.

low-memory-monitor, as its name implies, monitors the amount of free physical memory on the system and will shoot off signals to interested user-space applications, usually session managers, or sandboxing helpers, when that memory runs low, making it possible for applications to shrink their memory footprints before it's too late either to recover a usable system, or avoid taking a performance hit.

It's similar to Android's lowmemorykiller daemon, Facebook's oomd, and Endless' psi-monitor, amongst others.

Finally a GLib helper and a Flatpak portal are planned to make it easier for applications to use, with an API similar to iOS' or Android's.

Combined with work in Fedora to use zswap and remove the use of disk-backed swap, this should make most workstation uses more responsive and enjoyable.

by Bastien Nocera (noreply@blogger.com) at August 21, 2019 11:57 AM

August 12, 2019

Robert McQueenFlathub, brought to you by…

(Robert McQueen)

Over the past 2 years Flathub has evolved from a wild idea at a hackfest to a community of app developers and publishers making over 600 apps available to end-users on dozens of Linux-based OSes. We couldn’t have gotten anything off the ground without the support of the 20 or so generous souls who backed our initial fundraising, and to make the service a reality since then we’ve relied on the contributions of dozens of individuals and organisations such as Codethink, Endless, GNOME, KDE and Red Hat. But for our day to day operations, we depend on the continuous support and generosity of a few companies who provide the services and resources that Flathub uses 24/7 to build and deliver all of these apps. This post is about saying thank you to those companies!

Running the infrastructure

Mythic Beasts Logo

Mythic Beasts is a UK-based “no-nonsense” hosting provider who provide managed and un-managed co-location, dedicated servers, VPS and shared hosting. They are also conveniently based in Cambridge where I live, and very nice people to have a coffee or beer with, particularly if you enjoy talking about IPv6 and how many web services you can run on a rack full of Raspberry Pis. The “heart” of Flathub is a physical machine donated by them which originally ran everything in separate VMs – buildbot, frontend, repo master – and they have subsequently increased their donation with several VMs hosted elsewhere within their network. We also benefit from huge amounts of free bandwidth, backup/storage, monitoring, management and their expertise and advice at scaling up the service.

Starting with everything running on one box in 2017 we quickly ran into scaling bottlenecks as traffic started to pick up. With Mythic’s advice and a healthy donation of 100s of GB / month more of bandwidth, we set up two caching frontend servers running in virtual machines in two different London data centres to cache the commonly-accessed objects, shift the load away from the master server, and take advantage of the physical redundancy offered by the Mythic network.

As load increased and we brought a CDN online to bring the content closer to the user, we also moved the Buildbot (and its associated Postgres database) to a VM hosted at Mythic in order to offload as much IO bandwidth as possible from the repo server, to keep up sustained HTTP throughput during update operations. This helped significantly but we are in discussions with them about a yet larger box with a mixture of disks and SSDs to handle the concurrent read and write load that we need.

Even after all of these changes, we keep the repo master on one, big, physical machine with directly attached storage because repo update and delta computations are hugely IO intensive operations, and our OSTree repos contain over 9 million inodes which get accessed randomly during this process. We also have a physical HSM (a YubiKey) which stores the GPG repo signing key for Flathub, and it’s really hard to plug a USB key into a cloud instance, and know where it is and that it’s physically secure.

Building the apps

Our first build workers were under Alex’s desk, in Christian’s garage, and a VM donated by Scaleway for our first year. We still have several ARM workers donated by Codethink, but at the start of 2018 it became pretty clear within a few months that we were not going to keep up with the growing pace of builds without some more serious iron behind the Buildbot. We also wanted to be able to offer PR and test builds, beta builds, etc — all of which multiplies the workload significantly.

Packet Logo

Thanks to an introduction by the most excellent Jorge Castro and the approval and support of the Linux Foundation’s CNCF Infrastructure Lab, we were able to get access to an “all expenses paid” account at Packet. Packet is a “bare metal” cloud provider — like AWS except you get entire boxes and dedicated switch ports etc to yourself – at a handful of main datacenters around the world with a full range of server, storage and networking equipment, and a larger number of edge facilities for distribution/processing closer to the users. They have an API and a magical provisioning system which means that at the click of a button or one method call you can bring up all manner of machines, configure networking and storage, etc. Packet is clearly a service built by engineers for engineers – they are smart, easy to get hold of on e-mail and chat, share their roadmap publicly and set priorities based on user feedback.

We currently have 4 Huge Boxes (2 Intel, 2 ARM) from Packet which do the majority of the heavy lifting when it comes to building everything that is uploaded, and also use a few other machines there for auxiliary tasks such as caching source downloads and receiving our streamed logs from the CDN. We also used their flexibility to temporarily set up a whole separate test infrastructure (a repo, buildbot, worker and frontend on one box) while we were prototyping recent changes to the Buildbot.

A special thanks to Ed Vielmetti at Packet who has patiently supported our requests for lots of 32-bit compatible ARM machines, and for his support of other Linux desktop projects such as GNOME and the Freedesktop SDK who also benefit hugely from Packet’s resources for build and CI.

Delivering the data

Even with two redundant / load-balancing front end servers and huge amounts of bandwidth, OSTree repos have so many files that if those servers are too far away from the end users, the latency and round trips cause a serious problem with throughput. In the end you can’t distribute something like Flathub from a single physical location – you need to get closer to the users. Fortunately the OSTree repo format is very efficient to distribute via a CDN, as almost all files in the repository are immutable.

Fastly Logo

After a very speedy response to a plea for help on Twitter, Fastly – one of the world’s leading CDNs – generously agreed to donate free use of their CDN service to support Flathub. All traffic to the dl.flathub.org domain is served through the CDN, and automatically gets cached at dozens of points of presence around the world. Their service is frankly really really cool – the configuration and stats are really powerful, unlike any other CDN service I’ve used. Our configuration allows us to collect custom logs which we use to generate our Flathub stats, and to define edge logic in Varnish’s VCL which we use to allow larger files to stream to the end user while they are still being downloaded by the edge node, improving throughput. We also use their API to purge the summary file from their caches worldwide each time the repository updates, so that it can stay cached for longer between updates.

To get a feeling for how well this works, here are some statistics: the Flathub main repo is 929 GB, of which 73 GB are static deltas and 1.9 GB are screenshots. It contains 7280 refs for 640 apps (plus runtimes and extensions) over 4 architectures. Fastly is serving the dl.flathub.org domain fully cached, with a cache hit rate of ~98.7%. Averaging 9.8 million hits and 464 GB downloaded per hour, Flathub uses between 1-2 Gbps of sustained bandwidth depending on the time of day. Here are some nice graphs produced by the Fastly management UI (the numbers are per-hour over the last month):

Graph showing the requests per hour over the past month, split by hits and misses. Graph showing the data transferred per hour over the past month.

To buy the scale of services and support that Flathub receives from our commercial sponsors would cost tens if not hundreds of thousands of dollars a month. Flathub could not exist without Mythic Beasts, Packet and Fastly‘s support of the free and open source Linux desktop. Thank you!

by ramcq at August 12, 2019 03:31 PM

August 08, 2019

Bastien Noceralibfprint 1.0 (and fprintd 0.9.0)

(Bastien Nocera) After more than a year of work libfprint 1.0 has just been released!

It contains a lot of bug fixes for a number of different drivers, which makes it a worthwhile update for any stable or unstable release of your OS.

There was a small ABI break between versions 0.8.1 and 0.8.2, which means that any dependency (really just fprintd) will need to be recompiled. That works out well, seeing as we also have a new fprintd release which also fixes a number of bugs.

Benjamin Berg will take over maintenance and development of libfprint with the goal of having a version 2 in the coming months that supports more types of fingerprint readers that cannot be supported with the current API.

From my side, the next step will be some much needed modernisation for fprintd, both in terms of code as well as in the way it interacts with users.

by Bastien Nocera (noreply@blogger.com) at August 08, 2019 02:53 PM

August 05, 2019

Phil NormandReview of the Igalia Multimedia team Activities (2019/H1)

(Phil Normand)

This blog post takes a look back at the various Multimedia-related tasks the Igalia Multimedia team was involved in during the first half of 2019.

GStreamer Editing Services

Thibault added support for the OpenTimelineIO open format for editorial timeline information. Having many editorial timeline information formats supported by OpenTimelineIO reduces …

by Philippe Normand at August 05, 2019 01:30 PM

Jean-François Fortin TamIt’s 2019, and I’m starting an email mailing list.

As I’m resurfacing, looking at my writing backlog and seeing over a dozen blog posts to finish and publish in the coming months, I’m thinking that now would be a good time to offer readers a way to be notified of new publications without having to manually check my website all the time or to use specialized tools. So, I’m starting a notification mailing list (a.k.a. “newsletter”).

What kind of topics will be covered?

In the past, my blog has mostly been about technology (particularly Free and Open-Source software) and random discoveries in life. Here are some examples of previous blog posts:

In the future, I will likely continue to cover technology-related subjects, but also hope to write more often on findings and insights from “down to earth” businesses I’ve worked with, so that you can see more than just a single industry.

Therefore, my publications will be about:

  • business (management, growth, entrepreneurship, market positioning, public relations, branding, etc.);
  • society (sustainability, social psychology, design, public causes, etc. Not politics.);
  • technology;
  • life & productivity improvement (“lifehacking”).

If you want to subscribe right away and don’t care to read about the whole “why” context, here’s a form for this 👉


Otherwise, keep reading below 👇; the rest of this blog post explains the logic behind all this (why a newsletter in this day and age), and answers various questions you might have.

“Why go through all that effort, Jeff?”

The idea here is to provide more convenience for some of my readers. It took me a long time to decide to offer this, as I’ll actually be spending more effort (and even money) managing this, going the extra mile to provide relevant information, and sometimes providing information that is not even on the blog.

Why bother with this? For a myriad of reasons:

  • It allows keeping in touch with my readership in a more intimate manner
  • It allows providing digests and reminders/retrospectives, from where people can choose to read more, effectively allowing “asynchronous” reading. If I were to do blog retrospectives on the blog, I think that might dilute the contents and get boring pretty fast.
  • It gives me an idea of how many people are super interested in what I’m publishing (which can be quite motivating)
  • It lets me cover all my publishing channels at once: the blog, my YouTube channel, etc.
  • It gives people the opportunity to react and interact with me more directly (not everybody wants to post a public comment, and my blog automatically disables commenting on older posts to prevent spam).

“But… Why email?!”

I realize it might look a bit surprising to start a newsletter in 2019—instead of ten years ago—but it probably is more relevant now than ever and, with experience, I am going to do a much better job at it than I would’ve a decade ago.

In over 15 years of blogging, I’ve seen technologies and social networks come and go. One thing hasn’t changed in that timeframe, however: email.

Email certainly has its faults but is the most pervasive and enduring distributed communication system out there, built on open standards. Pretty much everyone uses it. We were using email in the previous millennium, we’re using it today, and I suspect we’ll keep using it for a good long while.

“Don’t we have RSS/Atom feeds already?”

While I’m a big fan of syndication feeds (RSS/Atom) and using a feed reader myself (Liferea), these tools and technologies had their popularity peak around a decade ago and have remained a niche, used mostly by journalists and computer geeks.

  • Nobody around me in the “real world” uses them, and most people struggle to understand the concept and its benefits.
  • Even most geeks are unaware of feed syndication. Before I fully grasped what the deal with RSS was, I spent some years creating a GNOME desktop application to watch web pages for me. Ridiculous, I know!
  • And even then, many people prefer not having to use a dedicated application for this.

So, while I’m always going to keep the feeds available on my blog, I realize that most people prefer subscribing via email.

“What about social media?”

Social media creates public buzz, but doesn’t have the same usefulness and staying power.

  • As a true asynchronous medium, email provides convenience and flexibility for the reader compared to the evanescent nature of The Vortex. An email is personal, private, can be filed and consumed later, easily retrieved, unlike the messy firehose that is social media.
  • Social media is evanescent, both in content and in platforms:
    • Social networks are firehoses; they tend to be noisy (because they want to lure you into the Vortex), cluttered, chaotic, un-ordered. They are also typically proprietary and centralized in the hands of big corporations that mine your data & sociopsychological profile to sell it to the highest bidder.
    • There is no guarantee that any given social network is going to last more than a couple years (remember Google+? Or how Facebook used to be cool among youngsters until parents and aunts joined and now people are taking refuge to Twitter/Snapchat/Instagram/whatever?).
  • FLOSS & decentralized social networks? That doesn’t help reach normal people; those platforms barely attract 0.0125% of the population.
  • Instant messaging and chatrooms? Same issues. Besides, there are too many damned messaging systems to count these days (IRC, Signal, FB Messenger, WhatsApp, Telegram, Discord, Snapchat, Slack, Matrix/Riot/Fractal, oh my… stop this nonsense Larry, why don’t you just give me a call?), to the point where some are just leaving all that behind to fallback on email.

Like my blog and website, my mailing list will still be useful and available to me as the years pass. You can’t say that “with certainty” of any of the current social platforms out there.

What’s the catch? 🤔

There is no catch.

  • You get a summary of my contents delivered to your mailbox every now and then, to be read when convenient, without lifting a finger.
  • I probably get more people to enjoy my publications, and that makes me happy. Sure, it’s more work for me, but hey, that’s life (you can send me a fat paycheck to reward me, if you want 😎)

This mailing list is private and owned by me as an individual, and I am not selling your info to anybody. See also my super amazing privacy policy if you care.

I won’t email too often (maybe once a month or per quarter, I suspect), because I’ve got a million things on my plate already. We’ll see how it goes. Subscribing is voluntary, and you can unsubscribe anytime if you find me annoying (hopefully not).

Questions? Comments/feedback? Suggestions? Feel free to comment on this blog post or… send me an email 😉

The post It’s 2019, and I’m starting an email mailing list. appeared first on The Open Sourcerer.

by Jeff at August 05, 2019 12:44 PM

July 29, 2019

Sebastian PölsterlSurvival Analysis for Deep Learning

Most machine learning algorithms have been developed to perform classification or regression. However, in clinical research we often want to estimate the time to an event, such as death or recurrence of cancer, which leads to a special type of learning task that is distinct from classification and regression. This task is termed survival analysis, but is also referred to as time-to-event analysis or reliability analysis. Many machine learning algorithms have been adapted to perform survival analysis: Support Vector Machines, Random Forest, or Boosting. It has only been recently that survival analysis entered the era of deep learning, which is the focus of this post.

You will learn how to train a convolutional neural network to predict time to a (generated) event from MNIST images, using a loss function specific to survival analysis. The first part will cover some basic terms and quantities used in survival analysis (feel free to skip this part if you are already familiar). In the second part, we will generate synthetic survival data from MNIST images and visualize it. In the third part, we will briefly revisit the most popular survival model of them all and learn how it can be used as a loss function for training a neural network. Finally, we put all the pieces together and train a convolutional neural network on MNIST and predict survival functions on the test data.

The notebook to reproduce the results is available on GitHub, or you can run it directly using Google Colaboratory.

Primer on Survival Analysis

The objective in survival analysis is to establish a connection between covariates and the time of an event. The name survival analysis originates from clinical research, where predicting the time to death, i.e., survival, is often the main objective. Survival analysis is a type of regression problem (one wants to predict a continuous value), but with a twist. It differs from traditional regression by the fact that parts of the training data can only be partially observed – they are censored.

As an example, consider a clinical study that has been carried out over a 1 year period as in the figure below.

Patient A was lost to follow-up after three months with no recorded event, patient B experienced an event four and a half months after enrollment, patient C withdrew from the study two months after enrollment, patient D experienced an event before the end of the study, and patient E did not experience any event before the study ended. Consequently, the exact time of an event could only be recorded for patients B and D; their records are uncensored. For the remaining patients it is unknown whether they did or did not experience an event after termination of the study. The only valid information that is available for patients A, C, and E is that they were event-free up to their last follow-up. Therefore, their records are censored.

Formally, each patient record consists of the time $t>0$ when an event occurred or the time $c>0$ of censoring. Since censoring and experiencing an event are mutually exclusive, it is common to define an event indicator $\delta \in \{0;1\}$ and the observable survival time $y>0$. The observable time $y$ of a right censored time of event is defined as

$$ y = \min(t, c) = \begin{cases} t & \text{if } \delta = 1 , \\ c & \text{if } \delta = 0 . \end{cases} $$

Consequently, survival analysis demands for models that take partially observed, i.e., censored, event times into account.

Basic Quantities

Typically, the survival time is modelled as a continuous non-negative random variable $T$, from which basic quantities for time-to-event analysis can be derived, most importantly, the survival function and the hazard function.

  • The survival function $S(t)$ returns the probability of survival beyond time $t$ and is defined as $S(t) = P(T > t)$. It is non-increasing with $S(0) = 1$, and $S(\infty) = 0$.
  • The hazard function $h(t)$ denotes an approximate probability (it is not bounded from above) that an event occurs in the small time interval $[t; t + \Delta t[$, under the condition that an individual would remain event-free up to time $t$: $$ h(t) = \lim_{\Delta t \rightarrow 0} \frac{P(t \leq T < t + \Delta t \mid T \geq t)}{\Delta t} \geq 0 $$ Alternative names for the hazard function are conditional failure rate, conditional mortality rate, or instantaneous failure rate. In contrast to the survival function, which describes the absence of an event, the hazard function provides information about the occurrence of an event.
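
The two quantities are directly linked: the survival function can be written in terms of the cumulative hazard, a standard identity that is useful to keep in mind when we later go from a hazard-based model to predicted survival functions: $$ S(t) = \exp\left( - \int_0^t h(u) \, du \right) . $$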

Generating Synthetic Survival Data from MNIST

To start off, we are using images from the MNIST dataset and will synthetically generate survival times based on the digit each image represents. We associate a survival time (or risk score) with each class of the ten digits in MNIST. First, we randomly assign each class label to one of four overall risk groups, such that some digits will correspond to better and others to worse survival. Next, we generate risk scores that indicate how big the risk of experiencing an event is, relative to each other.

             risk_score  risk_group
class_label
0                 3.071           3
1                 2.555           2
2                 0.058           0
3                 1.790           1
4                 2.515           2
5                 3.031           3
6                 1.750           1
7                 2.475           2
8                 0.018           0
9                 2.435           2

We can see that class labels 2 and 8 belong to risk group 0, which has the lowest risk (close to zero). Risk group 1 corresponds to a risk score of about 1.7, risk group 2 of about 2.5, and risk group 3 is the group with the highest risk score of about 3.

To generate survival times from risk scores, we are going to follow the protocol of Bender et al. We choose the exponential distribution for the survival time. Its probability density function is $f(t\,|\,\lambda) = \lambda \exp(-\lambda t)$, where $\lambda > 0$ is a scale parameter that is the inverse of the expectation: $E(T) = \frac{1}{\lambda}$. The exponential distribution results in a relatively simple time-to-event model with no memory, because the hazard rate is constant: $h(t) = \lambda$. For more complex cases, refer to the paper by Bender et al.

Here, we choose $\lambda$ such that the mean survival time is 365 days. Finally, we randomly censor survival times by drawing times of censoring from a uniform distribution such that we approximately obtain the desired amount of 45% censoring. The generated survival data comprises an observed time and a boolean event indicator for each MNIST image.
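
The linked notebook contains the actual data generation code; the following is only a rough sketch of the idea, where the risk_scores array (one value per image), the choice of censoring bound, and the random seed are all assumptions of this illustration.

import numpy as np

rnd = np.random.RandomState(seed=89)

# Assumed input: risk_scores, one value per MNIST image.
mean_survival_time = 365.0
baseline_hazard = 1.0 / mean_survival_time

# Exponential survival times: the hazard of sample i is baseline_hazard * exp(risk_scores[i]).
scale = baseline_hazard * np.exp(risk_scores)
u = rnd.uniform(low=0, high=1, size=risk_scores.shape[0])
t = -np.log(u) / scale  # inverse transform sampling of the exponential distribution

# Random right censoring: censoring times drawn uniformly up to a crude upper bound.
upper_bound = np.quantile(t, 1.0 - 0.45)
c = rnd.uniform(low=t.min(), high=upper_bound, size=risk_scores.shape[0])

observed_event = t <= c                      # True if the event was actually observed
observed_time = np.where(observed_event, t, c)

A real implementation would tune the upper bound of the censoring distribution until the observed censoring rate is close to the desired 45%; the quantile above is only a crude stand-in.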

We can use the generated censored data and estimate the survival function $S(t)$ to see what the risk scores actually mean in terms of survival. We stratify the training data by class label, and estimate the corresponding survival function using the non-parametric Kaplan-Meier estimator.

Classes 0 and 5 (dotted lines) correspond to risk group 3, which has the highest risk score. The corresponding survival functions drop most quickly, which is exactly what we wanted. On the other end of the spectrum are classes 2 and 8 (solid lines) belonging to risk group 0 with the lowest risk.

Evaluating Predictions

One important aspect of survival analysis is that both the training data and the test data are subject to censoring, because we are unable to observe the exact time of an event no matter how the data was split. Therefore, performance measures need to account for censoring. The most widely used performance measure is Harrell’s concordance index. Given a set of (predicted) risk scores and observed times, it checks whether the ordering by risk scores is concordant with the ordering by actual survival time. While Harrell’s concordance index is widely used, it has its flaws, in particular when data is highly censored. Please refer to my previous post on evaluating survival models for more details.

We can take the risk score from which we generated survival times to check how good a model would perform if we knew the actual risk score.

cindex = concordance_index_censored(event_test, time_test, risk_scores[y_train.shape[0]:])
print(f"Concordance index on test data with actual risk scores: {cindex[0]:.3f}")
Concordance index on test data with actual risk scores: 0.705

Surprisingly, we do not obtain a perfect result of 1.0. The reason for this is that generated survival times are randomly distributed based on risk scores and not deterministic functions of the risk score. Therefore, any model we will train on this data should not be able to exceed this performance value.

Cox’s Proportional Hazards Model

By far the most widely used model to learn from censored survival data is Cox’s proportional hazards model. It models the hazard function $h(t_i)$ of the $i$-th subject, conditional on the feature vector $\mathbf{x}_i \in \mathbb{R}^p$, as the product of an unspecified baseline hazard function $h_0$ (more on that later) and an exponential function of the linear model $\mathbf{x}_i^\top \mathbf{\beta}$:

$$ h(t | x_{i1}, \ldots, x_{ip}) = h_0(t) \exp \left( \sum_{j=1}^p x_{ij} \beta_j \right) \Leftrightarrow \log \frac{h(t | \mathbf{x}_i)}{h_0 (t)} = \mathbf{x}_i^\top \mathbf{\beta} , $$

where $\mathbf{\beta} \in \mathbb{R}^p$ are the coefficients associated with each of the $p$ features, and no intercept term is included in the model. The key is that the hazard function is split into two parts: the baseline hazard function $h_0$ only depends on the time $t$, whereas the exponential is independent of time and only depends on the covariates $\mathbf{x}_i$.

Cox’s proportional hazards model is fitted by maximizing the partial likelihood function, which is based on the probability that the $i$-th individual experiences an event at time $t_i$, given that there is one event at time point $t_i$. As we will see, by specifying the hazard function as above, the baseline hazard function $h_0$ can be eliminated and does not need be defined for finding the coefficients $\mathbf{\beta}$. Let $\mathcal{R}_i = \{ j\,|\,y_j \geq y_i \}$ be the risk set, i.e., the set of subjects who remained event-free shortly before time point $y_i$, and $I(\cdot)$ the indicator function, then we have

$$ \begin{aligned} &P(\text{subject experiences event at $y_i$} \mid \text{one event at $y_i$}) \\ &\quad= \frac{P(\text{subject experiences event at $y_i$} \mid \text{event-free up to $y_i$})}{P(\text{one event at $y_i$} \mid \text{event-free up to $y_i$})} \\ &\quad= \frac{h(y_i \mid \mathbf{x}_i)}{\sum_{j=1}^n I(y_j \geq y_i)\, h(y_i \mid \mathbf{x}_j)} \\ &\quad= \frac{h_0(y_i) \exp(\mathbf{x}_i^\top \mathbf{\beta})}{\sum_{j=1}^n I(y_j \geq y_i)\, h_0(y_i) \exp(\mathbf{x}_j^\top \mathbf{\beta})} \\ &\quad= \frac{\exp(\mathbf{x}_i^\top \mathbf{\beta})}{\sum_{j \in \mathcal{R}_i} \exp(\mathbf{x}_j^\top \mathbf{\beta})} . \end{aligned} $$

By multiplying the conditional probabilities from above over all patients who experienced an event and taking the logarithm, we obtain the log partial likelihood function, which is maximized to estimate the coefficients:

$$ \widehat{\mathbf{\beta}} = \arg\max_{\mathbf{\beta}}~ \log\,PL(\mathbf{\beta}), \qquad \log\,PL(\mathbf{\beta}) = \sum_{i=1}^n \delta_i \left[ \mathbf{x}_i^\top \mathbf{\beta} - \log \left( \sum_{j \in \mathcal{R}_i} \exp( \mathbf{x}_j^\top \mathbf{\beta}) \right) \right] , $$

where $\delta_i$ is the event indicator of the $i$-th subject.
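
As a point of reference, the linear model can be fitted directly with scikit-survival. The snippet below is only a sketch: it assumes a tabular feature matrix X is available (for MNIST, the images would first have to be flattened) and uses Surv.from_arrays to build the structured array that scikit-survival expects.

from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.util import Surv

# X is a hypothetical (n_samples, n_features) feature matrix
y = Surv.from_arrays(event=event_train, time=time_train)
cox_model = CoxPHSurvivalAnalysis().fit(X, y)
risk_scores_linear = cox_model.predict(X)  # the linear predictor x^T beta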

Non-linear Survival Analysis with Neural Networks

Cox’s proportional hazards model as described above is a linear model, i.e., the predicted risk score is a linear combination of features. However, the model can easily be extended to the non-linear case by just replacing the linear predictor with the output of a neural network with parameters $\mathbf{\Theta}$.
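
Writing $f(\mathbf{x}_i; \mathbf{\Theta})$ for the network output (a notation introduced here for illustration), the hazard becomes

$$ h(t \mid \mathbf{x}_i) = h_0(t) \exp \left( f(\mathbf{x}_i; \mathbf{\Theta}) \right) , $$

and the partial likelihood from above is maximized with respect to the network parameters $\mathbf{\Theta}$ instead of the coefficients $\mathbf{\beta}$.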

This has been realized early on and was originally proposed in the work of Faraggi and Simon back in 1995. Faraggi and Simon explored multilayer perceptrons, but the same loss can be used in combination with more advanced architectures such as convolutional or recurrent neural networks. Therefore, it is natural to also use the same loss function in the era of deep learning. However, this transition is not as easy as it may seem and comes with some caveats, both for training and for evaluation.

Computing the Loss Function

When implementing the Cox PH loss function, the problematic part is the inner sum over the risk set: $\sum_{j \in \mathcal{R}_i} \exp( \mathbf{x}_j^\top \mathbf{\beta})$. The risk set $\mathcal{R}_i = \{ j\,|\,y_j \geq y_i \}$ is defined with respect to the ordering of the observed times $y_i$, so a naive implementation that recomputes this sum for every sample has quadratic complexity. Ideally, we want to sort the data once in descending order by survival time and then incrementally update the inner sum, which brings the cost of computing the loss down to linear (ignoring the time for sorting).
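
As an illustration, here is a small NumPy sketch of this idea for a single, fully available batch; it assumes there are no tied survival times and at least one uncensored sample (ties and masking are handled by the risk-set matrix approach used below):

import numpy as np

def coxph_loss_sorted(time, event, pred):
    """Negative log partial likelihood, linear in the number of samples after sorting."""
    order = np.argsort(-time)            # sort in descending order by observed time
    pred_sorted = pred[order]
    event_sorted = event[order].astype(float)
    # the cumulative sum of exp(pred) is the risk-set sum for each sample
    log_risk = np.log(np.cumsum(np.exp(pred_sorted)))
    return -np.sum(event_sorted * (pred_sorted - log_risk)) / event_sorted.sum()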

Another problem is that the risk set of the subject with the smallest uncensored survival time spans the whole dataset. This is usually impractical, because we may not be able to keep the whole dataset in GPU memory. If we use mini-batches instead, as is the norm, (i) we cannot compute the exact loss, because we may not have access to all samples in the risk set, and (ii) we need to sort each mini-batch by observed time, instead of sorting the whole data once.

For practical purposes, computing the Cox PH loss over a mini-batch is usually fine, as long as the batch contains several uncensored samples; otherwise the outer sum in the partial likelihood function would be over an empty set. Here, we implement the sum over the risk set by multiplying the exponential of the predictions (as a row vector) by a square boolean matrix that contains each sample’s risk set as one of its rows. The sum over the risk set of each sample is then equivalent to a row-wise summation.

import numpy as np
import tensorflow as tf


class InputFunction:
    …
    def _get_data_batch(self, index):
        """Compute risk set for samples in batch."""
        time = self.time[index]
        event = self.event[index]
        images = self.images[index]
        labels = {
            "label_event": event.astype(np.int32),
            "label_time": time.astype(np.float32),
            "label_riskset": _make_riskset(time)
        }
        return images, labels
    …


def _make_riskset(time):
    """Compute a boolean matrix where the i-th row is the risk set of the i-th sample."""
    assert time.ndim == 1, "expected 1D array"
    # sort in descending order
    o = np.argsort(-time, kind="mergesort")
    n_samples = len(time)
    risk_set = np.zeros((n_samples, n_samples), dtype=np.bool_)
    for i_org, i_sort in enumerate(o):
        ti = time[i_sort]
        k = i_org
        # advance k past all samples with the same observed time (ties)
        while k < n_samples and ti == time[o[k]]:
            k += 1
        risk_set[i_sort, o[:k]] = True
    return risk_set


def coxph_loss(event, riskset, predictions):
    """Negative log partial likelihood of Cox's proportional hazards model."""
    # cast the integer event indicator so it can be multiplied with the float losses
    event = tf.cast(event, predictions.dtype)
    # move batch dimension to the end so predictions get broadcast
    # row-wise when multiplying by riskset
    pred_t = tf.transpose(predictions)
    # compute log of sum over risk set for each row
    rr = logsumexp_masked(pred_t, riskset, axis=1, keepdims=True)
    # only uncensored samples contribute to the loss
    losses = tf.multiply(event, rr - predictions)
    loss = tf.reduce_mean(losses)
    return loss


def logsumexp_masked(risk_scores, mask,
                     axis=0, keepdims=None):
    """Compute logsumexp across `axis` for entries where `mask` is true."""
    mask_f = tf.cast(mask, risk_scores.dtype)
    risk_scores_masked = tf.multiply(risk_scores, mask_f)
    # for numerical stability, subtract the maximum value
    # before taking the exponential
    amax = tf.reduce_max(risk_scores_masked, axis=axis, keepdims=True)
    risk_scores_shift = risk_scores_masked - amax
    exp_masked = tf.multiply(tf.exp(risk_scores_shift), mask_f)
    exp_sum = tf.reduce_sum(exp_masked, axis=axis, keepdims=True)
    output = amax + tf.log(exp_sum)
    if not keepdims:
        output = tf.squeeze(output, axis=axis)
    return output

To monitor the training process, we would like to compute the concordance index with respect to a separate validation set. Similar to the Cox PH loss, the concordance index needs access to predicted risk scores and ground truth of all samples in the validation data. While we had to opt for computing the Cox PH loss over a mini-batch, I would not recommend this for the validation data. For small batch sizes and/or high amount of censoring, the estimated concordance index would be quite volatile, which makes it very hard to interpret. In addition, the validation data is usually considerably smaller than the training data, therefore we can collect predictions for the whole validation data and compute the concordance index accurately.
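
A sketch of this evaluation step could look as follows, where val_predictions is assumed to hold the collected risk scores for the complete validation set (obtained the same way as the test-set predictions further below), and event_val and time_val are placeholder names for the validation labels:

from sksurv.metrics import concordance_index_censored

cindex = concordance_index_censored(event_val, time_val, val_predictions)
print(f"Concordance index on validation data: {cindex[0]:.3f}")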

Creating a Convolutional Neural Network for Survival Analysis on MNIST

Finally, after many considerations, we can create a convolutional neural network (CNN) to learn a high-level representation from MNIST digits such that we can estimate each image’s survival function. The CNN follows the LeNet architecture, where the last linear layer has a single output unit that corresponds to the predicted risk score. The predicted risk score, together with the binary event indicator and the risk set, is the input to the Cox PH loss.

def model_fn(features, labels, mode, params):
    """Create a LeNet-style CNN that outputs one risk score per image."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(6, kernel_size=(5, 5), activation='relu', name='conv_1'),
        tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
        tf.keras.layers.Conv2D(16, (5, 5), activation='relu', name='conv_2'),
        tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(120, activation='relu', name='dense_1'),
        tf.keras.layers.Dense(84, activation='relu', name='dense_2'),
        tf.keras.layers.Dense(1, activation='linear', name='dense_3')
    ])
    # derive the training flag from the estimator mode
    is_training = mode == tf.estimator.ModeKeys.TRAIN
    risk_score = model(features, training=is_training)

    if mode == tf.estimator.ModeKeys.PREDICT:
        loss = None
        train_op = None
    else:
        # the partial likelihood loss is needed during both training and evaluation
        loss = coxph_loss(
            event=tf.expand_dims(labels["label_event"], axis=1),
            riskset=labels["label_riskset"],
            predictions=risk_score)
        train_op = None
        if mode == tf.estimator.ModeKeys.TRAIN:
            optim = tf.train.AdamOptimizer(learning_rate=params["learning_rate"])
            gs = tf.train.get_or_create_global_step()
            train_op = tf.contrib.layers.optimize_loss(loss, gs,
                                                       learning_rate=None,
                                                       optimizer=optim)

    return tf.estimator.EstimatorSpec(
        mode=mode,
        loss=loss,
        train_op=train_op,
        predictions={"risk_score": risk_score})


train_spec = tf.estimator.TrainSpec(
    InputFunction(x_train, time_train, event_train,
                  num_epochs=15, drop_last=True, shuffle=True))
eval_spec = tf.estimator.EvalSpec(
    InputFunction(x_test, time_test, event_test))

params = {"learning_rate": 0.0001, "model_dir": "ckpts-mnist-cnn"}
estimator = tf.estimator.Estimator(model_fn, model_dir=params["model_dir"], params=params)

tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
TensorBoard plots of training loss and concordance index on test data.

We can make a couple of observations:

  1. The final concordance index on the validation data is close to the optimal value we computed above using the actual underlying risk scores.
  2. The loss during training is quite volatile, which stems from the small batch size (64) and the varying number of uncensored samples that contribute to the loss in each batch. Increasing the batch size should yield smoother loss curves.

Predicting Survival Functions

For inference, things are much easier: we just pass a batch of images through the network and record the predicted risk score. To estimate individual survival functions, we also need to estimate the baseline hazard function $h_0$, which can be done analogously to the linear Cox PH model by using Breslow’s estimator.
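
For reference, Breslow’s estimator of the cumulative baseline hazard is

$$ \widehat{H}_0(t) = \sum_{i:\, y_i \leq t} \frac{\delta_i}{\sum_{j \in \mathcal{R}_i} \exp\bigl(\hat{f}(\mathbf{x}_j)\bigr)} , $$

where $\hat{f}(\mathbf{x}_j)$ denotes the model’s predicted risk score of the $j$-th training sample ($\mathbf{x}_j^\top \widehat{\mathbf{\beta}}$ in the linear case), and the estimated survival function follows as $\widehat{S}(t \mid \mathbf{x}) = \exp\bigl(-\widehat{H}_0(t) \exp(\hat{f}(\mathbf{x}))\bigr)$.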

from sklearn.model_selection import train_test_split
from sksurv.linear_model.coxph import BreslowEstimator


def make_pred_fn(images, batch_size=64):
    if images.ndim == 3:
        images = images[..., np.newaxis]

    def _input_fn():
        ds = tf.data.Dataset.from_tensor_slices(images)
        ds = ds.batch(batch_size)
        next_x = ds.make_one_shot_iterator().get_next()
        return next_x, None

    return _input_fn


train_pred_fn = make_pred_fn(x_train)
train_predictions = np.array([float(pred["risk_score"])
                              for pred in estimator.predict(train_pred_fn)])
breslow = BreslowEstimator().fit(train_predictions, event_train, time_train)

Once fitted, we can use Breslow’s estimator to obtain estimated survival functions for images in the test data. We randomly draw three sample images for each digit and plot their predicted survival function.

sample = train_test_split(x_test, y_test, event_test, time_test,
                          test_size=30, stratify=y_test, random_state=89)
x_sample, y_sample, event_sample, time_sample = sample[1::2]

sample_pred_fn = make_pred_fn(x_sample)
sample_predictions = np.array([float(pred["risk_score"])
                               for pred in estimator.predict(sample_pred_fn)])
sample_surv_fn = breslow.get_survival_function(sample_predictions)

Solid lines correspond to images that belong to risk group 0 (the lowest risk), which the model was able to identify. Samples from the group with the highest risk are shown as dotted lines. Their predicted survival functions have the steepest descent, confirming that the model correctly identified the different risk groups from images.

Conclusion

We successfully built, trained, and evaluated a convolutional neural network for survival analysis on MNIST. While MNIST is obviously not a clinical dataset, the exact same approach can be used for clinical data. For instance, Mobadersany et al. used the same approach to predict overall survival of patients diagnosed with brain tumors from microscopic images, and Zhu et al. applied CNNs to predict survival of lung cancer patients from pathological images.

July 29, 2019 05:38 AM

July 27, 2019

Sebastian Pölsterlscikit-survival 0.9 released

This release of scikit-survival adds support for scikit-learn 0.21 and pandas 0.24, among a couple of other smaller fixes. Please see the release notes for a full list of changes. If you are using scikit-survival in your research, you can now cite it using a Digital Object Identifier (DOI).

As usual, the latest version can be obtained via conda or pip. Pre-built conda packages are available for Linux, OSX and Windows:

 conda install -c sebp scikit-survival

Alternatively, scikit-survival can be installed from source via pip:

 pip install -U scikit-survival

July 27, 2019 07:11 PM

July 15, 2019

Jean-François Fortin TamAvailable for hire, 2019 edition

Hey folks, I’m back and I’m looking for some new work to challenge me, preferably again for an organization that does something good and meaningful for the world. You can read my profile on my website, or keep reading here to discover what I’ve been up to in the past few years.

Sometime after the end of my second term on the GNOME Foundation, I was contacted by a mysterious computer vendor that ships a vanilla GNOME on their laptops, Purism.

A laptop that was sent to me for review

They wanted my help to get their business back on track, and so I did. I began with the easy, low-hanging fruit:

  • Reviewing and restructuring their public-facing content;
  • Doing in-depth technical reviewing of their hardware products, finding industrial design flaws and reporting extensively on ways the products could be improved in future revisions;
  • Using my photo & video studio to shoot official real-world images (some of which you can see below) for use in various marketing collaterals. I also produced and edited videos that played a strong part in increasing the public’s confidence in these products.

As my work was appreciated and I was effectively showing business acumen and leadership across teams & departments, I was shortly afterwards promoted from “director of communications” to CMO.

At the very beginning I had thought it would be a short-lived contract; in practice, my partnership with Purism lasted nearly three years, as I helped the company go from strength to strength. I guess I must’ve done something right 😉

Here are some of the key accomplishments during that time:

Fun designing a professional technical brochure for conferences
  • Grew the business’ gross monthly revenue significantly (by a factor of 10, and up to a factor of 55) over the course of two years.
  • Helped devise and run the Librem 5 phone crowdfunding campaign that raised over US$2.1 million, with significant press coverage. This proved initial market demand and reduced the risk of entering this new market. As the Linux Action Show commented during two of their episodes: “Wow, can we hire this PR department?” “They’ve done such a good job at promoting this!”
  • Made the public-facing brand shine. Over time, converted some of the toughest critics into avid supporters, and turned the company’s name into one that earned trust and commands respect in our industry.
  • Did extensive research of over a hundred events (tradeshows, conferences) aligned with Purism’s business; planned and optimized sponsorships and team attendance to a selection of these events. Designed bespoke brochures, manned product exhibit booths, etc.
  • Leveraged good news, mitigated setbacks, managed customer’s expectations.
  • Devised department budget approximations and projections in preparation for investment growth.
  • Provided support and business experience to the “operations” & “customer support” departments.
  • Defined the marketing department structure and wrote critical roles to recruit for. The director of sales commented that those were “the best job descriptions [he’d] ever seen, across 50 organizations”, so apparently marketeers can make great recruiting copywriters too 😉
  • Identified many marketing and community management infrastructure issues, oversaw the deployment of solutions.
  • Onboarded members of the sales & bizdev teams so that they could blend into the organization’s culture, tap into tacit knowledge and hit the ground running.
  • Coined the terms “Adaptive Design” and “Adaptive Applications” as a better, more precise terminology for convergent software in the GNOME platform. Yes, I was the team’s ghostwriter at times, and did extensive copy editing to turn technical reports into blog posts that would appeal to a wider audience while satisfying accuracy requirements.
  • Designed public surveys to gauge market demand for specific products in the B2C space, or to assess enterprise products & services requirements in the B2B space.
  • Etc. Etc.

That’s the gist of it.

With all that said, startups face challenges outside the scope of any single department. There comes a moment when your expertise has made all the difference it could in that environment, and at that point it makes sense to conclude the work and seek a new challenge.

After spending a few weeks winding down that project and doing some personal R&D (there were lots of things to take care of in my backlog), I am now officially announcing my availability for hire. Or, in retweetable words:

If you know a business or organization that would benefit from my help, please feel free to share this blog post with them, or to contact me to let me know about opportunities.

The post Available for hire, 2019 edition appeared first on The Open Sourcerer.

by Jeff at July 15, 2019 12:40 PM