July 27, 2020

Michael Sheldon: Emoji Support for Linux Flutter Apps

(Michael Sheldon)

Recently Canonical have been working alongside Google to make it possible to write native Linux apps with Flutter. In this short tutorial, I’ll show you how you can render colour fonts, such as emoji, within your Flutter apps.

First we’ll create a simple application that attempts to display a few emoji:

import 'package:flutter/material.dart';

void main() {
  runApp(EmojiApp());
}

class EmojiApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Emoji Demo',
      theme: ThemeData(
        primarySwatch: Colors.blue,
        visualDensity: VisualDensity.adaptivePlatformDensity,
      ),
      home: EmojiHomePage(title: '🐹 Emoji Demo 🐹'),
    );
  }
}

class EmojiHomePage extends StatefulWidget {
  EmojiHomePage({Key key, this.title}) : super(key: key);
  final String title;

  @override
  _EmojiHomePageState createState() => _EmojiHomePageState();
}

class _EmojiHomePageState extends State<EmojiHomePage> {

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text(
          widget.title,
        ),
      ),
      body: Center(
        child: Text(
          '🐶 🐈 🐇',
        ),
      ),
    );
  }
}

However, when we run it we find that our emoji characters aren’t rendering correctly:

Screenshot showing application failing to render emoji

For Flutter to be able to display colour fonts we need to explicitly bundle them with our application. We can do this by saving the emoji font we wish to use to our project directory; to keep things organised, I’ve created a sub-directory called ‘fonts’ for this. Then we need to edit our ‘pubspec.yaml’ to include information about this font file:

name: emojiexample
description: An example of displaying emoji in Flutter apps
publish_to: 'none'
version: 1.0.0+1
environment:
  sdk: ">=2.7.0 <3.0.0"

dependencies:
  flutter:
    sdk: flutter

dev_dependencies:
  flutter_test:
    sdk: flutter

flutter:
  uses-material-design: true
  fonts:
     - family: EmojiOne
       fonts:
         - asset: fonts/emojione-android.ttf

I’m using the original EmojiOne font, which was released by Ranks.com under the Creative Commons Attribution 4.0 License.

Finally, we need to update our application code to specify the font family to use when rendering text:

class _EmojiHomePageState extends State<EmojiHomePage> {

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text(
          widget.title,
          style: TextStyle(fontFamily: 'EmojiOne'),
        ),
      ),
      body: Center(
        child: Text(
          '🐶 🐈 🐇',
          style: TextStyle(fontFamily: 'EmojiOne', fontSize: 32),
        ),
      ),
    );
  }
}

Now when we run our app our emoji are all rendered as expected:

Screenshot showing emoji rendering correctly

The full source code for this example can be found here: https://github.com/Elleo/flutter-emojiexample

by Mike at July 27, 2020 03:30 PM

July 21, 2020

Sebastian Dröge: Automatic retry on error and fallback stream handling for GStreamer sources

(Sebastian Dröge)

A very common problem in GStreamer, especially when working with live network streams, is that the source might just fail at some point. Your own network might have problems, the source of the stream might have problems, …

Without any special handling of such situations, the default behaviour in GStreamer is to simply report an error and let the application worry about handling it. The application might for example want to restart the stream, or it might simply want to show an error to the user, or it might want to show a fallback stream instead, telling the user that the stream is currently not available and then seamlessly switch back to the stream once it comes back.

Implementing all of the aforementioned is quite some effort, especially to do it in a robust way. To make it easier for applications I implemented a new plugin called fallbackswitch that contains two elements to automate this.

It is part of the GStreamer Rust plugins and also included in the recent 0.6.0 release, which can also be found on the Rust package (“crate”) repository crates.io.

Installation

For using the plugin you most likely first need to compile it yourself, unless you’re lucky enough that e.g. your Linux distribution includes it already.

Compiling it requires a Rust toolchain and GStreamer 1.14 or newer. The former you can get via rustup for example, if you don’t have it yet, the latter either from your Linux distribution or by using the macOS, Windows, etc binaries that are provided by the GStreamer project. Once that is done, compiling is mostly a matter of running cargo build in the utils/fallbackswitch directory and copying the resulting libgstfallbackswitch.so (or .dll or .dylib) into one of the GStreamer plugin directories, for example ~/.local/share/gstreamer-1.0/plugins.

fallbackswitch

The first of the two elements is fallbackswitch. It acts as a filter that can be placed into any kind of live stream. It consumes one main stream (which must be live) and outputs this stream as-is if everything works well. Based on the timeout property it detects if this main stream didn’t have any activity for the configured amount of time, or everything arrived too late for that long, and then seamlessly switches to a fallback stream. The fallback stream is the second input of the element and does not have to be live (but it can be).

Switching between main stream and fallback stream doesn’t only work for raw audio and video streams but also works for compressed formats. The element will take constraints like keyframes into account when switching, and if necessary/possible also request new keyframes from the sources.

For example, to play the Sintel trailer over the network and display a test pattern if it doesn’t produce any data, the following pipeline can be constructed:

gst-launch-1.0 souphttpsrc location=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm ! \
    decodebin ! identity sync=true ! fallbackswitch name=s ! videoconvert ! autovideosink \
    videotestsrc ! s.fallback_sink

Note the identity sync=true in the main stream here as we have to convert it to an actual live stream.

Now when running the above command and disconnecting from the network, the video should freeze at some point and after 5 seconds a test pattern should be displayed.

However, when using fallbackswitch the application will still have to take care of handling actual errors from the main source and possibly restarting it. Waiting a bit longer after disconnecting the network with the above command will report an error, which then stops the pipeline.

To make that part easier there is the second element.

fallbacksrc

The second element is fallbacksrc and as the name suggests it is an actual source element. When using it, the main source can be configured via a URI or by providing a custom source element. Internally it then takes care of buffering the source, converting non-live streams into live streams and restarting the source transparently on errors. The various timeouts for this can be configured via properties.

Different to fallbackswitch it also handles audio and video at the same time and demuxes/decodes the streams.

Currently the only fallback streams that can be configured are still images for video. For audio the element will always output silence for now, and if no fallback image is configured for video it outputs black instead. In the future I would like to add support for arbitrary fallback streams, which hopefully shouldn’t be too hard. The basic infrastructure for it is already there.

To use it again in our previous example and have a JPEG image displayed whenever the source does not produce any new data, the following can be done:

gst-launch-1.0 fallbacksrc uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm \
    fallback-uri=file:///path/to/some/jpg ! videoconvert ! autovideosink

Now when disconnecting the network, after a while (longer than before because fallbacksrc does additional buffering for non-live network streams) the fallback image should be shown. Different to before, waiting longer will not lead to an error and reconnecting the network causes the video to reappear. However as this is not an actual live-stream, right now playback would again start from the beginning. Seeking back to the previous position would be another potential feature that could be added in the future.

Overall these two elements should make it easier for applications to handle errors in live network sources. While the two elements are still relatively minimal feature-wise, they should already be usable in various real scenarios and are already used in production.

As usual, if you run into any problems or are missing some features, please create an issue in the GStreamer bug tracker.

by slomo at July 21, 2020 01:12 PM

July 15, 2020

Sebastian Dröge: GStreamer Rust Bindings & Plugins New Releases

(Sebastian Dröge)

It has been quite a while since the last status update for the GStreamer Rust bindings and the GStreamer Rust plugins, so the new releases last week make for a good opportunity to do so now.

Bindings

I won’t write too much about the bindings this time. The latest version as of now is 0.16.1, which means that since I started working on the bindings there were 8 major releases. In that same time there were 45 contributors working on the bindings, which seems quite a lot and really makes me happy.

Just as before, I don’t think any major APIs are missing from the bindings anymore, even for implementing subclasses of the various GStreamer types. The wide usage of the bindings in Free Software projects and commercial products also shows both the interest in writing GStreamer applications and plugins in Rust as well as that the bindings are complete enough and production-ready.

Most of the changes since the last status update involve API cleanups, usability improvements, various bugfixes and addition of minor API that was not included before. The details of all changes can be read in the changelog.

The bindings work with any GStreamer version since 1.8 (released more than 4 years ago), support APIs up to GStreamer 1.18 (to be released soon) and work with Rust 1.40 or newer.

Plugins

The biggest progress probably happened with the GStreamer Rust plugins.

There also was a new release last week, 0.6.0, which was the first release where selected plugins were also uploaded to the Rust package (“crate”) database crates.io. This makes it easy for Rust applications to embed any of these plugins statically instead of depending on them to be available on the system.

Overall there are now 40 GStreamer elements in 18 plugins by 28 contributors available as part of the gst-plugins-rs repository, one tutorial plugin with 4 elements and various plugins in external locations.

These 40 GStreamer elements are the following:

Audio
  • rsaudioecho: Port of the audioecho element from gst-plugins-good
  • rsaudioloudnorm: Live audio loudness normalization element based on the FFmpeg af_loudnorm filter
  • claxondec: FLAC lossless audio codec decoder element based on the pure-Rust claxon implementation
  • csoundfilter: Audio filter that can use any filter defined via the Csound audio programming language
  • lewtondec: Vorbis audio decoder element based on the pure-Rust lewton implementation
Video
  • cdgdec/cdgparse: Decoder and parser for the CD+G video codec based on a pure-Rust CD+G implementation, used for example by karaoke CDs
  • cea608overlay: CEA-608 Closed Captions overlay element
  • cea608tott: CEA-608 Closed Captions to timed-text (e.g. VTT or SRT subtitles) converter
  • tttocea608: CEA-608 Closed Captions from timed-text converter
  • mccenc/mccparse: MacCaption Closed Caption format encoder and parser
  • sccenc/sccparse: Scenarist Closed Caption format encoder and parser
  • dav1dec: AV1 video decoder based on the dav1d decoder implementation by the VLC project
  • rav1enc: AV1 video encoder based on the fast and pure-Rust rav1e encoder implementation
  • rsflvdemux: Alternative to the flvdemux FLV demuxer element from gst-plugins-good, not feature-equivalent yet
  • rsgifenc/rspngenc: GIF/PNG encoder elements based on the pure-Rust implementations by the image-rs project
Text
  • textwrap: Element for line-wrapping timed text (e.g. subtitles) for better screen-fitting, including hyphenation support for some languages
Network
  • reqwesthttpsrc: HTTP(S) source element based on the Rust reqwest/hyper HTTP implementations and almost feature-equivalent with the main GStreamer HTTP source souphttpsrc
  • s3src/s3sink: Source/sink element for the Amazon S3 cloud storage
  • awstranscriber: Live audio to timed text transcription element using the Amazon AWS Transcribe API
Generic
  • sodiumencrypter/sodiumdecrypter: Encryption/decryption element based on libsodium/NaCl
  • togglerecord: Recording element that allows pausing/resuming recordings easily and considers keyframe boundaries
  • fallbackswitch/fallbacksrc: Elements for handling potentially failing (network) sources, restarting them on errors/timeout and showing a fallback stream instead
  • threadshare: Set of elements that provide alternatives for various existing GStreamer elements but allow sharing the streaming threads between each other to reduce the number of threads
  • rsfilesrc/rsfilesink: File source/sink elements as replacements for the existing filesrc/filesink elements

by slomo at July 15, 2020 05:00 PM

July 14, 2020

Seungha Yang: Bringing Microsoft Media Foundation to GStreamer

The Microsoft Media Foundation plugin has finally landed as part of GStreamer 1.17!

Currently it supports the following features:

  • Video capture from webcam (and UWP support)
  • H.264/HEVC/VP9 video encoding
  • AAC/MP3 audio encoding

NOTE : Strictly speaking, the UWP video capture implementation is not part of the Media Foundation API. The internal implementation is based on the Windows.Media.Capture API.
Due to the structural similarity between Media Foundation and the WinRT Media API, however, it makes sense to include the UWP video capture implementation in this plugin.

Media Foundation is known as the successor of DirectShow.

As DirectShow does, Media Foundation provides various media-related functionality, but most of the features (muxing, demuxing, capturing, rendering, decoding/encoding and pipelining of relevant processing functionality) of Media Foundation can be replaced with GStreamer.

Then why do we need Media Foundation on Windows? Isn’t GStreamer enough?

Why do we need Media Foundation then?

When it comes to software implementation, there might be several alternatives such as the well-known x264 software encoder, but what’s the situation with hardware implementations?

A very important point here is that hardware vendors such as Intel, Nvidia, AMD and Qualcomm are abstracting their hardware video encoding API via Media Foundation. Therefore, device-agnostic, hardware-accelerated media processing can be achieved using Media Foundation (more specifically, via the Media Foundation Transform API) without any external library dependencies.

Moreover, MFT (Media Foundation Transform) encoders can be used in UWP applications (but some codecs might be blacklisted by OS in this case).

Well, then what would be the difference between the well-known MSDK (Intel Media SDK), NVCODEC (Nvidia Codec SDK) and Media Foundation implementations?

From my perspective, a Media Foundation implementation could be as powerful as the vendor-specific APIs, because Media Foundation uses vendor implementations underneath (e.g., libmfxhw64.dll for Intel and nvEncodeAPI64.dll for NVidia). That’s not the case for the moment, though; there are some overheads and limitations in GStreamer’s Media Foundation API integration.

To make the Media Foundation plugin as performant as the vendor-specific APIs, there is some remaining work to be done. For example, Direct3D support in the Media Foundation plugin is one such potential improvement.

Media Foundation plugin details

The gst-inspect-1.0 example below summarizes the list of elements belonging to the Media Foundation plugin. Similar to the GStreamer D3D11 plugin, the Media Foundation plugin will enumerate available encoder MFTs first, and then register each MFT separately. You might therefore see a different list of elements on your system (or their descriptions might be different).

[gst-master] PS C:\Work\gst-build> gst-inspect-1.0.exe mediafoundation
Plugin Details:
  Name                     mediafoundation
  Description              Microsoft Media Foundation plugin
  Filename                 C:\Work\GST-BU~1\build\SUBPRO~1\GST-PL~3\sys\MEDIAF~1\gstmediafoundation.dll
  Version                  1.17.1.1
  License                  LGPL
  Source module            gst-plugins-bad
  Binary package           GStreamer Bad Plug-ins git
  Origin URL               Unknown package origin

mfmp3enc: Media Foundation MP3 Encoder ACM Wrapper MFT
mfaacenc: Media Foundation Microsoft AAC Audio Encoder MFT
mfvp9device1enc: Media Foundation VP9VideoExtensionEncoder
mfvp9enc: Media Foundation Intel® Hardware VP9 Encoder MFT
mfh265device1enc: Media Foundation HEVCVideoExtensionEncoder
mfh265enc: Media Foundation Intel® Hardware H265 Encoder MFT
mfh264device1enc: Media Foundation H264 Encoder MFT
mfh264enc: Media Foundation Intel® Quick Sync Video H.264 Encoder MFT
mfdeviceprovider: Media Foundation Device Provider
mfvideosrc: Media Foundation Video Source

10 features:
+-- 9 elements
+-- 1 device providers
  • mfvideosrc: This element is a source element which will capture video from your webcam. Note that you can use this element in your UWP application.
  • mfdeviceprovider: Available video capture devices can be enumerated by this device provider implementation, and it can provide corresponding mfvideosrc elements.
  • mf{h264,h265,vp9,aac,mp3}enc: Each element is responsible for encoding raw video/audio data into compressed data. In the above example, you can see two h264 encoders, mfh264enc and mfh264device1enc. That’s the case when a (Microsoft-approved) hardware MFT is present on your system: the hardware MFT is registered first (as mfh264enc) and the software MFT is then assigned a lower rank.

NOTE : To build the Media Foundation GStreamer plugin, you should use the MSVC compiler, as there might be some missing symbols in the MinGW toolchain.

Wait, where are audio sources?

Audio capture sources are not implemented in this plugin. Use the wasapi or wasapi2 plugin in this case. In general, audio processing requires more complicated timing information and control. Unfortunately, Media Foundation doesn’t provide such low-level control for users, but the wasapi API does.

A short comment about the wasapi2 plugin: it was introduced as part of GStreamer 1.17 for the purpose of UWP support (it should work in Win32 applications as well). As a result of UWP support, however, the wasapi2 plugin requires Windows 10, as it uses very new Windows APIs (it might work on Windows 8, but I’ve tested the wasapi2 plugin only on Windows 10).

Some codecs and software decoders are not implemented in this plugin yet, but I expect they should be added soon!
And regarding hardware video decoder implementations, please refer to my previous DXVA2 blog post.

by Seungha Yang at July 14, 2020 05:18 PM

Jan Schmidt: OpenHMD and the Oculus Rift

(Jan Schmidt)

For some time now, I’ve been involved in the OpenHMD project, working on building an open driver for the Oculus Rift CV1, and more recently the newer Rift S VR headsets.

This post is a bit of an overview of how the 2 devices work from a high level for people who might have used them or seen them, but not know much about the implementation. I also want to talk about OpenHMD and how it fits into the evolving Linux VR/AR API stack.

OpenHMD

http://www.openhmd.net/

In short, OpenHMD is a project providing open drivers for various VR headsets through a single simple API. I don’t know of any other project that provides support for as many different headsets as OpenHMD, so it’s the logical place to contribute for largest effect.

OpenHMD is supported as a backend in Monado, and in SteamVR via the SteamVR-OpenHMD plugin. Working drivers in OpenHMD open up a range of VR games – as well as non-gaming applications like Blender. I think it’s important that Linux and friends not get left behind – in what is basically a Windows-only activity right now.

One downside is that it does come with the usual disadvantages of an abstraction API, in that it doesn’t fully expose the varied capabilities of each device, but instead the common denominator. I hope we can fix that in time by extending the OpenHMD API, without losing its simplicity.

Oculus Rift S

I bought an Oculus Rift S in April, to supplement my original consumer Oculus Rift (the CV1) from 2017. At that point, the only way to use it was in Windows via the official Oculus driver as there was no open source driver yet. Since then, I’ve largely reverse engineered the USB protocol for it, and have implemented a basic driver that’s upstream in OpenHMD now.

I find the Rift S a somewhat interesting device. It’s not entirely an upgrade over the older CV1. The build quality, and some of the specifications are actually worse than the original device – but one area that it is a clear improvement is in the tracking system.

CV1 Tracking

The Rift CV1 uses what is called an outside-in tracking system, which has 2 major components. The first is input from Inertial Measurement Units (IMU) on each device – the headset and the 2 hand controllers. The 2nd component is infrared cameras (Rift Sensors) that you space around the room and then run a calibration procedure that lets the driver software calculate their positions relative to the play area.

IMUs provide readings of linear acceleration and angular velocity, which can be used to determine the orientation of a device, but don’t provide absolute position information. You can derive relative motion from a starting point using an IMU, but only over a short time frame as the integration of the readings is quite noisy.

This is where the Rift Sensors get involved. The cameras observe constellations of infrared LEDs on the headset and hand controllers, and use those in concert with the IMU readings to position the devices within the playing space – so that as you move, the virtual world accurately reflects your movements. The cameras and LEDs synchronise to a radio pulse from the headset, and the camera exposure time is kept very short. That means the picture from the camera is completely black, except for very bright IR sources. Hopefully that means only the LEDs are visible, although light bulbs and open windows can inject noise and make the tracking harder.

Rift Sensor view of the CV1 headset and 2 controllers.

If you have both IMU and camera data, you can build what we call a 6 Degree of Freedom (6DOF) driver. With only IMUs, a driver is limited to providing 3 DOF – allowing you to stand in one place and look around, but not to move.

OpenHMD provides a 3DOF driver for the CV1 at this point, with experimental 6DOF work in a branch in my fork. Getting to a working 6DOF driver is a real challenge. The official drivers from Oculus still receive regular updates to tweak the tracking algorithms.

I have given several presentations about the progress on implementing positional tracking for the CV1. Most recently at Linux.conf.au 2020 in January. There’s a recording at https://www.youtube.com/watch?v=PTHE-cdWN_s if you’re interested, and I plan to talk more about that in a future post.

Rift S Tracking

The Rift S uses Inside Out tracking, which inverts the tracking process by putting the cameras on the headset instead of around the room. With the cameras in fixed positions on the headset, the cameras and their view of the world move as the user’s head moves. For the Rift S, there are 5 individual cameras pointing outward in different directions to provide (overall) a very wide-angle view of the surroundings.

The role of the tracking algorithm in the driver in this scenario is to use the cameras to look for visual landmarks in the play area, and to combine that information with the IMU readings to find the position of the headset. This is called Visual Inertial Odometry.

There is then a 2nd part to the tracking – finding the position of the hand controllers. This part works the same as on the CV1 – looking for constellations of LED lights on the controllers and matching what you see to a model of the controllers.

This is where I think the tracking gets particularly interesting. The requirements for finding where the headset is in the room, and the goal of finding the controllers require 2 different types of camera view!

To find the landmarks in the room, the vision algorithm needs to be able to see everything clearly and you want a balanced exposure from the cameras. To identify the controllers, you want a very fast exposure synchronised with the bright flashes from the hand controller LEDs – the same as when doing CV1 tracking.

The Rift S satisfies both requirements by capturing alternating video frames with fast and normal exposures. Each time, it captures the 5 cameras simultaneously and stitches them together into 1 video frame to deliver over USB to the host computer. The driver then needs to split each frame according to whether it is a normal or fast exposure and dispatch it to the appropriate part of the tracking algorithm.

Rift S – normal room exposure for Visual Inertial Odometry. Rift S – fast exposure with IR LEDs for controller tracking.

There are a bunch of interesting things to notice in these camera captures:

  • Each camera view is inserted into the frame in some native orientation, and requires external information to make use of the information in them
  • The cameras have a lot of fisheye distortion that will need correcting.
  • In the fast exposure frame, the light bulbs on my ceiling are hard to tell apart from the hand controller LEDs – another challenge for the computer vision algorithm.
  • The cameras are Infrared only, which is why the Rift S passthrough view (if you’ve ever seen it) is in grey-scale.
  • The top 16-pixels of each frame contain some binary data to help with frame identification. I don’t know how to interpret the contents of that data yet.

Status

This blog post is already too long, so I’ll stop here. In part 2, I’ll talk more about deciphering the Rift S protocol.

Thanks for reading! If you have any questions, hit me up at mailto:thaytan@noraisin.net or @thaytan on Twitter

by thaytan at July 14, 2020 05:02 PM

July 11, 2020

Sebastian Dröge: Live loudness normalization in GStreamer & experiences with porting a C audio filter to Rust

(Sebastian Dröge)

A few months ago I wrote a new GStreamer plugin: an audio filter for live loudness normalization and automatic gain control.

The plugin can be found as part of the GStreamer Rust plugins in the audiofx plugin. It’s also included in the recent 0.6.0 release of the GStreamer Rust plugins and available from crates.io.

Its code is based on Kyle Swanson’s great FFmpeg filter af_loudnorm, about which he wrote some more technical details on his blog a few years back. I’m not going to repeat all that here, if you’re interested in those details and further links please read Kyle’s blog post.

From a very high level, the filter works by measuring the loudness of the input following the EBU R128 standard with a 3s lookahead, adjusting the gain to reach the target loudness and then applying a true peak limiter with 10ms to prevent any too-high peaks from getting passed through. Both the target loudness and the maximum peak can be configured via the loudness-target and max-true-peak properties, same as in the FFmpeg filter. Different to the FFmpeg filter, I only implemented the “live” mode and not the two-pass mode that is implemented in FFmpeg, which first measures the loudness of the whole stream and then adjusts it in a second pass.

Below I’ll describe the usage of the filter in GStreamer a bit and also some information about the development process, and the porting of the C code to Rust.

Usage

For using the filter you most likely first need to compile it yourself, unless you’re lucky enough that e.g. your Linux distribution includes it already.

Compiling it requires a Rust toolchain and GStreamer 1.8 or newer. The former you can get via rustup for example, if you don’t have it yet, the latter either from your Linux distribution or by using the macOS, Windows, etc binaries that are provided by the GStreamer project. Once that is done, compiling is mostly a matter of running cargo build in the audio/audiofx directory and copying the resulting libgstrsaudiofx.so (or .dll or .dylib) into one of the GStreamer plugin directories, for example ~/.local/share/gstreamer-1.0/plugins.

After that boring part is done, you can use it for example as follows to run loudness normalization on the Sintel trailer:

gst-launch-1.0 playbin \
    uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm \
    audio-filter="audioresample ! rsaudioloudnorm ! audioresample ! capsfilter caps=audio/x-raw,rate=48000"

As can be seen above, it is necessary to put audioresample elements around the filter. The reason for that is that the filter currently only works on 192kHz input. This is a simplification for now to make it easier inside the filter to detect true peaks. You would first upsample your audio to 192kHz and then, if needed, later downsample it again to your target sample rate (48kHz in the example above). See the link mentioned before for details about true peaks and why this is generally a good idea to do. In the future the resampling could be implemented internally and maybe optionally the filter could also work with “normal” peak detection on the non-upsampled input.

Apart from that caveat the filter element works like any other GStreamer audio filter and can be placed accordingly in any GStreamer pipeline.

If you run into any problems using the code or it doesn’t work well for your use-case, please create an issue in the GStreamer bugtracker.

The process

As I wrote above, the GStreamer plugin is part of the GStreamer Rust plugins so the first step was to port the FFmpeg C code to Rust. I expected that to be the biggest part of the work, but as writing Rust is simply so much more enjoyable than writing C and I would have to adjust big parts of the code to fit the GStreamer infrastructure anyway, I took this approach nonetheless. The alternative of working based on the C code and writing the plugin in C didn’t seem very appealing to me. In the end, as usual when developing in Rust, this also allowed me to be more confident about the robustness of the result and probably reduced the amount of time spent debugging. Surprisingly, the translation was actually not the biggest part of the work, but instead I had to debug a couple of issues that were already present in the original FFmpeg code and find solutions for them. But more on that later.

The first step for porting the code was to get an implementation of the EBU R128 loudness analysis. In FFmpeg they’re using a fork of the libebur128 C library. I checked if there was anything similar for Rust already, maybe even a pure-Rust implementation of it, but couldn’t find anything. As I didn’t want to write one myself or port the code of the libebur128 C library to Rust, I wrote safe Rust bindings for that library instead. The end result of that can be found on crates.io as an independent crate, in case someone else also needs it for other purposes at some point. The crate also includes the code of the C library, making it as easy as possible to build and include into other projects.

The next step was to actually port the FFmpeg C code to Rust. In the end that was a rather straightforward translation fortunately. The latest version of that code can be found here.

The biggest difference to the C code is the usage of Rust iterators and iterator combinators like zip and chunks_exact. In my opinion this makes the code quite a bit easier to read compared to the manual iteration in the C code together with array indexing, and as a side effect it should also make the code run faster in Rust, as it makes it possible to get rid of a lot of array bounds checks.

Apart from that, one part that was a bit inconvenient during that translation and still required manual array indexing is the usage of ringbuffers everywhere in the code. For now I wrote those like I would in C and used a few unsafe operations like get_unchecked to avoid redundant bounds checks, but at a later time I might refactor this into a proper ringbuffer abstraction for such audio processing use-cases. It’s not going to be the last time I need such a data structure. A short search on crates.io gave various results for ringbuffers but none of them seem to provide an API that fits the use-case here. Once that’s abstracted away into a nice data structure, I believe the Rust code of this filter is really nice to read and follow.

Now to the less pleasant parts, and also a small warning to all the people asking for Rust rewrites of everything: of course I introduced a couple of new bugs while translating the code although this was a rather straightforward translation and I tried to be very careful. I’m sure there is also still a bug or two left that I didn’t find while debugging. So always keep in mind that rewriting a project will also involve adding new bugs that didn’t exist in the original code. Or maybe you’re just a better programmer than me and don’t make such mistakes.

Debugging these issues that showed up while testing the code was a good opportunity to also add extensive code comments everywhere so I don’t have to remind myself every time again what this block of code is doing exactly, and it’s something I was missing a bit from the FFmpeg code (it doesn’t have a single comment currently). While writing those comments and explaining the code to myself, I found the majority of these bugs that I introduced and as a side-effect I now have documentation for my future self or other readers of the code.

Fixing these issues I introduced myself wasn’t that time-consuming either in the end, fortunately, but while writing those code comments and also while doing more testing on various audio streams, I found a couple of bugs that already existed in the original FFmpeg C code. Further testing also showed that they caused quite audible distortions on various test streams. These are the bugs that unfortunately took most of the time in the whole process, but at least to my knowledge there are no known bugs left in the code now.

For these bugs in the FFmpeg code I also provided a fix that is merged already, and reported the other two in their bug tracker.

The first one I’d be happy to provide a fix for if my approach is considered correct, but the second one I’ll leave for someone else. Porting over my Rust solution for that one will take some time and getting all the array indexing involved correct in C would require some serious focusing, for which I currently don’t have the time.

Or maybe my solutions to these problems are actually wrong, or my understanding of the original code was wrong and I actually introduced them in my translation, which also would be useful to know.

Overall, while porting the C code to Rust introduced a few new problems that had to be fixed, I would definitely do this again for similar projects in the future. It’s more fun to write and in my opinion the resulting code is easier readable, and better to maintain and extend.

by slomo at July 11, 2020 05:34 PM

July 10, 2020

Víctor Jáquez: New VA-API H.264 decoder in gst-plugins-bad

Recently, a new H.264 decoder, using VA-API, was merged in gst-plugins-bad.

Why another VA-based H.264 decoder if there is already gstreamer-vaapi?

As usual, an historical perspective may give some clues.

It started when Seungha Yang implemented the GStreamer decoders for Windows using DXVA2 and D3D11 APIs.

Perhaps we need one step back and explain what are stateless decoders.

Video decoders are magic and opaque boxes where we push encoded frames, and later we’ll pop full decoded frames in raw format. This is how OpenMAX and V4L2 decoders work, for example.

Internally we can imagine those magic and opaque boxes have two main operations:

  • Codec state handling
  • Signal processing like Fourier-related transformations (such as DCT), entropy coding, etc. (DSP, in general)

The codec state handling basically extracts, from the stream, the frame’s parameters and its compressed data, so the DSP algorithms can decode the frames. Codec state handling can be done with generic CPUs, while DSP algorithms are massively improved through specific purpose processors.

These video decoders are known as stateful decoders, and usually they are distributed through binary and closed blobs.

Soon, silicon vendors realized they could offload the burden of state handling to third-party user-space libraries, releasing what is known as stateless decoders. With them, your code not only has to push frames into the opaque box, but now it also has to handle the codec specifics to provide all the parameters and references for each frame. VAAPI and DXVA2 are examples of those stateless decoders.

Returning to Seungha’s implementation, in order to get their DXVA2/D3D11 decoders, they also needed a state handler library for each codec. And Seungha wrote that library!

Initially they wanted to reuse the state handling in gstreamer-vaapi, which works pretty well, but its internal library, from the GStreamer perspective, is over-engineered: it is impossible to rip out only the state handling without importing all its data types. Which is kind of sad.

Later, Nicolas Dufresne realized that this library can be re-used by other GStreamer plugins, because more stateless decoders are now available, particularly V4L2 stateless, in which he is interested. Nicolas moved Seungha’s code into a library in gst-plugins-bad.

Currently, libgstcodecs provides state handling of H.264, H.265, VP8 and VP9.

Let’s return to our original question: Why another VA-based H.264 decoder if there is already one in gstreamer-vaapi?

The quick answer is «to pay my technical debt».

As we already mentioned, gstreamer-vaapi is big and over-engineered, though we have been simplifying the internal libraries; in particular, He Junyan has done a lot of work replacing the internal base class, GstVaapiObject, with GstObject or GstMiniObject. Also, this kind of project, where there’s a lot of untouched code, carries a lot of cargo-cult decisions.

So I took the libgstcodecs opportunity to write a simple, thin and lean H.264 decoder, using new VA API calls (vaExportSurfaceHandle(), for example) and learning from other implementations, such as FFmpeg and ChromeOS. This exercise allowed me to identify where the dusty spots in gstreamer-vaapi are and how they should be fixed (and we have been doing that since then!).

Also, this opportunity led me to learn a bit more about the H.264 specification, since I implemented the reference picture list handling, and I fixed a small bug in Chromium.

Now, let me be crystal clear: GStreamer VA-API is not going anywhere. It is, right now, one of the most feature-complete implementations using VA-API, even with its integration issues, and we are working on them, particularly, Intel folks are working hard on a new AV1 decoder, enhancing encoders and adding new video post-processing features.

But, this new vah264dec is an experimental VA-API decoder, which aims towards a tight integration with GStreamer, oriented to provide a good experience in most of the common use cases and to enhance the common libgstcodecs library shared with other stateless decoders, looking to avoid Intel specific nuances.

These are the main characteristics and plans of this new decoder:

  • It uses, by default, a DRM connection to the VA display, avoiding the troubles of choosing X11 or Wayland.
    • It uses the first DRM device found as the VA display
    • In the future, users will be able to provide their custom VA display through the pipeline’s context.
  • It requires libva >= 1.6
  • No multiview/stereo profiles, neither interlaced streams, because libgstcodecs doesn’t handle them yet
  • It is incompatible with gstreamer-vaapi: mixing elements might lead to problems.
  • Even if memory:VAMemory is exposed, it is not handled by any other element yet.
    • Users will get VASurfaces via mapping as GstGL does with textures.
  • Caps templates are dynamically generated by querying VA-API
  • YV12 and I420 are added for system memory caps because they seem to be supported for all the drivers when downloading frames onto main memory, as they are used by xvimagesink and others, avoiding color conversion.
  • Decoding surfaces aren’t bound to the context, so they can grow beyond the DPB size, allowing smooth reverse playback.
  • There isn’t any error handling and recovery yet.
  • The plugin is supposed to spawn separate elements if different renderD nodes with VA-API driver support are found (like gstv4l2 does), but this hasn’t been tested yet.

Now you may be asking how do I use vah264dec?

Currently vah264dec has NONE rank, which means that it will never be autoplugged, but you can use the trick of the environment variable GST_PLUGIN_FEATURE_RANK:

$ GST_PLUGIN_FEATURE_RANK=vah264dec:259 gst-play-1.0 ~/video.mp4

And that’s it!

Thanks.

by vjaquez at July 10, 2020 06:15 PM

July 07, 2020

Jean-François Fortin Tam: Rebuild of EvanGTGelion: Getting Things GNOME 0.4 released!

We are very proud to be announcing today the 0.4 release of Getting Things GNOME (“GTG”), codenamed “You Are (Not) Done”. This much-awaited release is a major overhaul that brings together many updates and enhancements, including new features, a modernized user interface and updated underlying technology.

Screenshot of GTG 0.4

Beyond what is featured in these summarized release notes below, GTG itself has undergone over 630 changes affecting over 500 files, and received hundreds of bug fixes, as can be seen here and here.

We are thankful to all of our supporters and contributors, past and present, who made GTG 0.4 possible. Check out the “About” dialog for a list of contributors for this release.

A summary of GTG’s development history and a high-level explanation of its renaissance can be seen in this teaser video:

A demonstration video provides a tour of GTG’s current features:

A few words about the significance of this release

As a result of the new lean & agile project direction and contributor workflow we have formalized, this release—the first in over 6.5 years—constitutes a significant milestone in the revival of this community-driven project.

This milestone represents a very significant opportunity to breathe new life into the project, which is why I made sure to completely overhaul the “contributor experience”, clarifying and simplifying the process for new contributors to make an impact.

I would highly encourage everybody to participate towards the next release. You can contribute all sorts of improvements to this project, be it bug fixes or new features, adopting one of the previous plugins, doing translation and localization work, working on documentation, or spreading the word. Your involvement is what makes this project a success.

— Jeff (yours truly)

“When I switched from Linux to macOS a few years ago, I never found a todo app that was as good as GTG, so I started to try every new shiny (and expensive) thing. I used Evernote, Todoist, Things and many others. Nothing came close. I spent the next 6 years trying every new productivity gadget in order to find the perfect combo.
In 2019, Jeff decided to take over GTG and bring it back from the grave. It’s a strange feeling to see your own creation continuing in the hands of others. Living to see your software being developed by others is quite an accomplishment. Jeff’s dedication demonstrated that, with GTG, we created a tool which can become an essential part of chaos warrior’s productivity system. A tool which is useful without being trendy, even years after it was designed. A tool that people still want to use. A tool that they can adapt and modernise. This is something incredible that can only happen with Open Source.”

— Lionel Dricot, original author of GTG (quote edited with permission)

It has been over seven years since the 0.3rd impact. This might very well be the Fourth Impact. Shinji, get in the f%?%$ing build bot!

— Gendo Ikari

Release notes

Technology Upgrades

GTG and libLarch have been fully ported to Python 3, GTK 3, and GObject introspection (PyGI).

User Interface and Frontend Improvements

General UI overhaul

The user interface has been updated to follow the current GNOME Human Interface Guidelines (HIG), style (see GH GTG PR #219 and GH GTG PR #235 for context) and design patterns:

  • Client-side window decorations using the GTK HeaderBar widget. Along with the removal of the menu bars, this saves a significant amount of space and allows for more content to be displayed on screen.
  • The Preferences dialog was redesigned, and its contents cleaned up to remove obsolete settings (see GH GTG PR #227).
  • All windows are properly parented (set as transient) with the main window, so that they can be handled better by window managers.
  • Symbolic icons are available throughout the UI.
  • Improvements to padding and borders are visible throughout the application.

Main window (“Task Browser”)

  • The menu bar has been replaced by a menu button. Non-contextual actions (for example: toggle Sidebar, Plugins, Preferences, Help, and About) have been moved to the main menu button.
  • Searching is now handled through a dedicated Search Bar that can be toggled on and off with the mouse, or the Ctrl+F keyboard shortcut.
  • The “Workview” mode has been renamed to the “Actionable” view. “Open”, “Actionable”, and “Closed” tasks view modes are available (see GH GTG PR #235).
  • An issue with sorting tasks by title in the Task Browser has been fixed: sorting is no longer case-sensitive, and now ignores tag marker characters (GH GTG issue #375).
  • Start/Due/Closed task dates now display as properly translated in the Task Browser (GH GTG issue #357)
  • In the Task Browser’s right-click context menus, more start/due dates choices are available, including common upcoming dates and a custom date picker (GH GTG issue #244).

Task Editor

  • The Calendar date picker pop-up widgets have been improved (see GH GTG PR #230).
  • The Task Editor now attempts to place newly created windows in a more logical way (GH GTG issue #287).
  • The title (first line of a task) has been changed to a neutral black header style, so that it doesn’t look like a hyperlink.

New Features

  • You can now open (or create) a task’s parent task (GH GTG issue #138).
  • You can now select multiple closed tasks and perform bulk actions on them (GH GTG issue #344).
  • It is now possible to rename or delete tags by right-clicking on them in the Task Browser.
  • You can automatically generate and assign tag colors. (LP GTG issue #644993)
  • The Quick Add entry now supports emojis 🤩
  • The Task Editor now provides a searchable “tag picker” widget.
  • The “Task Reaper” allows deleting old closed tasks for increased performance. Previously available as a plugin, it is now a built-in feature available in the Preferences dialog (GH GTG issue #222).
  • The Quick Deferral (previously, the “Do it Tomorrow” plugin) is now a built-in feature. It is now possible to defer multiple tasks at once to common upcoming days or to a custom date (GH GTG issue #244).
  • In the unlikely case where GTG might encounter a problem opening your data file, it will automatically attempt recovery from a previous backup snapshot and let you know about it (LP GTG issue #971651)

Backend and Code Quality improvements

  • Updates were made to overall code quality (GH GTG issue #237) to reduce barriers to contribution:
    • The code has been ported to use GtkApplication, resulting in simpler and more robust UI code overall.
    • GtkBuilder/Glade “.ui” files have been regrouped into one location.
    • Reorganization of various .py files for consistency.
    • The debugging/logging system has been simplified.
    • Various improvements to the test suite.
    • The codebase is mostly PEP8-compliant. We have also relaxed the PEP8 max line length convention to 100 characters for readability, because this is not the nineties anymore.
  • Support is available for Tox, for testing automation within virtualenvs (see GH GTG PR #239).
  • The application’s translatable strings have been reviewed and harmonized, to ensure the entire application is translatable (see GH GTG PR #346).
  • Application CSS has been moved to its own file (see GH GTG PR #229).
  • Outdated plugins and synchronization services have been removed (GH GTG issue #222).
  • GTG now provides an “AppData” (FreeDesktop AppStream metadata) file to properly present itself in distro-agnostic software-centers.
  • The Meson build system is now supported (see GH GTG PR #315).
    • The development version’s launch script now allows running the application with various languages/locales, using the LANG environment variable for example.
    • Appdata and desktop files are named based on the chosen Meson profile (see GH GTG PR #349).
    • Depending on the Meson profile, the HeaderBar style changes dynamically to indicate when the app is run in a dev environment, such as GNOME Builder (GH GTG issue #341).

Documentation Updates

  • The user manual has been rewritten, reorganized, and updated with new images (GH GTG issue #243). It is also now available as an online publication.
  • The contributor documentation has been rewritten to make it easier for developers to get involved and to clarify project contribution guidelines  (GH GTG issue #200). Namely, updates were made to the README.md file to clarify the set-up process for the development version, as well as numerous new guides and documentation for contributors in the docs/contributors/ folder.

Infrastructure and other notable updates

  • The entire GTG GNOME wiki site has been updated (GH GTG issue #200), broken links have been fixed, references to the old website have been removed.
  • We have migrated from LaunchPad to GitHub (and eventually GitLab), so references to LaunchPad have been removed.
  • We now have social media accounts on Mastodon and Twitter (GH GTG issue #294).
  • Flatpak packages on Flathub are going to be our official direct upstream-to-user software distribution mechanism (GH GTG issue #233).

Notice

In order to bring this release out the door, some plugins have been disabled and are awaiting adoption by new contributors to test and maintain them. Please contribute to maintain your favorite plugin. Likewise, we had to remove the DBus module (and would welcome help to bring it back into a better shape, for those who want to control the app via DBus).


Getting and installing GTG 0.4

We hope to have our flatpak package ready in time for this announcement, or shortly afterwards. See the install page for details.


Spreading this announcement

We have made some social postings on Twitter, on Mastodon and on LinkedIn that you can re-share/retweet/boost. Please feel free to link to this announcement on forums and blogs as well!

The post Rebuild of EvanGTGelion: Getting Things GNOME 0.4 released! appeared first on The Open Sourcerer.

by Jeff at July 07, 2020 12:00 PM

July 06, 2020

GStreamer: GStreamer Rust bindings 0.16.0 release

(GStreamer)

A new version of the GStreamer Rust bindings, 0.16.0, was released.

As usual this release follows the latest gtk-rs release.

This is the first version that includes optional support for new GStreamer 1.18 APIs. As GStreamer 1.18 was not released yet, these new APIs might still change. The minimum supported version of the bindings is still GStreamer 1.8, and the targeted GStreamer API version can be selected by applications via feature flags.

Apart from this, the new version mostly features API cleanups and the addition of a few missing APIs. The focus of this release was to make usage of GStreamer from Rust as convenient and complete as possible.

The new release also brings a lot of bugfixes, most of which were already part of the 0.15.x bugfix releases.

A new release of the GStreamer Rust plugins will follow in the next days.

Details can be found in the release notes for gstreamer-rs and gstreamer-rs-sys.

The code and documentation for the bindings is available on the freedesktop.org GitLab, as well as on crates.io.

If you find any bugs, notice any missing features or other issues please report them in GitLab.

July 06, 2020 02:00 PM

GStreamer: GStreamer 1.17.2 unstable development release

(GStreamer)

The GStreamer team is pleased to announce the second development release in the unstable 1.17 release series.

The unstable 1.17 release series adds new features on top of the current stable 1.16 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

The unstable 1.17 release series is for testing and development purposes in the lead-up to the stable 1.18 series, which is scheduled for release in a few weeks' time. Any newly-added API can still change until that point, although it is rare for that to happen.

Full release notes will be provided in the near future, highlighting all the new features, bugfixes, performance optimizations and other important changes.

The autotools build has been dropped entirely for this release, so it's finally all Meson from here on.

This development release is primarily for distributors and early adopters and anyone who still needs to update their build/packaging setup for Meson.

On the documentation front we have switched away from gtk-doc to hotdoc, but we now provide a release tarball of the built documentation in html and devhelp format, and we recommend distributors switch to that and provide a single gstreamer documentation package in future. Packagers will not need to use hotdoc themselves.

Instead of a gst-validate tarball we now ship a gst-devtools tarball, and the gstreamer-editing-services tarball has been renamed to gst-editing-services for consistency with the module name in Gitlab.

Packagers: please note that plugins may have moved between modules, so please take extra care and make sure inter-module version dependencies are such that users can only upgrade all modules in one go, instead of seeing a mix of 1.17 and 1.16 on their system.

Binaries for Android, iOS, Mac OS X and Windows are also available at the usual location.

Release tarballs can be downloaded directly here:

As always, please let us know of any issues you run into by filing an issue in Gitlab.

July 06, 2020 01:00 PM

July 02, 2020

Phil Normand: Web-augmented graphics overlay broadcasting with WPE and GStreamer

(Phil Normand)

Graphics overlays are everywhere nowadays in the live video broadcasting industry. In this post I introduce a new demo relying on GStreamer and WPEWebKit to deliver low-latency web-augmented video broadcasts.

Readers of this blog might remember a few posts about WPEWebKit and a GStreamer element we at Igalia worked on …

by Philippe Normand at July 02, 2020 01:00 PM

June 28, 2020

Sebastian Pölsterl: scikit-survival 0.13 Released

Today, I released version 0.13.0 of scikit-survival. Most notably, this release adds sksurv.metrics.brier_score and sksurv.metrics.integrated_brier_score, an updated PEP 517/518 compatible build system, and support for scikit-learn 0.23.

For a full list of changes in scikit-survival 0.13.0, please see the release notes.

Pre-built conda packages are available for Linux, macOS, and Windows via

 conda install -c sebp scikit-survival

Alternatively, scikit-survival can be installed from source following these instructions.

The time-dependent Brier score

The time-dependent Brier score is an extension of the mean squared error to right censored data:

$$ \mathrm{BS}^c(t) = \frac{1}{n} \sum_{i=1}^n I(y_i \leq t \land \delta_i = 1) \frac{(0 - \hat{\pi}(t | \mathbf{x}_i))^2}{\hat{G}(y_i)} + I(y_i > t) \frac{(1 - \hat{\pi}(t | \mathbf{x}_i))^2}{\hat{G}(t)} , $$

where $\hat{\pi}(t | \mathbf{x})$ is a model’s predicted probability of remaining event-free up to time point $t$ for feature vector $\mathbf{x}$, and $1/\hat{G}(t)$ is an inverse probability of censoring weight.
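To make the formula above concrete, here is a minimal numpy sketch of $\mathrm{BS}^c(t)$ for a single time point; the function and argument names (brier_score_at, pi_t, G) are hypothetical and simply mirror the symbols in the formula, with $\hat{G}$ passed in as a callable:

import numpy as np

def brier_score_at(t, y_time, y_event, pi_t, G):
    """Hypothetical direct transcription of the time-dependent Brier score at time t.

    y_time, y_event: observed times and event indicators (True = event observed).
    pi_t: predicted probability of remaining event-free up to time t, one per sample.
    G: estimated censoring survival function (a callable), giving the IPCW weight 1/G(.).
    """
    scores = np.zeros_like(pi_t, dtype=float)   # samples censored before t contribute 0
    had_event = (y_time <= t) & y_event         # first indicator in the sum
    at_risk = y_time > t                        # second indicator in the sum
    scores[had_event] = (0.0 - pi_t[had_event]) ** 2 / G(y_time[had_event])
    scores[at_risk] = (1.0 - pi_t[at_risk]) ** 2 / G(t)
    return scores.mean()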

The Brier score is often used to assess calibration. If a model predicts a 10% risk of experiencing an event at time $t$, the observed frequency in the data should match this percentage for a well calibrated model. In addition, the Brier score is also a measure of discrimination: whether a model is able to predict risk scores that allow us to correctly determine the order of events. The concordance index is probably the most common measure of discrimination. However, the concordance index disregards the actual values of predicted risk scores – it is a ranking metric – and is unable to tell us anything about calibration.

Let’s consider an example based on data from the German Breast Cancer Study Group 2.

from sksurv.datasets import load_gbsg2
from sksurv.preprocessing import encode_categorical
from sklearn.model_selection import train_test_split
X, y = load_gbsg2()
X = encode_categorical(X)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y["cens"], random_state=1)

We want to train a model on the training data and assess its discrimination and calibration on the test data. Here, we consider a Random Survival Forest and Cox’s proportional hazards model with elastic-net penalty.

from sksurv.ensemble import RandomSurvivalForest
from sksurv.linear_model import CoxnetSurvivalAnalysis
rsf = RandomSurvivalForest(max_depth=2, random_state=1)
rsf.fit(X_train, y_train)
cph = CoxnetSurvivalAnalysis(l1_ratio=0.99, fit_baseline_model=True)
cph.fit(X_train, y_train)

First, let’s start with discrimination as measured by the concordance index.

rsf_c = rsf.score(X_test, y_test)
cph_c = cph.score(X_test, y_test)

The result indicates that both models perform equally well, achieving a concordance index of 0.688, which is significantly better than a random model with a concordance index of 0.5. Unfortunately, it doesn’t help us to decide which model we should choose. So let’s consider the time-dependent Brier score as an alternative, which assesses discrimination and calibration.

We first need to determine the time points $t$ at which we want to compute the Brier score. We are going to use a data-driven approach here by selecting all time points between the 10% and 90% percentile of observed time points.

import numpy as np
lower, upper = np.percentile(y["time"], [10, 90])
times = np.arange(lower, upper + 1)

This gives us 1690 time points, at which we need to estimate the probability of survival, which is given by the survival function. Thus, we iterate over the predicted survival functions on the test data and evaluate each at the time points from above.

rsf_surv_prob = np.row_stack([
    fn(times)
    for fn in rsf.predict_survival_function(X_test, return_array=False)
])
cph_surv_prob = np.row_stack([
    fn(times)
    for fn in cph.predict_survival_function(X_test)
])

Note that calling predict_survival_function for RandomSurvivalForest with return_array=False requires scikit-survival 0.13.

In addition, we want to have a baseline to tell us how much better our models are than random. A random model would simply predict 0.5 every time.

random_surv_prob = 0.5 * np.ones((y_test.shape[0], times.shape[0]))

Another useful reference is the Kaplan-Meier estimator, which does not consider any features: it estimates a survival function only from y_test. We replicate this estimate for all samples in the test data.

from sksurv.functions import StepFunction
from sksurv.nonparametric import kaplan_meier_estimator
km_func = StepFunction(*kaplan_meier_estimator(y_test["cens"], y_test["time"]))
km_surv_prob = np.tile(km_func(times), (y_test.shape[0], 1))

Instead of comparing calibration across all 1690 time points, we’ll be using the integrated Brier score (IBS) over all time points, which will give us a single number to compare the models by.

from sksurv.metrics import integrated_brier_score
random_brier = integrated_brier_score(y, y_test, random_surv_prob, times)
km_brier = integrated_brier_score(y, y_test, km_surv_prob, times)
rsf_brier = integrated_brier_score(y, y_test, rsf_surv_prob, times)
cph_brier = integrated_brier_score(y, y_test, cph_surv_prob, times)

The results are summarized in the table below:

              RSF      Coxnet   Random   Kaplan-Meier
c-index       0.688    0.688    0.500
IBS           0.194    0.188    0.247    0.217

Despite Random Survival Forest and Cox’s proportional hazards model performing equally well in terms of discrimination, there seems to be a notable difference in terms of calibration, with Cox’s proportional hazards model outperforming Random Survival Forest.
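
If we want to know where along the follow-up time the two models differ, we can also look at the full time-dependent Brier score curve instead of its integral, using the newly added sksurv.metrics.brier_score. Below is a small sketch reusing the variables defined above; the plotting details are my own addition, not part of the original example.

from sksurv.metrics import brier_score
import matplotlib.pyplot as plt

# Brier score evaluated at every time point in `times`
times_bs, rsf_bs = brier_score(y, y_test, rsf_surv_prob, times)
_, cph_bs = brier_score(y, y_test, cph_surv_prob, times)

plt.plot(times_bs, rsf_bs, label="RSF")
plt.plot(times_bs, cph_bs, label="Coxnet")
plt.xlabel("time")
plt.ylabel("time-dependent Brier score")
plt.legend()
plt.show()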

As a final note, I want to clarify that the Brier score is only applicable for models that are able to estimate a survival function. Hence, it currently cannot be used with Survival Support Vector Machines.

June 28, 2020 04:02 PM

June 22, 2020

Michael SheldonQt QML Maps – Using the OSM plugin with API keys

(Michael Sheldon)

For a recent side-project I’ve been working on (a cycle computer for UBPorts phones) I found that when using the QtLocation Map QML element, nearly all the map types provided by the OSM plugin (besides the basic streetmap type) require an API key from Thunderforest. Unfortunately, there doesn’t appear to be a documented way of supplying an API key to the plugin, and the handful of forum posts and Stack Overflow questions on the topic are either unanswered or answered by people believing that it’s not possible. It’s not obvious, but after a bit of digging into the way the OSM plugin works I’ve discovered a mechanism by which an API key can be supplied to tile servers that require one.

When the OSM plugin is initialised it communicates with the Qt providers repository which tells it what URLs to use for each map type. The location of the providers repository can be customised through the osm.mapping.providersrepository.address OSM plugin property, so all we need to do to use our API key is to set up our own providers repository with URLs that include our API key as a parameter. The repository itself is just a collection of JSON files, with specific names (cycle, cycle-hires, hiking, hiking-hires, night-transit, night-transit-hires, satellite, street, street-hires, terrain, terrain-hires, transit, transit-hires) each corresponding to a map type. The *-hires files provide URLs for tiles at twice the normal resolution, for high DPI displays.

For example, this is the cycle file served by the default Qt providers repository:

{
    "UrlTemplate" : "http://a.tile.thunderforest.com/cycle/%z/%x/%y.png",
    "ImageFormat" : "png",
    "QImageFormat" : "Indexed8",
    "ID" : "thf-cycle",
    "MaximumZoomLevel" : 20,
    "MapCopyRight" : "<a href='http://www.thunderforest.com/'>Thunderforest</a>",
    "DataCopyRight" : "<a href='http://www.openstreetmap.org/copyright'>OpenStreetMap</a> contributors"
}

To provide an API key with our tile requests we can simply modify the UrlTemplate:

    "UrlTemplate" : "http://a.tile.thunderforest.com/cycle/%z/%x/%y.png?apikey=YOUR_API_KEY",

Automatic repository setup

I’ve created a simple tool for setting up a complete repository using a custom API key here: https://github.com/Elleo/qt-osm-map-providers

  1. First obtain an API key from https://www.thunderforest.com/docs/apikeys/
  2. Next clone my repository: git clone https://github.com/Elleo/qt-osm-map-providers.git
  3. Run: ./set_api_keys.sh your_api_key (replacing your_api_key with the key you obtained in step 1)
  4. Copy the files from this repository to your webserver (e.g. http://www.mywebsite.com/osm_repository)
  5. Set the osm.mapping.providersrepository.address property to point to the location setup in step 4 (see the QML example below)

QML Example

Here’s a quick example QML app that will make use of the custom repository we’ve set up:

import QtQuick 2.7
import QtQuick.Controls 2.5
import QtLocation 5.10

ApplicationWindow {

    title: qsTr("Map Example")
    width: 1280
    height: 720

    Map {
        anchors.fill: parent
        zoomLevel: 14
        plugin: Plugin {
            name: "osm"
            PluginParameter { name: "osm.mapping.providersrepository.address"; value: "http://www.mywebsite.com/osm_repository" }
            PluginParameter { name: "osm.mapping.highdpi_tiles"; value: true }
        }
        activeMapType: supportedMapTypes[1] // Cycle map provided by Thunderforest
    }
    
}

by Mike at June 22, 2020 04:18 PM

GStreamerGStreamer 1.17.1 unstable development release

(GStreamer)

The GStreamer team is pleased to announce the first development release in the unstable 1.17 release series.

The unstable 1.17 release series adds new features on top of the current stable 1.16 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

The unstable 1.17 release series is for testing and development purposes in the lead-up to the stable 1.18 series which is scheduled for release in a few weeks time. Any newly-added API can still change until that point, although it is rare for that to happen.

Full release notes will be provided in the near future, highlighting all the new features, bugfixes, performance optimizations and other important changes.

The autotools build has been dropped entirely for this release, so it's finally all Meson from here on.

This development release is primarily for distributors and early adopters and anyone who still needs to update their build/packaging setup for Meson.

On the documentation front we have switched away from gtk-doc to hotdoc, but we now provide a release tarball of the built documentation in html and devhelp format, and we recommend distributors switch to that and provide a single gstreamer documentation package in future.

Instead of a gst-validate tarball we now ship a gst-devtools tarball, and the gstreamer-editing-services tarball has been renamed to gst-editing-services for consistency with the module name in Gitlab.

Packagers: please note that plugins may have moved between modules, so please take extra care and make sure inter-module version dependencies are such that users can only upgrade all modules in one go, instead of seeing a mix of 1.17 and 1.16 on their system.

Binaries for Android, iOS, Mac OS X and Windows are also available at the usual location.

Release tarballs can be downloaded directly here:

As always, please let us know of any issues you run into by filing an issue in Gitlab.

June 22, 2020 02:30 PM

June 16, 2020

Víctor JáquezWebKit Flatpak SDK and gst-build

This post is an annex of Phil’s Introducing the WebKit Flatpak SDK. Please make sure to read it, if you haven’t already.

Recapitulating, nowadays WebKitGtk/WPE developers —and their CI infrastructure— are moving towards a Flatpak-based environment for their workflow. This Flatpak-based environment, or Flatpak SDK for short, can be visualized as a sandboxed software container which bundles all the dependencies required to compile, run and debug WebKitGtk/WPE.

In day-to-day work, this approach avoids the potential compilation of the world in order to obtain reproducible builds, improving the development and testing workflow.

But what if you are also involved in the development of one dependency?

This is the case of Igalia’s multimedia team where, besides developing the multimedia features for WebKitGtk and WPE, we also participate in the GStreamer development, the framework used for multimedia.

Because of this, in our workflow we usually need to build WebKit with a fix, hack or new feature in GStreamer. Is it possible to use our custom GStreamer build inside Flatpak without messing up its own GStreamer setup? Yes, it’s possible.

gst-build is a set of Python scripts which clone the GStreamer repositories, compile them and set up an uninstalled environment. This uninstalled environment allows a transient usage of the compiled framework from its build tree, avoiding installation and further mess with our system.

The WebKit scripts that wrap Flatpak operations are also capable of handling the gst-build scripts to build GStreamer inside the container, and, when running WebKit’s artifacts, the scripts enable the mentioned uninstalled environment, overriding Flatpak’s GStreamer.

How do we unveil all this magic?

First of all, set up a gst-build installation as documented. This installation is where the GStreamer plumbing is done.

Later, gst-build operations through WebKit compilation scripts are enabled when the environment variable GST_BUILD_PATH is exported. This variable should point to the directory where the gst-build tree is placed.

And that’s all!

But let’s put these words in actual commands. The following workflow assumes that WebKit repository is cloned in ~/WebKit and the gst-build tree is in ~/gst-build (please, excuse my bashisms).

Compiling WebKitGtk with symbols, using LLVM as toolchain (this command will also compile GStreamer):

$ cd ~/WebKit
$ CC=clang CXX=clang++ GST_BUILD_PATH=/home/vjaquez/gst-build Tools/Scripts/build-webkit --gtk --debug
...

Running the generated minibrowser (remember that GST_BUILD_PATH is required again for correct linking):

$ GST_BUILD_PATH=/home/vjaquez/gst-build Tools/Scripts/run-minibrowser --gtk --debug
...

Running media layout tests:

$ GST_BUILD_PATH=/home/vjaquez/gst-build ./Tools/Scripts/run-webkit-tests --gtk --debug media

But wait! There’s more...

What if I want to parametrize the GStreamer compilation? Say, I would like to enable a GStreamer module or disable the build of a specific element.

gst-build, as the rest of the GStreamer modules, uses the Meson build system, so it’s possible to pass arguments to Meson through the environment variable GST_BUILD_ARGS.

For example, I would like to enable gstreamer-vaapi 😇

$ cd ~/WebKit
$ CC=clang CXX=clang++ GST_BUILD_PATH=/home/vjaquez/gst-build GST_BUILD_ARGS="-Dvaapi=enabled" Tools/Scripts/build-webkit --gtk --debug
...

by vjaquez at June 16, 2020 11:49 AM

June 13, 2020

Phil NormandSetting up Debian containers on Fedora Silverblue

(Phil Normand)

After almost 20 years using Debian, I am trying something different, Fedora Silverblue. However for work I still need to use Debian/Ubuntu from time to time. In this post I am explaining the steps to setup Debian containers on Silverblue.

By default Silverblue comes with Toolbox which perfectly integrates …

by Philippe Normand at June 13, 2020 11:50 AM

June 11, 2020

Jean-François Fortin TamRevival of GTG, status update #2: git ready to test!

As a follow-up to my first global project situation update, I am happy to report great progress towards the successful revival of the GTG project.

You can see that in this fancy-pants teaser trailer (featuring epic music, big explosions and special effects), or this short status update video that also includes the trailer in it:

We’re getting really, really close. Here are some good recent news:

  1. We are seriously running out of bugs for what will become the 0.4 release.
    • I have tried pretty hard to break the git version of GTG and Diego has kept fixing issues faster than I could find new bugs 🤔 At this point, it seems to be quite robust and safe to use, so I need you to test it like maniacs.
    • The rest of the tickets in the issue tracker are all feature requests or non-critical issues that can wait for future releases (such as performance optimizations).
  2. Recently I have completed the reorganization and rewriting of the contributors documentation for the GTG project. Please take the time (7 to 9 minutes) to read that blog post.
  3. Thanks to Danielle Vansia’s diligent work, the effort to update and reorganize the user manual is well underway. I believe more great work is yet to come on that front. In addition to viewing it with Yelp, you can also read it online here.
  4. Thanks to Mart Raudsepp’s invaluable help, GTG now supports the highly-popular Meson build system.
    • That also means GNOME Builder can now build & run GTG directly.
    • You can still run GTG manually like before with the “launch.sh” script; simply ensure you have the “meson” package installed before doing so.
    • Unlike the previous launch script, this also facilitates translation work now as it automatically compiles the translation files and also supports running the development version with a language environment variable, such as “LANG=fr_CA.UTF8 ./launch.sh“!
  5. I have spent a couple of days (including a 9-10 hour nonstop coding session) reworking the code to harmonize, improve and deduplicate translatable strings, and redo the whole French translation (now with more chocolatine) and bring it to 100% completion as a way to test and ensure everything in the UI that can possibly be translated is, indeed, translated (barring one strange bug). I can assure you, the fact that I did not eat for over 53 hours in a row was a mere coïncidence.
  6. We have a Twitter account and a Mastodon account now. Go nuts.
  7. We are supposed to be migrating from GitHub to GNOME’s GitLab instance eventually. We’ll need help.

In preparation for the upcoming 0.4 release, I have also made a new release of libLarch, 3.0.1. This is a picture of me making that libLarch release:

Call for contributors
(testers, hackers, translators, packagers)

Now is a great time to get involved, whether with code, translations, or pre-packaging.

  • Considering that I’ve run out of bugs to report, I want you to start testing GTG’s git version now, and report bugs in GitHub².
    • See the read-me for tips on how to build and run the Git version, or see the footnotes below regarding our flatpak packages¹
    • If nobody finds serious issues, then we can assume our code is “perfectly stable” and would be ready to make a release “any day now”… well, I still have to research and write release notes before that happens (wanna help?), however.
  • If you are a GNOME translator, now is your call to review and update your translations if you want to squeeze them in before the release scheduled to happen (which, barring the absence of new showstopper bugs, should happen within weeks at most).
    • Yes, I know this isn’t much of an “advance notice” at all, but we live in special times this cycle;
    • Also yes, I know the project is on GitHub instead of GNOME’s gitlab, but I haven’t got the skills and time to fix that myself. I’ll accept translation files thrown at me by email if that makes it any easier. If you are working on the translation for a particular language, you should let others know through this ticket.
  • If you are a Linux package maintainer who wants to be able to offer GTG 0.4 and libLarch 3.0.1 “from day one” or as an update in your distro, you may want to start preparations for packaging this release, considering that GTG and libLarch no longer depend on Python 2 nor GTK 2…

Adopt a puppy plugin!

In order to be able to move fast towards 0.4 without being tied to single-handedly fixing “everything”, we’ve had to split the plugins (and data/synchronization backends) into a couple of categories: those that we can easily fix, those that are no longer relevant and those that are broken but “probably interesting to some users, while not mission-critical”. Those that were not trivial to fix have been deactivated (moved to the “unmaintained” subfolder)—at least until someone new (you?) cares enough about a particular feature to come fix and maintain it. Adopt a puppy today!

Alternate “backends” are particularly affected by this, as the only backend we’ve left enabled is the default “local storage” backend.

If you care about GTG integrating with Evolution, GNote or Tomboy (Tomboy-NG?), LaunchPad, Mantis, Bugzilla, Hamster, and Remember the Milk (that one seemed like a pretty popular backend), then please step up to contribute fixes and maintainership for your favorite plugin/backend. Otherwise, it will most likely stay deactivated.

You can see the issues related to plugins/backends here.


Footnotes

  1. All development infrastructure has been moved to GitHub; we will be decommissioning everything in LaunchPad (to the extent that it is possible to just “disable” things?) as soon as 0.4 comes out. We are supposed to be migrating from GitHub to GNOME’s GitLab instance next.
  2. One thing that is expected to be particularly important to release 0.4 to a wider audience is offering Flatpak packages. We’re mostly ready for this (see this ticket) but it might take a few more days before we can figure out how to have the nightly/dev package officially published as a flatpak (on Flathub, for example) before the 0.4 release package.
    If you don’t want to wait for that, and want a temporary flatpak to try the git version “now!!”, you can go in this folder, download the flatpak file that is sitting there and run: flatpak install -y --user gtg-git-2020-06-11.flatpak (for example). To uninstall it when you want to switch to a more official flatpak later on, do: flatpak uninstall --user org.gnome.GTGDevel; and if you have ideas on how to improve that flatpak package, feel free to help out in the ticket mentioned above.

P.s.: You might think grabbing a random package file from some obscure folder listing on a website is a bit reminiscent of Windows, and therefore by now you would inevitably be asking, “So where’s the keygen!?!”, but since there isn’t any, I would instead recommend you listen to the piece of music below to have the whole “install apps obtained from a website mentioned in a random post” experience!

                                   
   mmm        mmmmmmm          mmm 
 m"   "          #           m"   "
 #   mm          #           #   mm
 #    #          #           #    #
  "mmm"          #            "mmm"
                                   
                                   
Packaged by Diego.
Greetz to Bilal and the Flathub Team!!

“Against the Time”, by the ORiON group… Because that’s how software was installed back then.

The post Revival of GTG, status update #2: git ready to test! appeared first on The Open Sourcerer.

by Jeff at June 11, 2020 08:24 PM

June 09, 2020

Phil NormandWebKitGTK and WPE now supporting videos in the img tag

(Phil Normand)

Using videos in the <img> HTML tag can lead to more responsive web-page loads in most cases. Colin Bendell blogged about this topic, make sure to read his post on the cloudinary website. As it turns out, this feature has been supported for more than 2 years in Safari, but …

by Philippe Normand at June 09, 2020 04:00 PM

June 08, 2020

Phil NormandIntroducing the WebKit Flatpak SDK

(Phil Normand)

Working on a web-engine often requires a complex build infrastructure. This post documents our transition from JHBuild to Flatpak for the WebKitGTK and WPEWebKit development builds.

For the last 10 years, WebKitGTK has been relying on a custom JHBuild moduleset to handle its dependencies and (try to) ensure a reproducible …

by Philippe Normand at June 08, 2020 04:50 PM

June 03, 2020

Andy Wingoa baseline compiler for guile

(Andy Wingo)

Greets, my peeps! Today's article is on a new compiler for Guile. I made things better by making things worse!

The new compiler is a "baseline compiler", in the spirit of what modern web browsers use to get things running quickly. It is a very simple compiler whose goal is speed of compilation, not speed of generated code.

Honestly I didn't think Guile needed such a thing. Guile's distribution model isn't like the web, where every page you visit requires the browser to compile fresh hot mess; in Guile I thought it would be reasonable for someone to compile once and run many times. I was never happy with compile latency but I thought it was inevitable and anyway amortized over time. Turns out I was wrong on both points!

The straw that broke the camel's back was Guix, which defines the graph of all installable packages in an operating system using Scheme code. Lately it has been apparent that when you update the set of available packages via a "guix pull", Guix would spend too much time compiling the Scheme modules that contain the package graph.

The funny thing is that it's not important that the package definitions be optimized; they just need to be compiled in a basic way so that they are quick to load. This is the essential use-case for a baseline compiler: instead of trying to make an optimizing compiler go fast by turning off all the optimizations, just write a different compiler that goes from a high-level intermediate representation straight to code.

So that's what I did!

it don't do much

The baseline compiler skips any kind of flow analysis: there's no closure optimization, no contification, no unboxing of tagged numbers, no type inference, no control-flow optimizations, and so on. The only whole-program analysis that is done is a basic free-variables analysis so that closures can capture variables, as well as assignment conversion. Otherwise the baseline compiler just does a traversal over programs as terms of a simple tree intermediate language, emitting bytecode as it goes.
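
To make the idea concrete, here is a toy sketch (in Python, and in no way Guile's actual code) of what such a single traversal looks like: each node of a small tree IR is visited exactly once and bytecode-like instructions are appended as we go, with no analysis or optimization passes in between.

def compile_expr(node, code):
    """Toy baseline compiler: one recursive walk, emit as you go."""
    kind = node[0]
    if kind == "const":                       # ("const", value)
        code.append(("push-const", node[1]))
    elif kind == "ref":                       # ("ref", name)
        code.append(("push-local", node[1]))
    elif kind == "call":                      # ("call", fn, args)
        for arg in node[2]:
            compile_expr(arg, code)
        compile_expr(node[1], code)
        code.append(("call", len(node[2])))
    elif kind == "if":                        # ("if", test, then, else)
        compile_expr(node[1], code)
        branch_at = len(code)
        code.append(None)                     # placeholder, patched below
        compile_expr(node[2], code)
        jump_at = len(code)
        code.append(None)
        code[branch_at] = ("branch-if-false", len(code))
        compile_expr(node[3], code)
        code[jump_at] = ("jump", len(code))
    return code

# Compile the equivalent of (if x (f 1) 2)
print(compile_expr(("if", ("ref", "x"),
                    ("call", ("ref", "f"), [("const", 1)]),
                    ("const", 2)), []))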

Interestingly the quality of the code produced at optimization level -O0 is pretty much the same.

This graph shows generated code performance of the CPS compiler relative to new baseline compiler, at optimization level 0. Bars below the line mean the CPS compiler produces slower code. Bars above mean CPS makes faster code. You can click and zoom in for details. Note that the Y axis is logarithmic.

The tests in which -O0 CPS wins are mostly because the CPS-based compiler does a robust closure optimization pass that reduces allocation rate.

At optimization level -O1, which adds partial evaluation over the high-level tree intermediate language and support for inlining "primitive calls" like + and so on, I am not sure why CPS peels out in the lead. No additional important optimizations are enabled in CPS at that level. That's probably something to look into.

Note that the baseline of this graph is optimization level -O1, with the new baseline compiler.

But as I mentioned, I didn't write the baseline compiler to produce fast code; I wrote it to produce code fast. So does it actually go fast?

Well against the -O0 and -O1 configurations of the CPS compiler, it does excellently:

Here you can see comparisons between what will be Guile 3.0.3's -O0 and -O1, compared against their equivalents in 3.0.2. (In 3.0.2 the -O1 equivalent is actually -O1 -Oresolve-primitives, if you are following along at home.) What you can see is that at these optimization levels, for these 8 files, the baseline compiler is around 4 times as fast.

If we compare to Guile 3.0.3's default -O2 optimization level, or -O3, we see bigger disparities:

Which is to say that Guile's baseline compiler runs at about 10x the speed of its optimizing compiler, which incidentally is similar to what I found for WebAssembly compilers a while back.

Also of note is that -O0 and -O1 take essentially the same time, with -O1 often taking less time than -O0. This is because partial evaluation can make the program smaller, at a cost of being less straightforward to debug.

Similarly, -O3 usually takes less time than -O2. This is because -O3 is allowed to assume top-level bindings that aren't exported from a module can be transformed to lexical bindings, which are more available for contification and inlining, which usually leads to smaller programs; it is a similar debugging/performance tradeoff to the -O0/-O1 case.

But what does one gain when choosing to spend 10 times more on compilation? Here I have a gnarly graph that plots performance on some microbenchmarks for all the different optimization levels.

Like I said, it's gnarly, but the summary is that -O1 typically gets you a factor of 2 or 4 over -O0, and -O2 often gets you another factor of 2 above that. -O3 is mostly the same as -O2 except in magical circumstances like the mbrot case, where it adds an extra 16x or so over -O2.

worse is better

I haven't seen the numbers yet of this new compiler in Guix, but I hope it can have a good impact. Already in Guile itself though I've seen a couple interesting advantages.

One is that because it produces code faster, Guile's bootstrap from source can take less time. There is also a felicitous feedback effect in that because the baseline compiler is much smaller than the CPS compiler, it takes less time to macro-expand, which reduces bootstrap time (as bootstrap has to pay the cost of expanding the compiler, until the compiler is compiled).

The second fortunate result is that now I can use the baseline compiler as an oracle for the CPS compiler, when I'm working on new optimizations. There's nothing worse than suspecting that your compiler miscompiled itself, after all, and having a second compiler helps keep me sane.

stay safe, friends

The code, you ask? Voici.

Although this work has been ongoing throughout the past month, I need to add some words on the now before leaving you: there is a kind of cognitive dissonance between nerding out on compilers in the comfort of my home, rain pounding on the patio, and at the same time the world on righteous fire. I hope it is clear to everyone by now that the US police are an essentially racist institution: they harass, maim, and murder Black people at much higher rates than whites. My heart is with the protestors. Godspeed to you all, from afar. At the same time, all my non-Black readers should reflect on the ways they participate in systems that support white supremacy, and on strategies to tear them down. I know I will be. Stay safe, wear eye protection, and until next time: peace.

by Andy Wingo at June 03, 2020 08:39 PM

May 29, 2020

May 28, 2020

Christian SchallerInto the world of Robo vacums and Robo mops

(Christian Schaller)

So this is a blog post not related to Fedora or Red Hat, but rather my personal experience with getting a robo vacuum and robo mop into the house.

So about two months ago my wife and I decided to get a robo vacuum while shopping at Costco (a US wholesaler outfit). So we brought home the iRobot Roomba 980. Over the next week we ended up also getting the newer iRobot Roomba i7+ and the iRobot Braava m6 mopping robot. Our dream was that we would never have to vacuum or mop again, instead leaving that to our new robots to handle. With two little kids, being able to cut that work from our todo list seemed like a dream come true.

I feel that whenever you get into a new technology it takes some time with your first product in that category to understand what questions to ask and what considerations to make. For instance I feel a lot more informed and confident in my knowledge about electric cars having owned a Nissan Leaf for a few years now (enough to wish I had a Tesla instead, for instance :). I guess our experience with robot vacuums here is similar.

Anyway, if you are considering buying a robot vacuum or mop, I think the first lesson we learned is that it is definitely not a magic solution. You have to prepare your house quite a bit before each run, from obvious things like tidying up anything on the floor, like the kids’ legos etc., to discovering that certain furniture, like the IKEA Poang chairs, are mortal enemies with your robo vacuum. We had to put our chair on top of the sofa as the Roomba would get stuck on it every time we tried to vacuum the floor. Also the door mat in front of our entrance door kept having its corners sucked into the vacuum, getting it stuck. Anyway, our lesson learned is that vacuuming (or mopping) is not something we can do on an impulse or easily on a schedule, as it takes quite a bit of preparation. If you don’t have small kids leaving random stuff all over the house all the time you might be able to just set the vacuum on a schedule, but for us that has turned out to be a big no :). So in practice we only vacuum at night now when my wife and I have had time to prep the house after the kids have gone to bed.

It is worth noting that we only have one vacuum now. We got the i7+ after we got the 980 due to realizing that the 980 didn’t have features like the smart map allowing you to, for instance, vacuum specific rooms. It also had other niceties like self emptying and it was supposed to be more quiet (which is nice when you run it at night). However in our experience it also had a less strong vacuum, so we felt it left more crap on the floor than the older 980 model. So in the end we returned the i7+ in favour of the 980, just because we felt it did a better job at vacuuming. It is quite loud though, so we can hear it very loud and clear up on the second floor while trying to fall asleep. So if you need a quiet house to sleep, this setup is not for you.

Another lesson we learned is that the vacuums or mops do not work great in darkness, so we now have to leave the light on downstairs at night when we want to vacuum or mop the floor. We should be able to automate that using Google Home, so Google Home could turn on the lights, start the vacuum and then once done, turn off the lights again. We haven’t actually gotten around to test that yet though.

As for the mop, I would say that it is not a replacement for mopping yourself, but it can reduce the frequency of you mopping yourself and thus help maintain a nice clean floor for longer after you have done a full manual mop yourself. Also the m6 is super sensitive to edges, which I assume is to avoid it trying to mop your rugs and mats, but it also means that it can not traverse even small thresholds. So for us who have small thresholds between our kitchen area and the rest of the house we have to carry the mop over the thresholds and mop the rest of the first floor as a separate action, which is a bit of an annoyance now that we are running these things at night. That said the kitchen is the one room which needs mopping more regularly, so in some sense the current setup where the Roomba vacuums the whole first floor and the Braava mop mops just the kitchen is a workable solution for us. One nice feature here is that they can be set up to run in order, so the mop will only start once the vacuum is done (that feature is the main reason we haven’t tested out other brand mops which might handle the threshold situation better).

So to conclude, would I recommend robot vacuums and robot mops to other parents with young kids? I would say yes, it has definitely helped us keep the house cleaner and nicer and let us spend less time cleaning the house. But it is not a miracle cure in any way or form; it still takes time and effort to prepare and set up the house, and sometimes you still need to do especially the mopping yourself to get things really clean. As for the question of iRobot versus other brands I have no input as I haven’t really tested any other brands. iRobot is a local company so their vacuums are available in a lot of stores around me and I drive by their HQ on a regular basis, so that is the more or less random reason I ended up with their products as opposed to competing ones.

by uraeus at May 28, 2020 04:37 PM

May 17, 2020

Sebastian PölsterlSurvival Analysis for Deep Learning Tutorial for TensorFlow 2

A while back, I posted the Survival Analysis for Deep Learning tutorial. This tutorial was written for TensorFlow 1 using the tf.estimators API. The changes between version 1 and the current TensorFlow 2 are quite significant, which is why the code does not run when using a recent TensorFlow version. Therefore, I created a new version of the tutorial that is compatible with TensorFlow 2. The text is basically identical, but the training and evaluation procedure changed.

The complete notebook is available on GitHub, or you can run it directly using Google Colaboratory.

Notes on porting to TensorFlow 2

A nice feature of TensorFlow 2 is that in order to write custom metrics (such as concordance index) for TensorBoard, you don’t need to create a Summary protocol buffer manually; instead, it suffices to call tf.summary.scalar and pass it a name and a float. So instead of

from sksurv.metrics import concordance_index_censored
from tensorflow.core.framework import summary_pb2
c_index_metric = concordance_index_censored(…)[0]
writer = tf.summary.FileWriterCache.get(output_dir)
buf = summary_pb2.Summary(value=[summary_pb2.Summary.Value(
    tag="c-index", simple_value=c_index_metric)])
writer.add_summary(buf, global_step=global_step)

you can just do

from sksurv.metrics import concordance_index_censored
with tf.summary.create_file_writer(output_dir).as_default():
    c_index_metric = concordance_index_censored(…)[0]
    tf.summary.scalar("c-index", c_index_metric, step=step)

Another feature that I liked is that you can now iterate over an instance of tf.data.Dataset and directly access the tensors and their values. This is much more convenient than having to call make_one_shot_iterator first, which gives you an iterator, which you call get_next() on to get actual tensors.
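
As a small sketch with made-up data, the TensorFlow 2 version simply looks like this:

import tensorflow as tf

# A tiny made-up dataset of (features, label) pairs
dataset = tf.data.Dataset.from_tensor_slices(
    ([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], [0, 1, 0])
).batch(2)

# In TensorFlow 2 we can iterate directly; each element is an eager tensor
for features, labels in dataset:
    print(features.numpy(), labels.numpy())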

Unfortunately, I also encountered some negatives when moving to TensorFlow 2. First of all, there’s currently no officially supported way to produce a view of the executed graph that is identical to what you get with TensorFlow 1, unless you use the Keras training loop with the TensorBoard callback. There’s tf.summary.trace_export, which, as described in this guide, sounds like it would produce the graph; however, using this approach you can only view individual operations in TensorBoard, but you can’t inspect the size of an operation’s input and output tensors. After searching for a while, I eventually found the answer in a Stack Overflow post, and, as it turns out, that is exactly what the TensorBoard callback is doing.

Another thing I found odd is that if you define your custom loss as a subclass of tf.keras.losses.Loss, it insists that there are only two inputs y_true and y_pred. In the case of Cox’s proportional hazards loss the true label comprises an event indicator and an indicator matrix specifying which pairs in a batch are comparable. Luckily, the contents of y_pred don’t get checked, so you can just pass a list, but I would prefer to write something like

loss_fn(y_true_event=y_event, y_true_riskset=y_riskset, y_pred=pred_risk_score)

instead of

loss_fn(y_true=[y_event, y_riskset], y_pred=pred_risk_score)
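
For completeness, here is a minimal sketch of the subclass side of that workaround; the class name and the placeholder computation are mine, not the tutorial's actual Cox loss.

import tensorflow as tf

class CompositeTargetLoss(tf.keras.losses.Loss):
    """Hypothetical example: Keras only hands call() y_true and y_pred,
    so both label components are packed into y_true and unpacked here."""

    def call(self, y_true, y_pred):
        y_event, y_riskset = y_true  # unpack event indicator and risk-set matrix
        y_event = tf.cast(y_event, y_pred.dtype)
        # Placeholder computation; a real Cox loss would also use y_riskset.
        return tf.reduce_mean(tf.square(y_event - tf.squeeze(y_pred, axis=-1)))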

Finally, although eager execution is now enabled by default, the code runs significantly faster in graph mode, i.e. annotating your model’s call method with @tf.function. I guess you are only supposed to use eager execution for debugging purposes.
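
A minimal illustration (the model itself is hypothetical) of what that annotation looks like:

import tensorflow as tf

class RiskModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(1)

    @tf.function  # trace call() into a graph instead of running it eagerly
    def call(self, inputs):
        return self.dense(inputs)

model = RiskModel()
print(model(tf.random.normal((4, 3))))  # first call traces, later calls reuse the graph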

May 17, 2020 02:07 PM

May 07, 2020

Christian SchallerGNOME is not the default for Fedora Workstation

(Christian Schaller)

We recently had a Fedora AMA where one of the questions asked is why GNOME is the default desktop for Fedora Workstation. In the AMA we answered why GNOME had been chosen for Fedora Workstation, but we didn’t challenge the underlying assumption built into the way the question was asked, and the answer to that assumption is that it isn’t the default. What I mean by this is that Fedora Workstation isn’t a box of parts, where you have default options that can be replaced; it’s a carefully procured and assembled operating system aimed at developers, sysadmins and makers in general. If you replace one or more parts of it, then it stops being Fedora Workstation and starts being ‘build your own operating system OS’. There is nothing wrong with wanting to or finding it interesting to build your own operating systems; I think a lot of us initially got into Linux due to enjoying doing that. And the Fedora project provides a lot of great infrastructure for people who want to build their own operating systems, either by themselves or by teaming up with others, which is why Fedora has so many spins and variants available.
The Fedora Workstation project is something we made using those tools and it has been tested and developed as an integrated whole, not as a collection of interchangeable components. The Fedora Workstation project might of course replace certain parts with other parts over time, like how we are migrating from X.org to Wayland. But at some point we are going to drop standalone X.org support and only support X applications through XWayland. But that is not the same as if each of our users individually did the same. And while it might be technically possible for a skilled user to still get things moved back onto X for some time after we make the formal deprecation, the fact is that you would no longer be using ‘Fedora Workstation’. You would be using a homebrew OS that contains parts taken from Fedora Workstation.

So why am I making this distinction? To be crystal clear, it is not to hate on you for wanting to assemble your own OS; in fact we love having anyone with that passion as part of the Fedora community. I would of course love for you to share our vision and join the Fedora Workstation effort, but the same is true for all the other spins and variant communities we have within the Fedora community too. No, the reason is that we have a very specific goal of creating a stable and well working experience for our users with Fedora Workstation, and one of the ways we achieve this is by having a tightly integrated operating system that we test and develop as a whole. Because that is the operating system we as the Fedora Workstation project want to make. We believe that doing anything else creates an impossible QA matrix, because if you tell people that ‘hey, any part of this OS is replaceable and should still work’ you have essentially created a testing matrix for yourself of infinite size. And while as software engineers I am sure many of us find experiments like ‘wonder if I can get Fedora Workstation running on a BSD kernel’ or ‘I wonder if I can make it work if I replace glibc with Bionic’ fun and interesting, I am equally sure we all also realize that once we do that we are in self-support territory and that Fedora Workstation or any other OS you use as your starting point can’t be blamed if your system stops working very well. And replacing such a core thing as the desktop is no different to those other examples.

Having been in the game of trying to provide a high quality desktop experience both commercially in the form of RHEL Workstation and through our community efforts around Fedora Workstation, I have seen and experienced first hand the problems that the mindset of interchangeable desktops creates. For instance, before we switched to the Fedora Workstation branding and it was all just ‘Fedora’, I experienced reviewers complaining about missing features, features we had actually spent serious effort implementing, because the reviewer decided to review a different spin of Fedora than the GNOME one. Other cases I remember are of customers trying to fix a problem by switching desktops, only to discover that while the initial issue they wanted fixed got resolved by the switch, they now got a new batch of issues that was equally problematic for them. And we were left trying to figure out if we should try to fix the original problem, the new ones, or maybe the problems reported by users of a third desktop option. We also have had cases of users who, just like the reviewer mentioned earlier, assumed something was broken or missing because they were using a different desktop than the one where the feature was added. And at the same time trying to add every feature everywhere would dilute our limited development resources so much that it made us move slowly and not have the resources to focus on getting ready for major changes in the hardware landscape, for instance.
So for RHEL we now only offer GNOME as the desktop and the same is true in Fedora Workstation, and that is not because we don’t understand that people enjoy experimenting with other desktops, but because it allows us to work with our customers and users and hardware partners on fixing the issues they have with our operating system, because it is a clearly defined entity, and on adding the features they need going forward and properly supporting the hardware they are using, as opposed to spreading ourselves so thin that we just run around putting on band-aids for the problems reported.
And in the longer run I actually believe this approach benefits those of you who want to build your own OS too, or use an OS built by another team around a different set of technologies, because while the improvements might come in a bit later for you, the work we now have the ability to undertake due to having a clear focus, like our work on adding HiDPI support, getting Wayland ready for desktop use or enabling Thunderbolt support in Linux, makes it a lot easier for these other projects to eventually add support for these things too.

Update: Adam Jackson’s oft-quoted response to the old ‘linux is about choice’ meme is also required reading for anyone wanting a high quality operating system

by uraeus at May 07, 2020 05:57 PM

May 04, 2020

Bastien NoceraDual-GPU support: Launch on the discrete GPU automatically

(Bastien Nocera) *reality TV show deep voice guy*

In 2016, we added a way to launch apps on the discrete GPU.

*swoosh effects*

In 2019, we added a way for that to work with the NVidia drivers.

*explosions*

In 2020, we're adding a way for applications to launch automatically on the discrete GPU.

*fast cuts of loads of applications being launched and quiet*




Introducing the (badly-named-but-if-you-can-come-up-with-a-better-name-youre-ready-for-computers) “PrefersNonDefaultGPU” desktop entry key.

From the specifications website:
If true, the application prefers to be run on a more powerful discrete GPU if available, which we describe as “a GPU other than the default one” in this spec to avoid the need to define what a discrete GPU is and in which cases it might be considered more powerful than the default GPU. This key is only a hint and support might not be present depending on the implementation. 
And support for that key is coming to GNOME Shell soon.

TL;DR

Add “PrefersNonDefaultGPU=true” to your application's .desktop file if it can benefit from being run on a more powerful GPU.
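
For example, a minimal (hypothetical) desktop entry using the key could look like this:

[Desktop Entry]
Type=Application
Name=My Heavy 3D App
Exec=my-heavy-3d-app
# Hint that this application benefits from the more powerful, non-default GPU
PrefersNonDefaultGPU=true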

We've also added a switcherooctl command to recent versions of switcheroo-control so you can launch your apps on the right GPU from your scripts and tweaks.

by Bastien Nocera (noreply@blogger.com) at May 04, 2020 04:52 PM

May 02, 2020

Jean-François Fortin TamOverhauling your Open Source project’s “Developer Experience” and redefining the workflow

This started out as a simple status report following my first report on the revival of the Getting Things GNOME project, but turned into a full-fledged article that, I believe, would be relevant to many community managers and FLOSS project maintainers out there. Particularly if you have an established open-source project looking for sustainable development but don’t have the luxury of paid developers, it should be worth investing the 7-9 minutes to read this.

As the world came to a standstill and as I finished my tax season accounting (two unrelated things, really), this month I have completed a major overhaul of the “developer experience” for GTG. The objective is to make it easier and more exciting for people to contribute to the project, by having:

  • a very clear workflow, objectives, and set of rules;
  • helpful & up-to-date reference documentation (particularly when it comes to building, testing and developing the core application).
This arguably depicts my efforts to clean up the cruft and pick up the missing pieces.

Indeed, from a community management standpoint, the project was suffering from two fundamental problems:

  1. It was completely unclear what is critical or not, and therefore what actually needs to be done to make a release. This would lead any potential contributor to feel overwhelmed and discouraged to work on the project. It is impossible to take action if you don’t know where you stand, don’t know how far you need to go, and if everything is vying for your attention.
  2. The documentation for contributors was mixed up with user documentation, and both were outdated and spread across four—or even five—websites. There were a gazillion things on LaunchPad, GitHub, ReadTheDocs, a defunct website/blog, and on the GTG wiki—which had at least 55 documentation pages, plus 50 pages of past Google Summer of Code projects, totalling somewhere over 105 pages, two thirds of which had broken links. When information was not “just” scattered, it was also often duplicated, conflicting, or so outdated that it was downright misleading. So, yeah.

Not everything is black and white in this world, but when you combine these two polarities problems together, you end up with a “mottled dove”—the Ikaruga.

Why yes, I am totally using a bipolar shoot-em-up bullet hell as the analogy for what the potential contributor’s developer experience must have felt like.

What the project probably looked like from an outsider’s perspective. Actually applies to many open-source projects out there.

Part 1: Fixing the workflow, redefining the objectives and policies

I am addressing the 1st problem mentioned above with:

Let’s take a minute to explain my philosophy.

To have a clear sense of direction, as a maintainer or core developer, you need to be able to know what is “critical” and what is better left for new contributors to tackle. This is why I created a dynamic list of issue labels and their descriptions, two of which are extremely important: “low-hanging-fruit” and “patch-or-wont-happen“. See CONTRIBUTING.md and the bug reporting & triage guide for further explanations.

Then, just as you must only assign “critical” (or “necessary”) issues to yourself, you must also be ruthless about the “minimum viable product”. If the release can be functional without a particular issue being solved, then that issue is not to be targeted at the milestone, unless a fix/patch is already being proposed or worked on somehow. That way your developers and maintainers can look only at the milestone as their guiding star and have a very clear sense of progression and of “when” it is done:

milestones progression
“That sounds like a reference to the Rebuild of Evangelion”, you say?
Well of course we’re Eva nerds, what did you expect?

Obviously this is meant for an atomic “release early and often” development model, not the “time-based releases” model (which I don’t think makes much sense for independent projects).

This is what setting expectations is all about. By clearly documenting the above, I am essentially establishing a “social contract” between users and contributors. This is not about being lazy, it’s about being brutally honest about the resources you have to contend with.

Part 2: Separating the documentation for contributors

To solve the 2nd fundamental problem, I spent some time analyzing the existing pages and documentation.

I decided that the wiki would now serve only for “Introducing/marketing the project” to users, acting as a website/landing page for the project. Other than historical documents, anything “documentation” would be relegated either to the official user manual, or to files in the development forge (both GitHub and GitLab automatically render Markdown files as nice HTML, so there is no need to use a wiki for that nowadays). This avoids everything becoming a giant kitchen sink mess, and makes it pleasant to read again.

To make that happen, this is what I’ve done in the past two weeks:

“Burning the Brushwood” (1893), by Eero Järnefelt
  1. Fixed all the broken links;
  2. Migrated any relevant contents to nicely rendered Markdown files into a central place (the main GTG Git repository on GitHub), then split, merged or rewrote a ton of “cornerstone” documentation including the new README, the new CONTRIBUTING file, and most of the stuff you see in docs/contributors/;
  3. Deleted the migrated wiki pages and associated links, archived the rest that remains there for “hysterical raisins“, by moving it to the bottom of the page;
  4. Wrote a new introduction and list of features & benefits for users, at the top of the wiki homepage;
  5. Rewrote remaining “cornerstone” wiki pages (new download/install page, new roadmap page, etc.);
  6. Archeologically recovered the epic lost manifesto page;
  7. Used some more archeology to create the press coverage page;
  8. Ordered the GTG.ReadTheDocs.io website to be destroyed and its remains cremated with the brushwood.

Behold: 37 wiki front page revisions later, the front page now does a decent job at answering the #1 question for people hearing about GTG for the first time: “Why would I use GTG? Why is it magical?” The wiki’s front page used to look like this, it now looks like this. Some might say it is now a very nice shrubbery.

On the other side, in the Git repository, my 23 commits involved 155 files, with 1399 line insertions and 888 line deletions.

The git commit timestamps don’t reflect the spread-out, multi-week nature of this work. Good thing I’m not invoicing GTG for that work, because it would cost more than a Nissan Micra.

Remaining GTG dev docs you can help with

Some documents in the new contributors docs folder are things that I have migrated but not actually reviewed for up-to-dateness or accuracy, such as the DBus API documentation or the plugins documentation. If there are outdated parts, I welcome you to contribute suggestions and ideally patches to address any remaining issues, as those areas are a bit out of my area of focus and expertise (I am many things, but I am not an API architect nor data structures specialist).

The post Overhauling your Open Source project’s “Developer Experience” and redefining the workflow appeared first on The Open Sourcerer.

by Jeff at May 02, 2020 08:17 PM

April 28, 2020

Christian SchallerFedora Workstation : Swamp draining for 6 years

(Christian Schaller)

As Fedora Workstation 32 was released today I ended up looking back at our efforts to drain the swamp over the last 6 years. In April of 2014 I wrote a blog post outlining our vision for the Fedora Workstation effort and what we wanted to achieve with it. I hadn’t looked at that blog post in years, but it was interesting going back to it and realizing that while some of the details have changed it is still the vision we are pursuing today: to keep draining the swamp and make Fedora Workstation a top notch operating system for developers and makers in general. Which I guess is one of the hallmarks of a decent vision, that it allows for the details to change without invalidating it.

One of my pet peeves at the time with Linux as a desktop operating system was that so many of the so-called efforts to make Linux user friendly were essentially duct-taping over the problems, creating fragile solutions that often made it harder for us to really move forward. In the years since we addressed a lot of major swamp issues with our efforts around HiDPI & Bolt (getting ahead of hardware enablement for new monitors and Thunderbolt devices respectively), Flatpaks, GNOME Software and AppStream (making applications discoverable, deployable and maintainable), Wayland (making your desktop secure and future proof), LVFS and firmware handling (making them easily available for Linux users), the fingerprint reader standard (ensuring your hardware is fully supported) and coming up with ways to improve the lives of developers with improvements to the terminal or Fedora Toolbox, our developer pet container tool.

Working on these and other issues we realized early on that a model where hardware gets enabled in a reactive manner, in response to new laptops being sold, was never going to yield a good result for our users. As long as we followed that model people were bound to always hit issues with laptops as they came out and then have to deal with those issues for the first 6-12 months of their life. This is why I am so excited about our new partnership with Lenovo that we pre-announced on Friday, as it is both the culmination of our efforts over the last 6 years and the starting point of a new era in terms of how we work with hardware makers. So instead of us spending a ton of time trying to reverse engineer basic drivers we can now rely on our hardware partner and their component vendors providing that, and we can instead focus on what I call high level hardware enablement. Meaning that as we see new features coming into laptops and computers we can try to improve the infrastructure in the operating system to be able to take full advantage of said hardware, and we can do so in collaboration with the hardware makers, knowing that once we provide the infrastructure they will ensure to provide drivers and similar fitting into that infrastructure. Our work on fingerprint readers and Thunderbolt support, for instance, provides two great early examples of that.

Anyway, you are probably interested to know some of the new things coming in Fedora Workstation 32, so here are some of my personal highlights:

New lock screen

This is more of a cosmetic change, but one that every user will see upon logging into their Fedora system after a new install or upgrade. The new design features a faded version of your desktop background image, and it should also feel smoother, as the password dialog now appears on the lock screen page itself, as opposed to before where it sort of replaced it. The dialog also tries to inform you more discreetly than before if you're typing your password while the lock screen is still showing. A big thanks to Allan Day and the GNOME design team for their work polishing this part of the user interface.

GNOME extension app

GNOME Shell extensions are little tweaks and additional features for the desktop that our users have gotten accustomed to and enjoy greatly. Extensions are also the technology that powers the GNOME Classic session, which provides those of our users who want it with a more traditional desktop experience. How we work with GNOME Shell extensions has gradually evolved since their inception, from something you install through your web browser to something handled through GNOME Software. With Fedora Workstation 32 we are making the new GNOME Shell Extensions management app available as the next step in that evolution, making it simple to turn any given extension on or off and to quickly see which extensions you have installed.

GNOME Extensions handling app
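For those who prefer a terminal, roughly the same operations can also be done with the gnome-extensions command line tool that ships with GNOME Shell; the UUID below is just a placeholder:

gnome-extensions list --enabled            # show which extensions are currently enabled
gnome-extensions enable <extension-uuid>   # turn an extension on
gnome-extensions disable <extension-uuid>  # and off again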

Fedora Toolbox

Fedora Toolbox is our helper for making working with containers for development and testing as easy as it possibly can be. Debarshi Ray and Ondřej Míchal have been hard at work porting the Fedora Toolbox from shell to Go for this release. For those wondering why we chose Go as the language, there were basically two reasons. One, we felt that the toolbox had gone as far as it could as a shell script, and two, Go is the language used by the components we rely on and interact with in the container space, like buildah and podman. We also wanted to make it easy for developers on those projects to contribute by using the same language as they use in their projects.

Fedora Toolbox running on Fedora Workstation 32
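If you haven't tried it yet, a typical Toolbox workflow looks roughly like the following; the release number and packages here are just illustrative:

toolbox create --release f32   # create a Fedora 32 based toolbox container
toolbox enter                  # get a shell inside the container
sudo dnf install gcc make      # install build tools inside the container, leaving the host untouched
exit
toolbox run make               # or run a single command in the container without entering it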

Performance improvements

Another area we always try to give some love is general performance. For example, this time around Christian Hergert identified some really bad behavior in GNOME Shell when running on a system under very high I/O load. On the face of it GNOME Shell didn't look like it should have been affected, but during some intensive debugging sessions Christian discovered that I/O was being triggered by various API calls to do things like string translation. So he put together a set of patches to resolve the high I/O stalls, and can now report that GNOME Shell keeps running smooth as silk even under heavy disk I/O.

PipeWire

Wim Taymans keeps making great strides forward with PipeWire, our tool for creating a unified media handler for audio, pro-audio and video. In Fedora Workstation 32 we will be shipping the 0.3 version, which has quite complete Jack support. In fact we are hoping to team up with the Fedora Jam team to finalize the Jack support during the Fedora 32 lifecycle by testing it extensively. We have a lot of Jack apps already working with PipeWire, including a series of important Jack apps that we have put into Flatpaks in Fedora, like Carla. While the support is there in PipeWire in Fedora 32 right now, there is some convenience work we still need to do, but we hope to get that pushed out by next week so that replacing Jack with PipeWire becomes very simple to both do and undo for testing purposes.
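For those who want to experiment already, the usual way to point a Jack client at PipeWire is to launch it through the pw-jack wrapper that comes with PipeWire's Jack support; the application name below is just a placeholder:

# run a Jack application against PipeWire instead of a separate jackd instance
pw-jack <your-jack-application>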

The PulseAudio support is the last piece that is still in progress. It works for simple music playback, but it is not a drop-in replacement for PulseAudio yet, so while we had hoped to encourage widespread testing in F32 we will aim to delay that to F33 in order to polish the PulseAudio support further. But once it is ready we will make it available for testing in a simple manner, just like the Jack support.

There has also been further work on the video side of PipeWire, adding support for zero-copy video capture. This has reduced the overhead of things like screen capturing significantly and should be a nice performance and resource-usage improvement for everyone.

Firefox on Wayland

Martin Stransky and Jan Horak have been working hard to improve how Firefox runs and works as a native Wayland application, fixing a truckload of bigger and smaller bugs this cycle. We feel that we have turned the corner now, with the Wayland version being just as stable and good as the X11 one. In fact we could move beyond just fixing bugs to actually adding features this time around; for instance Martin Stransky worked on WebGL hardware acceleration support, enabling us to have that turned on by default for the first time. We also made sure to take advantage of the PipeWire zero-copy support to improve video conferencing applications running in Firefox, which turned out to be even more important than we expected, given that Covid-19 has everyone working from home.
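If you want to verify that your Firefox is actually using the native Wayland backend rather than running through XWayland, one simple check (assuming you are in a Wayland session) is:

# request the Wayland backend explicitly when launching Firefox
MOZ_ENABLE_WAYLAND=1 firefox

# then open about:support and look at the "Window Protocol" row;
# it should read "wayland" rather than "x11" or "xwayland"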

Looking forward

We spent a lot of time and energy over the last 6 years getting to where we are now, putting in place a lot of the basic building blocks needed to make Linux a great desktop operating system. And it feels great that just as we kick off the new line of Lenovo laptops running Fedora, we are also entering a new phase of development where we can move beyond getting our basic infrastructure in place and really start taking advantage of it to rapidly improve the experience we provide. A good example is the Firefox work mentioned above, where we could finally move on from 'make it work with Wayland and PipeWire' to 'let's take advantage of these new pieces to make Firefox on Linux better'. Another example is that Adam Jackson is currently investigating how we can improve how Fedora Workstation performs for remote usage. This work includes looking at things like VNC, RDP and commercial offerings, and figuring out how we can make our stack work better with such tools, on top of the improvements that PipeWire brings for such use cases.

There is some more heavy lifting needed before our next generation OS architecture, Silverblue, is ready to be our default offering, but it is improving by leaps and bounds each release and already has a loyal following. Personally I am very excited that we are quickly moving closer to the point where we can make it our default and through that offer features like bulletproof OS updates, factory resets and solid version rollbacks.

On the Flatpak side Owen Taylor and Alex Larsson are putting the final touches on our Red Hat infrastructure. With RHEL 8.2 we will finally be able to build Flatpaks in RHEL infrastructure and provide a runtime and SDK for our RHEL customers to use. Equally exciting is that we will be able to offer these to the community at large, meaning that we can offer a high quality, long term support Flatpak runtime and SDK that ISVs can use to target not only RHEL users but also Fedora and other Linux distributions, in a similar vein to how the Red Hat UBI works. We will also be looking at ways to make getting access to these on Fedora very simple for developers, so that developing against this runtime becomes quick and easy on your Fedora system. Alex and Owen are also working on an incremental updates feature to be shared between Kubernetes containers and OCI Flatpaks, making both technologies better and making updates a lot smaller.

We are also looking at a host of other smaller improvements, many of them in collaboration with our friends at Lenovo, like lap detection (so you can be sure the laptop doesn't burn you), privacy features (like making it harder to read your screen from an angle) and far field microphones. There are also things like Lennart's homed idea, which we will be looking at as a way to improve the end user experience.

So the future is looking bright, and I hope to see many new faces in the Fedora community going forward, whether you download Fedora Workstation 32 to install on your own system or join us by buying a Fedora laptop from Lenovo this summer.

by uraeus at April 28, 2020 03:46 PM