February 07, 2017

Arun Raghavan: Stricter JSON parsing with Haskell and Aeson

I’ve been having fun recently, writing a RESTful service using Haskell and Servant. I did run into a problem that I couldn’t easily find a solution to on the magical bounty of knowledge that is the Internet, so I thought I’d share my findings and solution.

While writing this service (and practically any Haskell code), step 1 is of course defining our core types. Our REST endpoint is basically a CRUD app which exchanges these with the outside world as JSON objects. Doing this is delightfully simple:

{-# LANGUAGE DeriveGeneric #-}

import Data.Aeson
import GHC.Generics

data Job = Job { jobInputUrl :: String
               , jobPriority :: Int
               , ...
               } deriving (Eq, Generic, Show)

instance ToJSON Job where
  toJSON = genericToJSON defaultOptions

instance FromJSON Job where
  parseJSON = genericParseJSON defaultOptions

That’s all it takes to get the basic type up with free serialization using Aeson and Haskell Generics. This is followed by a few more lines to hook up GET and POST handlers, we instantiate the server using warp, and we’re good to go. All standard stuff, right out of the Servant tutorial.
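For readers who haven't seen Servant before, the wiring looks roughly like this. This is a sketch in the spirit of the Servant tutorial, not the actual service: the "jobs" endpoint and the placeholder handlers are made up, and it assumes the Job type defined above.

{-# LANGUAGE DataKinds     #-}
{-# LANGUAGE TypeOperators #-}

import Data.Proxy (Proxy (..))
import Network.Wai.Handler.Warp (run)
import Servant

-- A GET that lists jobs and a POST that creates one, both as JSON.
type JobAPI = "jobs" :> Get '[JSON] [Job]
         :<|> "jobs" :> ReqBody '[JSON] Job :> Post '[JSON] Job

server :: Server JobAPI
server = listJobs :<|> createJob
  where
    listJobs      = return []   -- placeholder handler
    createJob job = return job  -- placeholder handler

main :: IO ()
main = run 8080 (serve (Proxy :: Proxy JobAPI) server)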

The POST request accepts a new object in the form of a JSON object, which is then used to create the corresponding object on the server. Standard operating procedure again, as far as RESTful APIs go.

The nice part about doing it like this is that the input is automatically validated based on types. So input like:

{
  "jobInputUrl": 123, // should have been a string
  "jobPriority": 123
}

will result in:

Error in $: expected String, encountered Number

However, as this nice tour of how Aeson works demonstrates, if the input has keys that we don’t recognise, no error will be raised:

{
  "jobInputUrl": "http://arunraghavan.net",
  "jobPriority": 100,
  "junkField": "junkValue"
}

This behaviour is undesirable in use-cases such as mine: if the client is sending fields we don't understand, I'd like the server to signal an error so the underlying problem can be caught early.

As it turns out, making the JSON parsing stricter, so that it rejects unrecognised fields, is just a little more involved. I couldn't find the whole solution in any one place on the Internet, so here's the best I could come up with:

{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE DeriveGeneric      #-}

import Data.Aeson
import Data.Aeson.Types (Options (..))   -- for fieldLabelModifier
import Data.Data
import Data.HashMap.Strict (keys)        -- aeson's Object is a HashMap Text Value
import Data.List (sort)
import Data.Text (unpack)
import GHC.Generics

data Job = Job { jobInputUrl :: String
               , jobPriority :: Int
               , ...
               } deriving (Data, Eq, Generic, Show)

instance ToJSON Job where
  toJSON = genericToJSON defaultOptions

instance FromJSON Job where
  parseJSON json = do
    job <- genericParseJSON defaultOptions json
    if keysMatchRecords json job
    then
      return job
    else
      fail "extraneous keys in input"
    where
      -- Make sure the set of JSON object keys is exactly the same as the fields in our object
      keysMatchRecords (Object o) d =
        let
          objKeys   = sort . fmap unpack . keys
          recFields = sort . fmap (fieldLabelModifier defaultOptions) . constrFields . toConstr
        in
          objKeys o == recFields d
      keysMatchRecords _ _          = False

The idea is quite straightforward, and likely very easy to make generic. The Data.Data module lets us extract the constructor for the Job type, and the list of fields in that constructor. We just make sure that’s an exact match for the list of keys in the JSON object we parsed, and that’s it.
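A quick way to convince yourself it works is something like the following. This is illustrative only: it assumes the definitions above are in scope, with Job reduced to just the two fields shown, and the sample payloads are made up.

{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (eitherDecode)

main :: IO ()
main = do
  let good = "{\"jobInputUrl\": \"http://arunraghavan.net\", \"jobPriority\": 100}"
      bad  = "{\"jobInputUrl\": \"http://arunraghavan.net\", \"jobPriority\": 100, \"junkField\": 1}"
  -- Parses as before:
  print (eitherDecode good :: Either String Job)
  -- Now rejected instead of silently accepted; should print something like
  -- Left "Error in $: extraneous keys in input"
  print (eitherDecode bad :: Either String Job)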

Of course, I’m quite new to the Haskell world so it’s likely there are better ways to do this. Feel free to drop a comment with suggestions! In the mean time, maybe this will be useful to others facing a similar problem.

Update: I've fixed parseJSON to properly use fieldLabelModifier from the default options, so that the comparison actually works when you're not using Aeson's default options. Thanks to /u/tathougies for catching that.

I’m also hoping to rewrite this in generic form using Generics, so watch this space for more updates.

by Arun at February 07, 2017 05:10 AM

February 01, 2017

GStreamer: GStreamer 1.10.3 stable release (binaries)

(GStreamer)

Pre-built binary images of the 1.10.3 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

See /releases/1.10/ for the full list of changes.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

February 01, 2017 08:40 AM

January 30, 2017

GStreamer: GStreamer 1.10.3 stable release

(GStreamer)

The GStreamer team is pleased to announce the third bugfix release in the stable 1.10 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.10.0. For a full list of bugfixes see Bugzilla.

See /releases/1.10/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

January 30, 2017 02:25 PM

January 29, 2017

Thomas Vander Stichele: A morning in San Francisco

(Thomas Vander Stichele)

This morning in San Francisco, I check out from the hotel and walk to Bodega, a place I discovered last time I was here. I walk past a Chinese man swinging his arms slowly and deliberately, celebrating a secret of health us Westerners will never know. It is Chinese New Year, and I pass bigger groups celebrating and children singing. My phone takes a picture of a forest of phones taking pictures.

I get to the corner hoping the place is still in business. The sign outside asks "Can it all be so simple?" The place is open, so at least for today, the answer is yes. I take a seat at the bar, and I'm flattered when the owner recognizes me even if it's only my second time here. I ask her if her sister made it to New York to study – but no, she is trekking around Colombia after helping out at the bodega every weekend for the past few months. I get a coffee and a hibiscus mimosa as I ponder the menu.

The man next to me turns out to be her cousin, Amir. He took a plane to San Francisco from Iran yesterday after hearing an executive order might get signed banning people with visas from seven countries from entering the US. The order was signed two hours before his plane landed. He made it through immigration. The fact sheet arrived on the immigration officers' desks right after he passed through, and the next man in his queue, coming from Turkey, did not make it through. Needles and eyes.

Now he is planning to get a job, and get a lawyer to find a way to bring over his wife and 4-year-old child who are now officially banned from following him for 120 days or more. In Iran he does business strategy and teaches at university. It hits home really hard that we are not that different, he and I, and how undeservedly lucky I am that I won't ever be faced with such a horrible choice to make. Paria, the owner, chimes in, saying that she's an Iranian Muslim who came to the US 15 years ago with her family, and they all can't believe what's happening right now.

The church bell chimes a song over Washington Square Park and breaks the spell, telling me it’s eleven o’clock and time to get going to the airport.

by Thomas at January 29, 2017 04:57 AM

January 19, 2017

Arun Raghavan: Quantifying Synchronisation: Oscilloscope Edition

I’ve written a bit in my last two blog posts about the work I’ve been doing in inter-device synchronised playback using GStreamer. I introduced the library and then demonstrated its use in building video walls.

The important question in synchronisation, of course, is: how in sync are the streams? The video in my previous post gave a glimpse of that, and in this post I'll expand on it with a more rigorous, quantifiable approach.

Before I start, a quick note: I am currently providing freelance consulting around GStreamer, PulseAudio and open source multimedia in general. If you’re looking for help with any of these, do get in touch.

The sync measurement setup

Quantifying what?

What is it that we are trying to measure? Let’s look at this in terms of the outcome — I have two computers, on a network. Using the gst-sync-server library, I play a stream on both of them. The ideal outcome is that the same video frame is displayed at exactly the same time, and the audio sample being played out of the respective speakers is also identical at any given instant.

As we saw previously, the video output is not a good way to measure what we want. This is because video displays are updated in sync with the display clock, over which consumer hardware generally does not have control. Besides, our eyes are not that sensitive to minor differences in timing unless images are side-by-side. After all, we're fooling them with static pictures that change every 16.67ms or so.

Using audio, though, we should be able to do better. Digital audio streams for music/videos typically consist of 44100 or 48000 samples a second, so we have a much finer granularity than video provides us. The human ear is also fairly sensitive to timing: if you hear the same sound twice, more than about 10 ms apart, you will hear two distinct sounds, and the echo will annoy you to no end.

Measuring audio is also good enough because once you’ve got audio in sync, GStreamer will take care of A/V sync itself.

Setup

Okay, so now we know what we want to measure. But how do we measure it? The setup is illustrated below:

Sync measurement setup illustrated

As before, I’ve set up my desktop PC and laptop to play the same stream in sync. The stream being played is a local audio file — I’m keeping the setup simple by not adding network streaming to the equation.

The audio itself is just a tick sound every second. The tick is a simple 440 Hz sine wave (A₄ for the musically inclined) that runs for 1600 samples. It sounds something like this:

https://arunraghavan.net/wp-content/uploads/test-audio.ogg

I’ve connected the 3.5mm audio output of both the computers to my faithful digital oscilloscope (a Tektronix TBS 1072B if you wanted to know). So now measuring synchronisation is really a question of seeing how far apart the leading edge of the sine wave on the tick is.

Of course, this assumes we’re not more than 1s out of sync (that’s the periodicity of the tick itself), and I’ve verified that by playing non-periodic sounds (any song or video) and making sure they’re in sync as well. You can trust me on this, or better yet, get the code and try it yourself! :)

The last piece to worry about — the network. How well we can sync the two streams depends on how well we can synchronise the clocks of the pipeline we’re running on each of the two devices. I’ll talk about how this works in a subsequent post, but my measurements are done on both a wired and wireless network.

Measurements

Before we get into it, we should keep in mind that due to how we synchronise streams — using a network clock — how in-sync our streams are will vary over time depending on the quality of the network connection.

If this variation is small enough, it won't be noticeable. If it is large (tens of milliseconds), then we may start to notice it as echo, or as glitches when the pipeline tries to correct for the lack of sync.

In the first setup, my laptop and desktop are connected to each other directly via a LAN cable. The result looks something like this:

The first two images show the best case — we need to zoom in real close to see how out of sync the audio is, and it’s roughly 50µs.

The next two images show the “worst case”. This time, the zoomed out (5ms) version shows some out-of-sync-ness, and on zooming in, we see that it’s in the order of 500µs.

So even our bad case is actually quite good — sound travels at about 340 m/s, so 500µs is the equivalent of two speakers about 17cm apart.

Now let’s make things a little more interesting. With both my laptop and desktop connected to a wifi network:

On average, the sync can be quite okay. The first pair of images show sync to be within about 300µs.

However, the wifi on my desktop is flaky, so you can see it go off up to 2.5ms in the next pair. In my setup, it even goes off up to 10-20ms, before returning to the average case. The next two images show it go back and forth.

Why does this happen? Well, let’s take a quick look at what ping statistics from my desktop to my laptop look like:

Ping from desktop to laptop on wifi

That’s not good — you can see that the minimum, average and maximum RTT are very different. Our network clock logic probably needs some tuning to deal with this much jitter.

Conclusion

These measurements show that we can get some (in my opinion) pretty good synchronisation between devices using GStreamer. I wrote the gst-sync-server library to make it easy to build applications on top of this feature.

The obvious area to improve is how we cope with jittery networks. We’ve added some infrastructure to capture and replay clock synchronisation messages offline. What remains is to build a large enough body of good and bad cases, and then tune the sync algorithm to work as well as possible with all of these.

Also, Florent over at Ubicast pointed out a nice tool they’ve written to measure A/V sync on the same device. It would be interesting to modify this to allow for automated measurement of inter-device sync.

In a future post, I’ll write more about how we actually achieve synchronisation between devices, and how we can go about improving it.

by Arun at January 19, 2017 12:09 PM

January 17, 2017

GStreamer: GStreamer 1.11.1 unstable release (binaries)

(GStreamer)

Pre-built binary images of the 1.11.1 unstable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

January 17, 2017 05:45 AM

January 12, 2017

GStreamer: GStreamer 1.11.1 unstable release

(GStreamer)

The GStreamer team is pleased to announce the first release of the unstable 1.11 release series. The 1.11 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework. The unstable 1.11 release series will lead to the stable 1.12 release series in the next weeks. Any newly added API can still change until that point.

Full release notes will be provided at some point during the 1.11 release cycle, highlighting all the new features, bugfixes, performance optimizations and other important changes.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

January 12, 2017 03:00 PM

January 11, 2017

Phil Normand: WebKitGTK+ and HTML5 fullscreen video

(Phil Normand)

HTML5 video is really nice and all, but one annoying thing is the lack of a fullscreen video specification; it is currently up to the User-Agent to allow fullscreen video display.

WebKit allows a fullscreen button in the media controls, and Safari can, on user demand only, switch a video to fullscreen and display nice controls over the video. There's also a WebKit-specific DOM API that allows custom media controls to also make use of that feature, as long as it is initiated by a user gesture. Vimeo's HTML5 player uses that feature, for instance.

Thanks to the efforts made by Igalia to improve the WebKit GTK+ port and its GStreamer media-player, I have been able to implement support for this fullscreen video display feature. So any application using WebKitGTK+ can now make use of it :)

The way it is done in this first implementation is that a new fullscreen GTK+ window is created and our GStreamer media player inserts an autovideosink in the pipeline and overlays the video in the window. Some simple controls are supported; the UI is actually similar to Totem's. One improvement we could make in the future would be to allow applications to override or customize that simple controls UI.

The nice thing about this is that we of course use the GstXOverlay interface, and that it's implemented by the Linux, Windows and Mac OS video sinks :) So when other WebKit ports start to use our GStreamer player implementation it will be fairly easy to port the feature and allow more platforms to support fullscreen video display :)

Benjamin also suggested that we could reuse the cairo surface of our WebKit videosink and paint it in a fullscreen GdkWindow. But this should be done only when his Cairo/Pixman patches to enable hardware acceleration land.

So there is room for improvement but I believe this is a first nice step for fullscreen HTML5 video consumption from Epiphany and other WebKitGTK+-based browsers :) Thanks a lot to Gustavo, Sebastian Dröge and Martin for the code-reviews.

by Philippe Normand at January 11, 2017 02:42 PM

January 10, 2017

Jean-François Fortin Tam: Reviewing the Librem 15

Following up on my previous post where I detailed the work I've been doing mostly on Purism's website, today's post will cover some video work. Near the beginning of October, I received a Librem 15 v2 unit for testing and reviewing purposes. I have been using it as my main laptop since then, as I don't believe in reviewing something without using it daily for at least a couple of weeks. And so on nights and week-ends, I wrote down testing results, rough impressions and recommendations, then wrote a detailed plan and script to make the first in-depth video review of this laptop. Here's the result: not your typical 2-minute superficial tour:

With this review, I wanted to:

  • Satisfy my own curiosity and then share the key findings; one of the things that annoyed me some months ago is that I couldn’t find any good “up close” review video to answer my own technical questions, and I thought “Surely I’m not the only one! Certainly a bunch of other people would like to see what the beast feels like in practice.”
  • Make an audio+video production I would be proud of, artistically speaking. I'm rather meticulous in my craft, as I like creating quality work made to last (similarly, I have recently finished a particular decorative painting after months of obsession… I'll let you know about that in some other blog post ;)
  • Put my production equipment to good use; I had recently purchased a lot of equipment for my studio and outdoors shooting—it was just begging to be used! Some details on that further down in this post.
  • Provide a ton of industrial design feedback to the Purism team for future models, based on my experience owning and using almost every laptop type out there. And so I did. Pages and pages of it, way more than can fit in a video:
Pictured: my review notes

A fine line

For the video, I spent a fair amount of time writing and revising my video’s plan and narration (half a dozen times at least), until I came to a satisfactory “final” version that I was able to record the narration for (using the exquisite ancient techniques of voice acting).

The tricky part is being simultaneously concise, exhaustive, fair, and entertaining. I wanted to be as balanced as possible and to cover the most crucial topics.

  • At the end of the day, there are some simple fundamentals of what makes a good or bad computer. Checking my 14 pages of review notes, I knew I was being extremely demanding, and that some of those expectations of mine came down to personal preference, things that most people don’t even care about, or common issues that are not even addressed by most “big brand” OEMs’ products… so I balanced my criticism with a dose of realism, making sure to focus on what would matter to people.
  • I also chose topics that would have a longer “shelf life”, considering how much work it takes to produce a high-quality video. For example, even while preparing the review over the course of 2-3 months, some aspects (such as the touchpad drivers) changed/improved and made me revise my opinion. The touchpad behaved better in Fedora 25 than in Fedora 24… until a kernel update broke it (Ugh. At that point I decided to version-lock my kernel package in Fedora).
  • I was conservative in my estimates, even if that makes the device look less shiny than it is. For example, while I said “5-6 hours” of battery life in the review video, in practice I realized that I can get 10-12 hours of battery life with my personal usage pattern (writing text in Gedit, no web browser open, 50% brightness, etc.) and a few simple tweaks in PowerTop.

The final version of my script, as I recorded it, was 59 minutes long. No jokes. And that was after I had decided to ignore some topics (ex.: the whole part about the preloaded operating system; that part would be long enough to be a standalone review).

I spent some days processing that 59-minute recording to remove any sound impurities or mistakes, and sped up the tempo, bringing the duration down to 37 and then 31 minutes. Still, that was too long, so while I was doing the final video edit, I tightened everything further (removing as many speech gaps as possible) and cut out a few more topics at the last minute. The final rendered result is a video that is 21 minutes long. Much more reasonable, considering it's an "in depth" review.

Real audio, lighting, and optics

My general work ethic is: when you do something, do it properly or don’t do it at all.

For this project I used studio lighting, tripods and stands, a dolly, a DSLR with two lenses, a smaller camera for some last minute shots, a high-end PCM sound recorder, a phantom-powered shotgun microphone, a recording booth, monitoring headphones, sandbags, etc.

To complement the narration and cover the length of the review, I chose seven different songs (out of many dozens) based on genre, mood and tempo. I sometimes had to cut/mix songs to avoid the annoying parts. The result, I hope, is a video that remains pleasant and entertaining to watch throughout, while also having a certain emotional or “material” quality to it. For example, the song I used for the thermal design portion makes me feel like I’m touching heatpipes and watching energy flow. Maybe that’s just me though—perhaps if there was some lounge music I would feel like a sofa ;)

Fun!

Much like when I made a video for a local symphonic orchestra, I have enjoyed the making of this review video immensely. I’m glad I got an interesting device to review (not a whole chicken in a can) with the freedom to make the “video I would have wanted to see”. On that note, if anyone has a fighter jet they’d want to see reviewed, let me know (I’m pretty good at dodging missiles ;)

by nekohayo at January 10, 2017 04:00 PM

January 03, 2017

Jean-François Fortin Tam: Renaming an audio device with PulseAudio

I discovered by accident that you can right-click on a device's "profile" in pavucontrol to rename the device. However, this feature is not available by default, since it requires an additional module (on Fedora, at least). To try it out live:

$ pactl load-module module-device-manager
34

"34"? What a strange answer! I assume it means it worked. If we immediately run the same command again, we get:

$ pactl load-module module-device-manager
Échec : Échec lors de l'initialisation du module

Which, if you think like a developer who is a bit lazy about semantics, reveals that the module is in fact already loaded (the output translates to "Failure: failed to initialize the module").

We can now give much shorter and more practical names to our favourite devices. For example, my SoundBlaster with its far-too-complex name:

Or my good old Logitech microphone with its thoroughly cryptic name:

On the interface side, all of this is rather ugly and hard to discover, so I opened a bug report about it. What's a bit unfortunate is that this renames the inputs/outputs (sources/sinks) individually, instead of renaming the hardware device as a whole. Also, the renamed versions are not picked up by the sound settings in GNOME Control Center.

In any case, after this successful test, we can add the following lines to the file ~/.config/pulse/default.pa to make the change permanent (I prefer editing this file in my home directory rather than the system file "/etc/pulse/default.pa", so that it survives "clean installs" of distros):

.include /etc/pulse/default.pa
load-module module-device-manager

Note: "default.pa", unlike daemon.conf (in which I simply put "flat-volumes = no" to get the old-school volume mixing back), does not automatically inherit the system settings, which is why I added the ".include" line.

by nekohayo at January 03, 2017 04:00 PM

December 29, 2016

Sebastian Pölsterl: Announcing scikit-survival – a Python library for survival analysis built on top of scikit-learn

I've meant to do this release for quite a while now and last week I finally had some time to package everything and update the dependencies. scikit-survival contains the majority of code I developed during my Ph.D.

About Survival Analysis

Survival analysis – also referred to as reliability analysis in engineering – refers to a type of problem in statistics where the objective is to establish a connection between a set of measurements (often called features or covariates) and the time to an event. The name survival analysis originates from clinical research: in many clinical studies, one is interested in predicting the time to death, i.e., survival. Broadly speaking, survival analysis is a type of regression problem (one wants to predict a continuous value), but with a twist. Consider a clinical study that investigates coronary heart disease and has been carried out over a 1-year period, as in the figure below.

Patient A was lost to follow-up after three months with no recorded cardiovascular event, patient B experienced an event four and a half months after enrollment, patient D withdrew from the study two months after enrollment, and patient E did not experience any event before the study ended. Consequently, the exact time of a cardiovascular event could only be recorded for patients B and C; their records are uncensored. For the remaining patients it is unknown whether they did or did not experience an event after termination of the study. The only valid information that is available for patients A, D, and E is that they were event-free up to their last follow-up. Therefore, their records are censored.

Formally, each patient record consists of a set of covariates $x \in \mathbb{R}^d$, and the time $t > 0$ when an event occurred or the time $c > 0$ of censoring. Since censoring and experiencing an event are mutually exclusive, it is common to define an event indicator $\delta \in \{0; 1\}$ and the observable survival time $y > 0$. The observable time $y$ of a right censored sample is defined as
\[ y = \min(t, c) =
\begin{cases}
t & \text{if } \delta = 1 , \\
c & \text{if } \delta = 0 ,
\end{cases}
\]

What is scikit-survival?

Recently, many methods from machine learning have been adapted to this kind of problem: random forests, gradient boosting, and support vector machines, many of which are only available for R, but not Python. Some of the traditional models are part of lifelines or statsmodels, but none of those libraries plays nice with scikit-learn, which is the quasi-standard machine learning framework for Python.

This is exactly where scikit-survival comes in. Models implemented in scikit-survival follow the scikit-learn interfaces. Thus, it is possible to use PCA from scikit-learn for dimensionality reduction and feed the low-dimensional representation to a survival model from scikit-survival, or cross-validate a survival model using the classes from scikit-learn. You can see an example of the latter in this notebook.
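As a flavour of what that looks like, here is a minimal sketch that chains scikit-learn's PCA with a Cox proportional hazards model from scikit-survival in a standard Pipeline. The data is random and meaningless, and the estimator name and import path follow the scikit-survival documentation, so double-check them against the version you install.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline

from sksurv.linear_model import CoxPHSurvivalAnalysis  # assumed import path

rng = np.random.RandomState(0)
X = rng.normal(size=(200, 25))                        # covariates
event = rng.binomial(1, 0.7, size=200).astype(bool)   # True = event observed (uncensored)
time = rng.exponential(scale=10, size=200)            # observed time y = min(t, c)

# scikit-survival expects y as a structured array: (event indicator, time)
y = np.array(list(zip(event, time)), dtype=[("event", bool), ("time", float)])

# Dimensionality reduction with scikit-learn, survival model from scikit-survival
model = make_pipeline(PCA(n_components=5), CoxPHSurvivalAnalysis())
model.fit(X, y)
print(model.predict(X[:5]))   # predicted risk scores for the first five samples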

Download and Install

The source code is available at GitHub and can be installed via Anaconda (currently only for Linux) or pip.

conda install -c sebp scikit-survival

pip install scikit-survival

The API documentation is available here and scikit-survival ships with a couple of sample datasets from the medical domain to get you started.

by sebp at December 29, 2016 02:08 PM

December 28, 2016

Arun Raghavan: Synchronised Playback and Video Walls

Hello again, and I hope you’re having a pleasant end of the year (if you are, maybe don’t check the news until next year).

I’d written about synchronised playback with GStreamer a little while ago, and work on that has been continuing apace. Since I last wrote about it, a bunch of work has gone in:

  • Landed support for sending a playlist to clients (instead of a single URI)

  • Added the ability to start/stop playback

  • The API has been cleaned up considerably to allow us to consider including this upstream

  • The control protocol implementation was made an interface, so you don’t have to use the built-in TCP server (different use-cases might want different transports)

  • Made a bunch of robustness fixes and documentation

  • Introduced API for clients to send the server information about themselves

  • Also added API for the server to send video transformations for specific clients to apply before rendering

While the other bits are exciting in their own right, in this post I’m going to talk about the last two items.

Video walls

For those of you who aren’t familiar with the term, a video wall is just an array of displays stacked to make a larger display. These are often used in public installations.

One way to set up a video wall is to have each display connected to a small computer (such as the Raspberry Pi), and have them play a part of the entire video, cropped and scaled for the display that is connected. This might look something like:

A 4×4 video wall

The tricky part, of course, is synchronisation — which is where gst-sync-server comes in. Since we’re able to play a given stream in sync across devices on a network, the only missing piece was the ability to distribute a set of per-client transformations so that clients could apply those, and that is now done.

In order to keep things clean from an API perspective, I took the following approach:

  • Clients now have the ability to send a client ID and a configuration (which is just a dictionary) when they first connect to the server

  • The server API emits a signal with the client ID and configuration, which allows you to know when a client connects, what kind of display it’s running, and where it is positioned

  • The server now has additional fields to send a map of client ID to a set of video transformations

This allows us to do fancy things like having each client manage its own information with the server dynamically adapting the set of transformations based on what is connected. Of course, the simpler case of having a static configuration on the server also works.

Demo

Since seeing is believing, here’s a demo of the synchronised playback in action:

The setup is my laptop, which has an Intel GPU, and my desktop, which has an NVidia GPU. These are connected to two monitors (thanks go out to my good friends from Uncommon for lending me their thin-bezelled displays).

The video resolution is 1920×800, and I’ve adjusted the crop parameters to account for the bezels, so the video actually does look continuous. I’ve uploaded the text configuration if you’re curious about what that looks like.

As I mention in the video, the synchronisation is not as tight as I would like it to be. This is most likely because of the differing device configurations. I've been working with Nicolas to try to address this shortcoming by using some timing extensions that the Wayland protocol allows for. More news on this as it breaks.

More generally, I’ve done some work to quantify the degree of sync, but I’m going to leave that for another day.

p.s. the reason I used kmssink in the demo was that it was the quickest way I know of to get a full-screen video going — I’m happy to hear about alternatives, though

Future work

Make it real

My demo was implemented quite quickly by allowing the example server code to load and serve up a static configuration. What I would like is to have a proper working application that people can easily package and deploy on the kinds of embedded systems used in real video walls. If you’re interested in taking this up, I’d be happy to help out. Bonus points if we can dynamically calculate transformations based on client configuration (position, display size, bezel size, etc.)
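To make "dynamically calculate transformations" a bit more concrete, here is a small, purely illustrative sketch of the kind of arithmetic involved (this is not part of gst-sync-server, and all names and numbers are made up): given a tile's position in the wall, the panel's visible size and its bezel width, compute the crop rectangle that client would apply to the source video before scaling it to full screen.

# Illustrative only: crop region (in video pixels) for one tile of a video wall,
# treating the bezels between panels as dead space the video still has to
# "pass through" so that motion lines up across displays.
def tile_crop(video_w, video_h, cols, rows, col, row, visible_mm, bezel_mm):
    """Return (x, y, w, h) of the source-video region for the tile at (col, row)."""
    panel_mm_w = visible_mm[0] + 2 * bezel_mm
    panel_mm_h = visible_mm[1] + 2 * bezel_mm
    wall_mm_w = cols * panel_mm_w
    wall_mm_h = rows * panel_mm_h

    px_per_mm_x = video_w / wall_mm_w
    px_per_mm_y = video_h / wall_mm_h

    # Crop only the visible part of this panel, skipping its surrounding bezel.
    x = (col * panel_mm_w + bezel_mm) * px_per_mm_x
    y = (row * panel_mm_h + bezel_mm) * px_per_mm_y
    w = visible_mm[0] * px_per_mm_x
    h = visible_mm[1] * px_per_mm_y
    return (round(x), round(y), round(w), round(h))

# Example: a 1920x800 video on a 2x1 wall of 520x290 mm panels with 10 mm bezels.
print(tile_crop(1920, 800, 2, 1, 0, 0, (520, 290), 10))
print(tile_crop(1920, 800, 2, 1, 1, 0, (520, 290), 10))

In a real application, the inputs would come from each client's configuration, and the output would feed into the per-client transformation map the server distributes.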

Hardware acceleration

One thing that’s bothering me is that the video transformations are applied in software using GStreamer elements. This works fine(ish) for the hardware I’m developing on, but in real life, we would want to use OpenGL(ES) transformations, or platform specific elements to have hardware-accelerated transformations. My initial thoughts are for this to be either API on playbin or a GstBin that takes a set of transformations as parameters and internally sets up the best method to do this based on whatever sink is available downstream (some sinks provide cropping and other transformations).

Why not audio?

I’ve only written about video transformations here, but we can do the same with audio transformations too. For example, multi-room audio systems allow you to configure the locations of wireless speakers — so you can set which one’s on the left, and which on the right — and the speaker will automatically play the appropriate channel. Implementing this should be quite easy with the infrastructure that’s currently in place.

Merry Happy *.*

I hope you enjoyed reading that — I’ve had great responses from a lot of people about how they might be able to use this work. If there’s something you’d like to see, leave a comment or file an issue.

Happy end of the year, and all the best for 2017!

by Arun at December 28, 2016 06:01 PM

December 15, 2016

Bastien Nocera: Making your own retro keyboard

(Bastien Nocera) We're about a week before Christmas, and I'm going to explain how I created a retro keyboard as a gift to my father, who introduced me to computers when he brought back a Thomson TO7 home, all the way back in 1985.

The original idea was to use a Thomson computer to fit in a smaller computer, such as a CHIP or Raspberry Pi, but the software update support would have been difficult, the use limited to the builtin programs, and it would have required a separate screen. So I restricted myself to only making a keyboard. It was a big enough task, as we'll see.

How do keyboards work?

Loads of switches, that's how. I'll point you to Michał Trybus' blog post « How to make a keyboard - the matrix » for details on how this works. You'll just need to remember that most of the keyboards present in those older computers have no support for xKRO, and that the micro-controller we'll be using already has the necessary pull-up resistors built in.

The keyboard hardware

I chose the smallest Thomson computer available for my project, the MO5. I could have used a stand-alone keyboard, but would have lost all the charm of it (it just looks like a PC keyboard), some other computers have much bigger form factors, to include cartridge, cassette or floppy disk readers.

The DCMoto emulator's website includes tons of documentation, including technical documentation explaining the inner workings of each one of the chipsets on the mainboard. In one of those manuals, you'll find this page:



Whoot! The keyboard matrix in details, no need for us to discover it with a multimeter.

That needs a wash in soapy water

After opening up the computer, and eventually giving the internals, and the keyboard especially if it has mechanical keys, a good clean, we'll need to see how the keyboard is connected.

Finicky metal covered plastic

Those keyboards usually are membrane keyboards, with pressure pads, so we'll need to either find replacement connectors at our local electronics store, or desolder the ones on the motherboard. I chose the latter option.

Desoldered connectors

After matching the physical connectors to the rows and columns in the matrix, using a multimeter and a few key presses, we now know which connector pin corresponds to which connector on the matrix. We can start soldering.

The micro-controller

The micro-controller in my case is a Teensy 2.0, an Atmel AVR-based micro-controller with a very useful firmware that makes it very very difficult to brick. You can either press the little button on the board itself to upload new firmware, or wire it to an external momentary switch. The funny thing is that the Atmega32U4 is 16 times faster than the original CPU (yeah, we're getting old).

I chose to wire it to the "Initial. Prog" ("Reset") button on the keyboard, so as to make it easy to upload new firmware. To do this, I needed to cut a few traces coming out of the physical switch on the board, to avoid interferences from components on the board, using a tile cutter. This is completely optional, and if you're only going to use firmware that you already know at least somewhat works, you can set a key combo to go into firmware upload mode in the firmware. We'll get back to that later.

As far as connecting and soldering to the pins goes, we can use any I/O pins we want, except D6, which is connected to the board's LED. Note that if you deviate from the pinout used in your firmware, you'll need to make changes to it. We'll come back to that again in a minute.

The soldering

Colorful tinning

I wanted to keep the external ports full, so it didn't look like there were holes in the case, but there was enough headroom inside the case to fit the original board, the teensy and pins on the board. That makes it easy to rewire in case of error. You could also dremel (yes, used as a verb) a hole in the board.

As always, make sure early that things would fit, especially the cables!

The unnecessary pollution

The firmware

Fairly early on during my research, I found the TMK keyboard firmware, as well as a very well written forum post with detailed explanations on how to modify an existing firmware for your own uses.

This is what I used to modify the firmware for the gh60 keyboard for my own use. You can see here a step-by-step example, implementing the modifications in the same order as the forum post.

Once you've followed the steps, you'll need to compile the firmware. Fedora ships with the necessary packages, so it's a simple:


sudo dnf install -y avr-libc avr-binutils avr-gcc

I also compiled the teensy_cli firmware uploader, installed it in my $PATH, and fixed up the udev rules. And after a "make teensy" and a button press...

It worked first time! This is a good time to verify that all the keys work, and you don't see doubled-up letters because of short circuits in your setup. I had 2 wires touching, and one column that just didn't work.

I also prepared a stand-alone repository, with a firmware that uses the tmk_core from the tmk firmware, instead of modifying an existing one.

Some advice

This isn't the first time I've hacked on hardware, but I'll repeat some old adages and advice, because I rarely heed those warnings, and I regret...
  • Don't forget the size, length and non-flexibility of cables in your design
  • Plan ahead when you're going to cut or otherwise modify hardware, because you might regret it later
  • Use breadboard cables and pins to connect things, if you have the room
  • Don't hotglue until you've tested and retested and are sure you're not going to make more modifications
That last one explains the slightly funny cabling of my keyboard.

Finishing touches

All Sugru'ed up

To finish things off nicely, I used Sugru to stick the USB cable, out of the machine, in place. And as earlier, it will avoid having an opening onto the internals.

There are a couple more things that I'll need to finish up before delivery. First, the keymap I have chosen in the firmware only works when a US keymap is selected. I'll need to make a keymap for Linux, possibly hard-coding it. I will also need to create a Windows keymap for my father to use (yep, genealogy software on Linux isn't quite up-to-par).

Prototype and final hardware

All this will happen in the aforementioned repository. And if you ever make your own keyboard, I'm happy to merge in changes to this repository with documentation for your Speccy, C64, or Amstrad CPC hacks.

(If somebody wants to buy me a Sega keyboard, I'll gladly work on a non-destructive adapter. Get in touch :)

by Bastien Nocera (noreply@blogger.com) at December 15, 2016 04:48 PM

November 30, 2016

GStreamer: GStreamer 1.10.2 stable release (binaries)

(GStreamer)

Pre-built binary images of the 1.10.2 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

See /releases/1.10/ for the full list of changes.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

November 30, 2016 05:45 PM

November 29, 2016

GStreamer: GStreamer 1.10.2 stable release

(GStreamer)

The GStreamer team is pleased to announce the second bugfix release in the stable 1.10 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.10.0. For a full list of bugfixes see Bugzilla.

See /releases/1.10/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

November 29, 2016 02:00 PM

GStreamer: Revamped documentation and gstreamer.com switch-off

(GStreamer)

The GStreamer project is pleased to announce its new revamped documentation featuring a new design, a new navigation bar, search functionality, source code syntax highlighting as well as new tutorials and documentation about how to use GStreamer on Android, iOS, macOS and Windows.

It now contains the former gstreamer.com SDK tutorials which have kindly been made available by Fluendo & Collabora under a Creative Commons license. The tutorials have been reviewed and updated for GStreamer 1.x. The old gstreamer.com site will be shut down with redirects pointing to the updated tutorials and the official GStreamer website.

Thanks to everyone who helped make this happen.

This is just the beginning. Our goal is to provide a more cohesive documentation experience for our users going forward. To that end, we have converted most of our documentation into markdown format. This should hopefully make it easier for developers and contributors to create new documentation, and to maintain the existing one. There is a lot more work to do, do get in touch if you want to help out. The documentation is maintained in the new gst-docs module.

If you encounter any problems or spot any omissions or outdated content in the new documentation, please file a bug in bugzilla to let us know.

November 29, 2016 12:00 PM

November 24, 2016

Sebastian Dröge: Writing GStreamer Elements in Rust (Part 3): Parsing data from untrusted sources like it’s 2016

(Sebastian Dröge)

This is part 3, the older parts can be found here: part 1 and part 2

And again it took quite a while to write a new update about my experiments with writing GStreamer elements in Rust. The previous articles can be found here and here. Since last time, there was also the GStreamer Conference 2016 in Berlin, where I had a short presentation about this.

Progress was rather slow unfortunately, due to work and other things getting into the way. Let’s hope this improves. Anyway!

There will be three parts again, and especially the last one would be something where I could use some suggestions from more experienced Rust developers about how to solve state handling / state machines in a nicer way. The first part will be about parsing data in general, especially from untrusted sources. The second part will be about my experimental and current proof of concept FLV demuxer.

Parsing Data

Safety?

First of all, you probably all saw a couple of CVEs about security relevant bugs in (rather uncommon) GStreamer elements going around. While all of them would’ve been prevented by having the code written in Rust (due to by-default array bounds checking), that’s not going to be our topic here. They also would’ve been prevented by using various GStreamer helper API, like GstByteReader, GstByteWriter and GstBitReader. So just use those, really. Especially in new code (which is exactly the problem with the code affected by the CVEs, it was old and forgotten). Don’t do an accountant’s job, counting how much money/many bytes you have left to read.

But yes, this is something where Rust will also provide an advantage by having by-default safety features. It’s not going to solve all our problems, but at least some classes of problems. And sure, you can write safe C code if you’re careful but I’m sure you also drive with a seatbelt although you can drive safely. To quote Federico about his motivation for rewriting (parts of) librsvg in Rust:

Every once in a while someone discovers a bug in librsvg that makes it all the way to a CVE security advisory, and it’s all due to using C. We’ve gotten double free()s, wrong casts, and out-of-bounds memory accesses. Recently someone did fuzz-testing with some really pathological SVGs, and found interesting explosions in the library. That’s the kind of 1970s bullshit that Rust prevents.

You can directly replace the word librsvg with GStreamer here.

Ergonomics

The other aspect of parsing data is that it's usually a very boring part of programming. It should be as painless as possible, as easy as possible to do in a safe way, and after having written your 100th parser by hand you probably don't want to do that again. Parser combinator libraries like Parsec in Haskell provide a nice alternative. You essentially write down something very close to a formal grammar of the format you want to parse, and out of this comes a parser for the format. Unlike parser generators like good old yacc, everything is written in the target language though, and there is no separate code generation step.

Rust, being quite a bit more expressive than C, also made people write parser combinator libraries. They are all not as ergonomic (yet?) as in Haskell, but still a big improvement over anything else. There's nom, combine and chomp, each with a slightly different approach. Choose your favorite. I decided on nom for the time being.

A FLV Demuxer in Rust

For implementing a demuxer, I decided on using the FLV container format. Mostly because it is super-simple compared to e.g. MP4 and WebM, but also because Geoffroy, the author of nom, wrote a simple header parsing library for it already and a prototype demuxer using it for VLC. I’ll have to extend that library for various features in the near future though, if the demuxer should ever become feature-equivalent with the existing one in GStreamer.
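To give a feel for what nom-style parsing looks like, here is a rough, hypothetical sketch of an FLV header parser written with nom's macros. This is not the code from the flavors crate or from the demuxer, and the exact macros and import paths depend on the nom version you use.

#[macro_use]
extern crate nom;

use nom::{be_u8, be_u32};

#[derive(Debug)]
pub struct FlvHeader {
    pub version: u8,
    pub has_audio: bool,
    pub has_video: bool,
    pub data_offset: u32,
}

// "FLV", a version byte, a flags byte and a big-endian 32-bit data offset,
// as per the FLV specification.
named!(pub flv_header<FlvHeader>,
    do_parse!(
        tag!("FLV") >>
        version: be_u8 >>
        flags: be_u8 >>
        data_offset: be_u32 >>
        (FlvHeader {
            version: version,
            has_audio: flags & 0x04 != 0,
            has_video: flags & 0x01 != 0,
            data_offset: data_offset,
        })
    )
);

fn main() {
    // A 9-byte FLV header: audio and video present, data starts at byte 9.
    let data = b"FLV\x01\x05\x00\x00\x00\x09";
    println!("{:?}", flv_header(&data[..]));
}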

As usual, the code can be found here, in the “demuxer” branch. The most relevant files are rsdemuxer.rs and flvdemux.rs.

Following the style of the sources and sinks, the first is some kind of base class / trait for writing arbitrary demuxers in Rust. It’s rather unfinished at this point though, just enough to get something running. All the FLV specific code is in the second file, and it’s also very minimal for now. All it can do is to play one specific file (or hopefully all other files with the same audio/video codec combination).

As part of all this, I also wrote bindings for GStreamer's buffer abstraction and a Rust rewrite of the GstAdapter helper type. Both showed Rust's strengths quite well: the buffer bindings by being able to express various concepts of the buffers in a compiler-checked, safe way in Rust (e.g. ownership, readability/writability), the adapter implementation by being so much shorter (it's missing features… but still).

So here we are, this can already play one specific file (at least) in any GStreamer based playback application. But some further work is necessary, for which I hopefully have some time in the near future. Various important features are still missing (e.g. other codecs, metadata extraction and seeking), the code is rather proof-of-concept style (stringly-typed media formats, lots of unimplemented!() and .unwrap() calls). But it shows that writing media handling elements in Rust is definitely feasible, and generally seems like a good idea.

If only we had Rust already when all this media handling code in GStreamer was written!

State Handling

Another reason why all this took a bit longer than expected, is that I experimented a bit with expressing the state of the demuxer in a more clever way than what we usually do in C. If you take a look at the GstFlvDemux struct definition in C, it contains about 100 lines of field declarations. Most of them are only valid / useful in specific states that the demuxer is in. Doing the same in Rust would of course also be possible (and rather straightforward), but I wanted to try to do something better, especially by making invalid states unrepresentable.

Rust has this great concept of enums, also known as tagged unions or sum types in other languages. These are not to be confused with C enums or unions, but instead allow multiple variants (like C enums) with fields of various types (like C unions). But all of that in a type-safe way. This seems like the perfect tool for representing complicated state and building a state machine around it.

So much for the theory. Unfortunately, I’m not too happy with the current state of things. It is like this mostly because of Rust’s ownership system getting into my way (rightfully, how would it have known additional constraints I didn’t know how to express?).

Common Parts

The first problem I ran into, was that many of the states have common fields, e.g.

enum State {
    ...
    NeedHeader,
    HaveHeader {header: Header, to_skip: usize },
    Streaming {header: Header, audio: ... },
    ...
}

When writing code that matches on this, and that tries to move from one state to another, these common fields would have to be moved. But unfortunately they are (usually) borrowed by the code already and thus can’t be moved to the new variant. E.g. the following fails to compile

match self.state {
        ...
        State::HaveHeader {header, to_skip: 0 } => {
            
            self.state = State::Streaming {header: header, ...};
        },
    }

A Tree of States

Repeating the common parts is not nice anyway, so I went with a different solution by creating a tree of states:

enum State {
    ...
    NeedHeader,
    HaveHeader {header: Header, have_header_state: HaveHeaderState },
    ...
}

enum HaveHeaderState {
    Skipping {to_skip: usize },
    Streaming {audio: ... },
}

Apart from making it difficult to find names for all of these, and having relatively deeply nested code, this works

match self.state {
        ...
        State::HaveHeader { ref header, ref mut have_header_state } => {
            match *have_header_state {
                HaveHeaderState::Skipping { to_skip: 0 } => {
                    *have_header_state = HaveHeaderState::Streaming { audio: ... };
                },
                ...
            }
        },
    }

If you look at the code however, this causes the code to be much bigger than needed, and I'm also not sure yet how it would be possible to nicely move "backwards" one state if that situation ever appears. Also there is still the previous problem, although less often: if I were to match on to_skip here by reference (or if it were not a Copy type), the compiler would prevent me from overwriting have_header_state for the same reasons as before.

So my question for the end: How are others solving this problem? How do you express your states and write the functions around them to modify the states?

Update

I actually implemented the state handling as a State -> State function before (and forgot about that), which seems conceptually the right thing to do. It however has a couple of other problems. Thanks for the suggestions so far, it seems like I’m not alone with this problem at least.
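For readers wondering what the State -> State style can look like without fighting the borrow checker, here is a minimal, self-contained sketch (not the actual demuxer code; the types are stripped down to the bare minimum). The trick is std::mem::replace, which moves the current state out of &mut self by leaving a cheap placeholder behind, so the match can consume the old state by value and move its fields into the next variant.

#![allow(dead_code)] // silence warnings about fields this sketch never reads

use std::mem;

struct Header {
    version: u8,
}

enum State {
    NeedHeader,
    HaveHeader { header: Header, to_skip: usize },
    Streaming { header: Header },
}

struct Demuxer {
    state: State,
}

impl Demuxer {
    fn advance(&mut self) {
        // Park a cheap placeholder in self.state so we own the old value
        // and are free to move its fields into the next variant.
        let old = mem::replace(&mut self.state, State::NeedHeader);
        self.state = match old {
            State::HaveHeader { header, to_skip: 0 } => State::Streaming { header },
            other => other, // nothing to do for the remaining states
        };
    }
}

fn main() {
    let mut demuxer = Demuxer {
        state: State::HaveHeader { header: Header { version: 1 }, to_skip: 0 },
    };
    demuxer.advance(); // we are now in State::Streaming, with the header moved along
}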

Update 2

I’ve gone a bit closer to the C-style struct definition now, as it makes the code less convoluted and allows me to just move forward with the code. The current status can be seen here, and also involves further refactoring (and e.g. some support for metadata).

by slomo at November 24, 2016 11:10 PM

November 22, 2016

GStreamerGStreamer 1.10.1 stable release (binaries)

(GStreamer)

Pre-built binary images of the 1.10.1 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

See /releases/1.10/ for the full list of changes.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

November 22, 2016 12:10 PM

November 19, 2016

Víctor JáquezGStreamer-VAAPI 1.10 (now 1.10.1)

A lot of things happened last month and I have neglected this report. Let me try to fix that.

First, the autumn GStreamer Hackfest and the GStreamer Conference 2016 in Berlin.

The hackfest’s venue was the famous C-Base, where we talked with friends and colleagues. Most of the discussions leaned towards the 1.10 release. Still, Julien Isorce and I talked about our approaches for DMABuf sharing with downstream; we agreed to explore how to implement an allocator based on GstDmaBufAllocator. We talked with Josep Torra about the performance he was getting with the gstreamer-vaapi encoders (and we have already made good progress on this). We also talked with Nicolas Dufresne about v4l2 and kmssink, and had many other informal chats along with beer and pizza.

GStreamer Hackfest at the C-Base (from @calvaris tweet)

Afterwards, in the GStreamer Conference, I talked a bit about the current status of GStreamer-VAAPI and what we can expect from release 1.10. Here’s the video that the folks from Ubicast recorded. You can see the slides here too.

I spent most of the conference with Scott, Sree and Josep tracing the bottlenecks in the frame uploading to the VA Intel driver, and trying to answer the questions of the folks who kindly approached me. I hope my answers were of some help.

And finally, on the first of November, release 1.10 hit the streets!

As already mentioned in the release notes, these are the main new features you can find in GStreamer-VAAPI 1.10:

  • All decoders have been split, one plugin feature per codec. So far the available ones, depending on the driver, are: vaapimpeg2dec, vaapih264dec, vaapih265dec, vaapivc1dec, vaapivp8dec, vaapivp9dec and vaapijpegdec (which was already split).
  • Improvements when mapping VA surfaces into memory, differentiating between negotiation caps and allocation caps, since the memory allocated for surfaces may be bigger than the memory that is actually going to be mapped.
  • vaapih265enc got support for constant bitrate mode (CBR).
  • Since several VA drivers are unmaintained, we decided to keep a whitelist of the VA drivers we actually test, which is mostly i915 and, to some degree, gallium from the Mesa project. Exporting the environment variable GST_VAAPI_ALL_DRIVERS disables the whitelist.
  • The plugin features are registered at run-time, according to whether the loaded VA driver supports them. So only the decoders and encoders supported by the system are registered. Since the driver can change, some dependencies are tracked to invalidate the GStreamer registry and reload the plugin.
  • DMA-Buf import from upstream has been improved, gaining performance.
  • vaapipostproc can now negotiate the buffer transformations through caps.
  • Decoders can now do reverse playback! But they only show I-frames, because the surface pool is smaller than what the GOP requires to show all the frames.
  • The upload of frames onto native GL textures has been optimized too, keeping a cache of the internal structures for the textures offered by the sink, which improves performance a lot with glimagesink.

This is the short log summary since branch 1.8:

  1  Allen Zhang
 19  Hyunjun Ko
  1  Jagyum Koo
  2  Jan Schmidt
  7  Julien Isorce
  1  Matthew Waters
  1  Michael Olbrich
  1  Nicolas Dufresne
 13  Scott D Phillips
  9  Sebastian Dröge
 30  Sreerenj Balachandran
  1  Stefan Sauer
  1  Stephen
  1  Thiago Santos
  1  Tim-Philipp Müller
  1  Vineeth TM
154  Víctor Manuel Jáquez Leal

Thanks to Igalia, Intel Open Source Technology Center and many others for making GStreamer-VAAPI possible.

by vjaquez at November 19, 2016 11:31 AM

November 17, 2016

GStreamerGStreamer 1.10.1 stable release

(GStreamer)

The GStreamer team is pleased to announce the first bugfix release in the stable 1.10 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.10.0. For a full list of bugfixes see Bugzilla.

See /releases/1.10/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

November 17, 2016 03:00 PM

Jean-François Fortin TamHelping Purism structure its messaging

I meant to finish writing and posting this a month or two ago, but urgent tasks and life kept getting in the way. I don’t often talk about client work here, but since this is public-facing ongoing work for a company that is insanely pro-Free-Software (not just “open source”), a company that ships GNOME3 by default on their laptops (something I have awaited for years), I guess it makes sense to talk about what I’ve been up to recently.

So, for three months now, I have been helping Purism structure its messaging and get its business in better shape. Purism is, in itself, a hugely interesting endeavour. Heck, I could go out on a limb and say this venture, alongside the work Endless is doing, is quite possibly one of the most exciting things that has happened to the Free desktop in the past decade, and yet almost nobody has heard of it.


Before I could even consider visual branding work (maybe someday; when I get to that point, it will mean things are going really well), there was a fundamental need to fill various gaps in the strategy and daily operations, and to address messaging in a way that simultaneously resonates with:

  • hardcore Free Software enthusiasts;
  • “Linux” (GNU/Linux) users and developers just looking for great ultraportable workhorses;
  • the privacy/security-conscious crowd;
  • the public at large (hopefully).

Purism’s website and previous campaigns were full of relevant content, but it was disjointed, sometimes outdated, and sometimes a bit imprecise, confusing, or just plain boring. You really had to be abnormally motivated to dig deep and digest a ton of fairly arid content. I set out to fix all that, so I looked at every single page on the website and external campaigns to rearchitect everything back into a solid foundation, something insightful and pleasant to read.

There was so much buried and scattered information to analyze, I couldn’t have done this with computer screens, no matter how large. I decided to print out a summary for easy annotation. Even with super compacted text (small fonts, tight paragraphs, fewer images, etc.) and redundant text passages removed, my “content review summary” document of the core content was still a whopping 28 pages in “US legal” size (8.5×14 inches). It easily occupies all my office’s desk space:


Serious business.

Having no time to waste, and having made a very clear list of issues to fix, I applied my proposed changes directly after completing the semantic and structural analysis.


The taxonomy and ontology in the blog section were all wrong, too. So I rethought and reorganized all the tags and categories to create a reliable system for visitors to browse through. For example, I made the tags go from a noisy mess (pictured on the left) to something much more meaningful (on the right):

before / after

The website overhaul led to the new “Why Purism?” section, the new Global Shipping Status page, a much nicer stylesheet (my eyes were bleeding trying to read anything on the website and its blog), URLs that actually make sense, and a rethought product discovery and purchase navigation flow. Some more details on the overall changes can be found here. Hopefully, the current state of the website is much more enticing now. It may not be perfect (feedback welcome) but it’s a good start.

I also paid attention to a bunch of papercuts when it comes to interacting with the community. For example, blog posts lacked a relatable face, so I added avatars for everybody. Forums had an absolutely horrible mail notification system, so I fixed that. People were unsure about warranties, delivery dates, etc., so I made sure to help clarify those as best as possible throughout the website. That sort of thing.

The next big steps, in my view:

  • Helping ensure the business’ viability, beyond just “survival”. Running a “normal” hardware manufacturing business is already fairly complicated from the get-go, so making an independent, “organically grown” one profitable is much harder. Purism is at a turning point right now, and the coming months will be a real test of the feasibility of reconciling Free Software with modern hardware. Most of us have been waiting so long for this that failure is not an option.
  • Improving the customer experience.
  • Improving the product design.

By the way, I have been testing out a “Librem 15” laptop extensively for about two months, and I am hoping to publish a very detailed review and feedback video soon (I don’t do superficial/half-assed reviews, hence the delay). Stay tuned!

The review unit I have transplanted my (ThinkPad’s) testing drive into for further testing (this is why you can see the dash to dock extension active)

by nekohayo at November 17, 2016 05:52 AM

November 15, 2016

Bastien NoceraLyon GNOME Bug day #1

(Bastien Nocera) Last Friday, both a GNOME bug day and a bank holiday, a few of us got together to squash some bugs, and discuss GNOME and GNOME technologies.

Guillaume, a newcomer in our group, tested the captive portal support for NetworkManager and GNOME in Gentoo, and added instructions on how to enable it to their Wiki. He also tested a gateway-related configuration problem, the patch for which I merged after a code review. Near the end of the session, he also rebuilt WebKitGTK+ to test why Google Docs was not working for him anymore in Web. And nobody believed that he could build it that quickly. Looks like opinions based on past experiences are quite hard to change.

Mathieu worked on removing jhbuild's .desktop file as nobody seems to use it, and it was creating the Sundry category for him, in gnome-shell. He also spent time looking into the tracker blocker that is Mozilla's Focus, based on disconnectme's block lists. It's not as effective as uBlock when it comes to blocking adverts, but the memory and performance improvements, and the slow churn rate, could make it a good default blocker to have in Web.

Haïkel looked into using Emeus, potentially the new GTK+ 4.0 layout manager, to implement the series properties page for Videos.

Finally, I added Bolso to jhbuild, and struggled to get gnome-online-accounts/gnome-keyring to behave correctly in my installation, as the application just did not want to log in properly to the service. I also discussed Fedora's privacy policy (inappropriate for Fedora Workstation, as it doesn't cover the services used in the default installation), a potential design for Flatpak support of joypads and removable devices in general, as well as the future design of the Network panel.

by Bastien Nocera (noreply@blogger.com) at November 15, 2016 09:48 AM

November 10, 2016

Christian SchallerMp3 support now coming to Fedora Workstation 25

(Christian Schaller)

So, in Fedora Workstation 24 we added H264 support through OpenH264. In Fedora Workstation 25 I am happy to tell you all that we are taking another step in improving our codec support by adding support for mp3 playback. I know this has been a big wishlist item for a long time for a lot of people so I am really happy that we are finally in a position to fulfil that wish. You should be able to download the mp3 plugin on day 1 through GNOME Software or through the missing codec installer in various GStreamer applications. For Fedora Workstation 26 I would not be surprised if we decide to ship it on the install media.

For the technically inclined out there, our initial enablement is through the mpg123 library and the corresponding GStreamer plugin. The main reason we chose this library over all the others available out there was a combination of it using the same license as GStreamer (LGPL v2) and being a well-established library already used by a lot of different applications. There might be other mp3 decoders added in the future depending on interest in, and effort by, the community. So get ready to install Fedora Workstation 25 when it’s released soon and play some tunes :)

P.S. To be 110% clear, we will not be adding encoding support at this time.

by uraeus at November 10, 2016 07:34 PM

November 04, 2016

Arun RaghavanGStreamer and Synchronisation Made Easy

A lesser-known but particularly powerful feature of GStreamer is our ability to play media synchronised across devices with fairly good accuracy.

The way things stand right now, though, achieving this requires some amount of fiddling and a reasonably thorough knowledge of how GStreamer’s synchronisation mechanisms work. While we have had some excellent talks about these at previous GStreamer conferences, getting things to work is still a fair amount of effort for someone not well-versed with GStreamer.

As part of my work with the Samsung OSG, I’ve been working on addressing this problem, by wrapping all the complexity in a library. The intention is that anybody who wants to implement the ability for different devices on a network to play the same stream and have them all synchronised should be able to do so with a few lines of code, and the basic know-how for writing GStreamer-based applications.

I’ve started work on this already, and you can find the code in the creatively named gst-sync-server repo.

Design and API

Let’s make this easier by starting with a picture …

Let’s say you’re writing a simple application where you have two or more devices that need to play the same video stream in sync. Your system would consist of two entities:

  • A server: this is where you configure what needs to be played. It instantiates a GstSyncServer object on which it can set a URI that needs to be played. There are other controls available here that I’ll get to in a moment.

  • A client: each device would be running a copy of the client, and would get information from the server telling it what to play, and what clock to use to make sure playback is synchronised. In practical terms, you do this by creating a GstSyncClient object, and giving it a playbin element which you’ve configured appropriately (this usually involves at least setting the appropriate video sink that integrates with your UI).

That’s pretty much it. Your application instantiates these two objects, starts them up, and as long as the clients can access the media URI, you magically have two synchronised streams on your devices.

Control

The keen observers among you would have noticed that there is a control entity in the above diagram that deals with communicating information from the server to clients over the network. While I have currently implemented a simple TCP protocol for this, my goal is to abstract out the control transport interface so that it is easy to drop in a custom transport (Websockets, a REST API, whatever).

The actual sync information is merely a structure marshalled into a JSON string and sent to clients every time something happens. Once your application has some media playing, the next thing you’ll want to do from your server is control playback. This can include:

  • Changing what media is playing (like after the current media ends)
  • Pausing/resuming the media
  • Seeking
  • “Trick modes” such as fast forward or reverse playback

The first two of these work already, and seeking is on my short-term to-do list. Trick modes, as the name suggests, can be a bit more tricky, so I’ll likely get to them after other things are done.

Getting fancy

My hope is to see this library being used in a few other interesting use cases:

  • Video walls: having a number of displays stacked together so you have one giant display — these are all effectively playing different rectangles from the same video

  • Multiroom audio: you can play the same music across different speakers in a single room, or multiple rooms, or even group sets of speakers and play different media on different groups

  • Media sharing: being able to play music or videos on your phone and have your friends be able to listen/watch at the same time (a silent disco app?)

What next

At this point, the outline of what I think the API should look like is done. I still need to create the transport abstraction, but that’s pretty much a matter of extracting out the properties and signals that are part of the existing TCP transport.

What I would like is to hear from you, my dear readers who are interested in using this library — does the API look like it would work for you? Does the transport mechanism I describe above cover what you might need? There is example code that should make it easier to understand how this library is meant to be used.

Depending on the feedback I get, my next steps will be to implement the transport interface, refine the API a bit, fix a bunch of FIXMEs, and then see if this is something we can include in gst-plugins-bad.

Feel free to comment either on the Github repository, on this blog, or via email.

And don’t forget to watch this space for some videos and measurements of how GStreamer synchronisation fares in real life!

by Arun at November 04, 2016 10:16 AM

November 02, 2016

GStreamerGStreamer 1.10.0 stable release (binaries)

(GStreamer)

Pre-built binary images of the 1.10.0 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

See /releases/1.10/ for the full list of changes.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

November 02, 2016 04:10 PM

November 01, 2016

GStreamerGStreamer 1.10.0 stable release

(GStreamer)

The GStreamer team is proud to announce a new major feature release in the stable 1.x API series of your favourite cross-platform multimedia framework!

As always, this release is again packed with new features, bug fixes and other improvements.

See /releases/1.10/ for the full list of changes.

Binaries for Android, iOS, Mac OS X and Windows will be provided shortly after the source release by the GStreamer project during the stable 1.10 release series.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

November 01, 2016 04:30 PM

Christian SchallerHybrid Graphics and Fedora Workstation 25

(Christian Schaller)

When we started the Fedora Workstation effort, one thing we wanted to do was to drain the proverbial swamp and make sure that running Linux on a laptop is a first-rate experience. As you can see from my last blog entry, we have been building a team dedicated to that task. There are many facets to this effort, but one that we kept getting asked about was sorting out hybrid graphics. Be aware that some of this has been covered in previous blog entries, but I want to be thorough here. This blog entry will cover the big investments of time and effort we are putting into Wayland and X Windows, GNOME Shell and Nouveau (the open source driver for NVidia GPU hardware).

A big part of this story is also that we are trying to make it easier to run the NVidia binary driver on Fedora in general, which is actually a fairly large undertaking as there are a lot of challenges in doing that without requiring any manual configuration from the user, and in making sure it works fully in the hybrid graphics use case. This is one of the most requested areas of improvement for Fedora. Many users are trying to run Fedora on their existing laptops or other hardware. Rather than allow this gap to be a reason for people not to run Fedora, we want to provide a better experience. Of course users will have the freedom to make their own choice about installing these drivers, using information provided by Software.

Hybrid graphics is the term used for when you have a laptop with two GPUs, usually one Intel and one NVidia GPU, but there are also some laptops available which come with an Intel/AMD CPU plus an AMD GPU. The purpose of hybrid graphics is to have your integrated (Intel) GPU be your ‘day to day’ GPU for running your desktop without drawing too much power, but then if you want to, for instance, play a game on your system, you can activate your secondary GPU, which has a lot more power but also draws more electricity.

What complicated this even more was the fact that most users who wanted to use this setup wanted to use it in combination with the binary NVidia driver, in order to get top performance from their second GPU, which is not surprising since that is the whole point of switching to it. The main challenge with this was that Mesa and the binary NVidia driver both provided an OpenGL implementation that, due to the way things work in the X Window System, ended up overwriting the other, basically breaking the overwritten driver. The workaround for this was a system called Bumblebee, which employed some clever hacks to work around the issue, but Bumblebee is a solution to a problem we shouldn’t have had to begin with.

Of course, dealing with the OpenGL stack wasn’t the only challenge here. For instance, different display outputs ended up being connected to different GPUs, with one of the most common setups being that the internal screen is connected to your Intel GPU and your external HDMI and DisplayPort connections are connected to your NVidia or AMD GPU. On some systems there were hardware connectors allowing you to drive any screen from either GPU, but a setup we see becoming more common is that the drivers for both cards need to be initialized in all cases, allowing the display-connected GPU to slave itself to the other GPU, so you can render on one GPU and display on the other. Within Mesa this is fairly straightforward to handle, so if both the Intel and NVidia cards are rendering using Mesa, things are fairly straightforward. What makes this a lot more challenging is the NVidia binary driver: once you install the binary driver, we need to find a way to bridge and interoperate between these two stacks.

And of course that is just the highlights; as with anything complex like this, there is a long laundry list of items between the point where you can check the box of having a feature and the point where it works really well and seamlessly.

What to expect in Fedora Workstation 25
Let’s start with a word of caution: we are working on a tight deadline to get all the pieces lined up, so I cannot be 100% sure what will make it for the day of release. What we are confident about though is that we will have all the low-level plumbing sorted so that we can improve this over the course of the Fedora 25 lifecycle. As always, I hope the community will join us in testing and developing this, to ensure we have even the corner cases worked out for Fedora Workstation 26.

The initial step we took, and this was quite some time ago, was that Dave Airlie worked on making sure we could handle two GPUs within the Mesa stack. As a bit of wordplay on the NVidia solution being called Optimus, the open source Mesa solution was named Prime. This work basically allowed you to use Mesa for your Intel card and Mesa (Nouveau) for your NVidia card, and to launch applications on the NVidia card while the screen was connected to the Intel card. You can choose which one is used by setting the DRI_PRIME=1 environment variable.

The second step was Adam Jackson collaborating with NVidia on something called libglvnd. Libglvnd stands for GL Vendor Neutral Dispatch, and it is basically a dispatch layer that allows your OpenGL calls to be dispatched to more than one OpenGL implementation. NVidia created this specification and has been steadily working on supporting it in their binary driver, and Adam Jackson stepped up to help review their patches and work on supporting glvnd in the Mesa drivers. This means that in Fedora Workstation 25 you should, for the first time ever, be able to have both the binary NVidia driver and Mesa installed without any file conflicts. So the X server now has the infrastructure to route your OpenGL calls to the correct stack when the DRI_PRIME variable is set. That said, there is a major catch here: due to the way things currently work, once the NVidia binary driver is installed it expects to be rendering the screen all the time. So in the short term we need to figure out a way to allow it to do that, and in the long run we need to work with NVidia to figure out how the Intel open source driver can collaborate with the NVidia driver to allow us to only use the Intel driver at times (to save power, for instance). We are still pushing hard to have the short-term fix in place, but as I write this we haven’t fully nailed it down yet.

The third step is work that Ben Skeggs has been doing on dealing with the monitor handling here, which includes adding MST support to Nouveau, because a lot of external ports and docking stations have not been working with these hybrid graphics setups due to the external screens all being routed through the NVidia chip. These patches just got accepted upstream and we will be including them in Fedora Workstation 25.

The fourth step has been work that Hans de Goede has been doing around Prime, fixing the modesetting driver and fixing cursor handling with hybrid graphics. In some sense the work Hans has been doing (check his linked blog entry) is part of that laundry list I talked about: the many smaller items that need to work smoothly for this to go from ‘checkbox works’ to ‘actually works’. He is also working with the DNF team to allow us to control kernel updates if you use the binary NVidia driver, meaning that we will only bump your kernel when we have the corresponding binary NVidia driver module ready too. If for any reason the binary NVidia driver doesn’t work, we want a graceful fallback to Nouveau.

The fifth step has been Jonas Ådahl’s work on enabling the binary NVidia driver for Wayland. He has put together a set of patches to support NVidia’s EGLStreams interface, which means that starting from Fedora Workstation 25 you will be able to use Wayland also with NVidia’s binary driver.
You can see his work-in-progress patches here. Jonas will also be looking at implementing hybrid graphics support in Wayland, so that Wayland is on par with X for this use case too in Fedora Workstation 26.

The sixth step has been work done by Bastien Nocera to ensure we expose this functionality in the user interface. The idea is that you should be able to configure, on a per-application basis, which GPU an application is launched on. It also means that you can now get all your GPU information in the GNOME About screen even on dual-GPU systems. More details in his blog post here.
Choosing the discrete GPU in GNOME Shell

The seventh step is the work that Simone Caronni from Negativo17 has been doing with us on packaging the binary NVidia driver in a way that makes all this work. You can find his repo here on Negativo17, and Simone has also been working with Kalev Lember and Richard Hughes to ensure the driver shows up correctly in GNOME Software once you have the repository enabled.
NVidia driver in GNOME Software
The plan is to offer the driver in Fedora Workstation 25 as third party software, but we haven’t yet made the formal proposal due to wanting to be sure we ironed out all important bugs first, both on our side and on NVidia side.

So as you can see from all of this, there are a lot of components involved, and since we are trying to support both X and Wayland the job hasn’t exactly been easy. There are for sure some things that will not be ready in time for Fedora Workstation 25; for instance, if you want to use the binary NVidia driver, we will not be able to make that work with XWayland. So native Wayland applications will be fine, including games using SDL on top of Wayland, but due to how the stack is architected we haven’t gotten around to bridging OpenGL from XWayland down to the binary NVidia driver. With Nouveau it should work, however. This corner case we will have to figure out for Fedora Workstation 26, but for now we have decided that if we detect an Optimus system we will default you to X instead of Wayland.

Get involved
As always, we would love for more people to join us in this effort, and one good way to get started is to join our Hybrid graphics test day on Thursday the 3rd of November!

by uraeus at November 01, 2016 02:05 PM

October 31, 2016

Bastien NoceraFlatpak cross-compilation support: Epilogue

(Bastien Nocera) You might remember my attempts at getting an easy-to-use cross-compilation setup for ARM applications on my x86-64 desktop machine.

With Fedora 25 approaching, I'm happy to say that the necessary changes to integrate the feature have now rolled into Fedora 25.

For example, to compile the GNU Hello Flatpak for ARM, you would run:

$ flatpak install gnome org.freedesktop.Platform/arm org.freedesktop.Sdk/arm
Installing: org.freedesktop.Platform/arm/1.4 from gnome
[...]
$ sudo dnf install -y qemu-user-static
[...]
$ TARGET=arm ./build.sh

For other applications, add the --arch=arm argument to the flatpak-builder command-line.

This example also works for 64-bit ARM with the architecture name aarch64.

by Bastien Nocera (noreply@blogger.com) at October 31, 2016 12:00 PM

October 28, 2016

Reynaldo VerdejoWayland uninstalled. The easy way


This article is an adaptation of another one first posted on the Samsung Open Source Group blog.

I recently started looking at some GStreamer & Wayland integration issues and, as everyone would, commenced by trying to set up a Wayland development environment.

Before getting my feet wet though, I decided to have a chat about this with Derek Forman, OSG's resident Wayland expert. This isn't surprising because on our team, pretty much every task starts by having a conversation with one of our field specialists. The idea is to save time, as you might have guessed.

This time around I was looking for a fairly trivial piece of info:

Me - "Hey Derek, I have Wayland installed on my distro for some reason - I don't really want to take a look at now - and I would like to setup an upstream Wayland environment without messing it up. Do you have some script like GStreamer's gst-uninstalled so I can perform this feat without messing with my local install?

Derek - "Hey Reynaldo, No."

He said something else I forgot already. Guess he was ranting as usual. Anyhow, the conversation continued with me taking care of the usual I-forgot-to-ask-how-you-were-doings, and later, I'm not sure whose idea it was, but we decided to try upstreaming a wl_uninstalled script --the name had to match existing infrastructure, but that's a boring story you don't want to know about-- that, like gst-uninstalled, would enable developers to quickly set up and use a build/run-time environment consisting of an uninstalled set of interdependent Wayland repositories. The idea is not new, but it's still extremely useful. Sure, you can perform similar feats by littering your local disk with custom-prefix installs of pretty much anything, but hey, that is bad taste, OK? And you will waste a lot of time, at the very least once.

As if the foreword wasn't boring enough, and to continue trying your resilience, here are the instructions for using the thing we came up with:

First the What

Essentially, wl_uninstalled is a helper script that automates the setup needed to build and work with an uninstalled Wayland/Weston environment comprising at least the wayland, wayland-protocols, libinput and weston repositories. It provides a shell environment where all build and run-time dependencies are resolved, so the uninstalled versions of these modules take precedence.

Then the How

I'll use Weston as an example although other Wayland-based projects should work as well.
Edit a local copy of the script so that $WLD points to the base directory where the repositories are located; make sure to use an absolute path. Then, after executing the script, issue the following commands to get everything built and Weston running from the uninstalled environment:
cd $WLD
for i in wayland wayland-protocols libinput weston; do cd $i && ./autogen.sh && make && cd ..; done
weston &

This work is now upstream; you can see the discussions by searching for 'uninstalled' in the Wayland mailing list archives. As an added bonus, it turned out that fiddling with this uninstalled-environment mumbo-jumbo led to uncovering a few additional broken pieces of buildsystem/pkg-config cruft that we fixed too. 'Cause we are good like that, see ;). Not everything has been merged though; a word on that at the end.

Back to my own use case --GStreamer / Wayland integration work-- now I can simply call the wl_uninstalled script before gst-uninstalled and I'm all set. My uninstalled GStreamer environment will use my (in turn) uninstalled Wayland/Weston environment, making debugging, fixing and integrating against the upstream versions of both projects a breeze --not really, but still-- and my filesystem more FHS-compliant. But then again, who cares about that; you probably just want all this to be quick and painless. Hopeless.

Note:

While all patches for this are on Wayland's public mailing list, some have not been merged yet. If you want to try the procedure described here, you will need to manually apply the missing ones until they find their way into the official repository.
The actual script is in the wayland-build-tools repository now.

by reynaldo (noreply@blogger.com) at October 28, 2016 05:14 PM