Deep Skin Detection on Low Resolution Grayscale Images
Paracchini, Marco; Marcon, Marco; Villa, Federica; Tubaro, Stefano
2020-01-01
Abstract
In this work we present a facial skin detection method, based on a deep learning architecture, that is able to precisely associate a skin label with each pixel of a given image depicting a face. This is an important preliminary step in many applications, such as remote photoplethysmography (rPPG), in which the heart rate of a subject needs to be estimated by analyzing a video of his or her face. The proposed method can detect skin pixels even in low resolution grayscale face images (64 × 32 pixels). A dataset is also described and proposed in order to train the deep learning model. Given the small amount of available data, a transfer learning approach is adopted and validated in order to learn to solve the skin detection problem by exploiting a colorization network. Qualitative and quantitative results are reported by testing the method on different datasets, in the presence of general illumination conditions, facial expressions and object occlusions, and the method works regardless of the gender, age and ethnicity of the subject.
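As a rough illustration only, and not the architecture described in the paper, the sketch below shows how a small PyTorch encoder-decoder could produce a per-pixel skin mask from a 64 × 32 grayscale face crop, with a hypothetical hook for reusing encoder weights pretrained on a colorization task, mirroring the transfer-learning idea mentioned in the abstract. All layer sizes, names and the `load_colorization_encoder` helper are assumptions for the sake of the example.

```python
# Minimal, hypothetical sketch (not the authors' exact network): an
# encoder-decoder that maps a 1-channel 64x32 grayscale crop to a
# per-pixel skin-probability map.
import torch
import torch.nn as nn


class SkinSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: two stride-2 convolutions downsample 64x32 -> 16x8.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: two transposed convolutions upsample back to 64x32
        # and predict one skin logit per pixel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

    def load_colorization_encoder(self, colorization_state):
        # Hypothetical transfer-learning step: initialize the encoder with
        # weights learned by a grayscale-to-color network, then fine-tune
        # the whole model on skin labels.
        self.encoder.load_state_dict(colorization_state, strict=False)


if __name__ == "__main__":
    model = SkinSegmenter()
    face = torch.rand(1, 1, 64, 32)          # one 64x32 grayscale crop
    skin_prob = torch.sigmoid(model(face))   # per-pixel skin probability in [0, 1]
    skin_mask = skin_prob > 0.5              # binary skin / non-skin label map
    print(skin_mask.shape)                   # torch.Size([1, 1, 64, 32])
```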
File | Description | Size | Format
---|---|---|---
11311-1130482_Paracchini.pdf | Post-Print (DRAFT or Author's Accepted Manuscript, AAM), open access | 1.64 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.