Blind detection and localization of video temporal splicing exploiting sensor-based footprints
Mandelli, Sara; Bestagini, Paolo; Tubaro, Stefano
2018-01-01
Abstract
In recent years, the ease of editing video sequences has led to the diffusion of user-generated video compilations obtained by splicing together different video shots in time. To perform forensic analysis on this kind of video, it can be useful to split the whole sequence into its originating shots. As the shots are seldom acquired with a single device, a possible way to identify each shot is to exploit sensor-based traces. State-of-the-art solutions for sensor attribution rely on Photo Response Non-Uniformity (PRNU). Although this approach has proven robust and efficient for images, exploiting PRNU in the video domain is still challenging. In this paper, we tackle the problem of blind video temporal splicing detection leveraging PRNU-based source attribution. Specifically, we consider videos composed of few-second shots coming from various sources that have been temporally combined. The focus is on blind detection and temporal localization of splicing points. The analysis is carried out on a recently released dataset of videos acquired with mobile devices. The method is validated on both non-stabilized and stabilized videos, thus showing the difficulty of working in the latter scenario.
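As a rough illustration of the kind of PRNU-based attribution the abstract refers to, the sketch below computes frame-wise noise residuals, correlates them against a known camera fingerprint, and flags frames whose smoothed correlation drops, which may indicate a temporal splicing point. The Wiener-filter denoiser, the window length, and the threshold are hypothetical stand-ins for illustration only, not the authors' actual pipeline or parameters.

```python
# Minimal, hypothetical sketch of generic PRNU-based frame attribution for
# temporal splicing localization. Not the authors' method or parameters.
import numpy as np
from scipy.signal import wiener


def noise_residual(frame: np.ndarray) -> np.ndarray:
    """Noise residual W = I - denoise(I); a Wiener filter stands in for the
    wavelet-based denoiser typically used in PRNU work."""
    frame = frame.astype(np.float64)
    return frame - wiener(frame, mysize=3)


def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two same-sized arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0


def splicing_scores(frames, fingerprint, window=30):
    """Per-frame correlation between the noise residual W and I*K (the classic
    PRNU detector), smoothed over a sliding window of `window` frames
    (a hypothetical parameter)."""
    scores = np.array([ncc(noise_residual(f), f * fingerprint) for f in frames])
    kernel = np.ones(window) / window
    return np.convolve(scores, kernel, mode="same")


if __name__ == "__main__":
    # Random stand-ins for grayscale frames and a camera fingerprint K.
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, (120, 160)).astype(np.float64) for _ in range(90)]
    fingerprint = rng.standard_normal((120, 160))
    smoothed = splicing_scores(frames, fingerprint)
    # Frames whose smoothed correlation falls below a (hypothetical) threshold
    # are candidates for a different source, i.e. a possible splicing point.
    candidates = np.where(smoothed < 0.01)[0]
    print("candidate splicing frames:", candidates[:10])
```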
File | Description | Access | Size | Format
---|---|---|---|---
camera_ready.pdf | Post-Print (DRAFT or Author's Accepted Manuscript, AAM) | Open access | 2.74 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.