
Video Codec Forensics Based on Convolutional Neural Networks

Verde, S.; Bondi, L.; Bestagini, P.; Tubaro, S.
2018-01-01

Abstract

The recent development of multimedia has made video editing accessible to everyone. Unfortunately, forensic analysis tools capable of detecting traces left by video processing operations in a blind fashion are still in their infancy. One of the reasons is that videos are customarily stored and distributed in a compressed format, and codec-related traces tend to mask previous processing operations. In this paper, we propose to capture video codec traces through convolutional neural networks (CNNs) and exploit them as an asset. Specifically, we train two CNNs to extract information about the used video codec and coding quality, respectively. Building upon these CNNs, we propose a system to detect and localize temporal splicing in video sequences generated from the concatenation of different video segments, which are characterized by inconsistent coding schemes and/or parameters (e.g., video compilations from different sources or broadcasting channels). The proposed solution is validated using videos at different resolutions (i.e., CIF, 4CIF, PAL and 720p) encoded with four common codecs (i.e., MPEG2, MPEG4, H264 and H265) at different qualities (i.e., different constant and variable bitrates, as well as constant quantization parameters).
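The splicing-localization idea summarized in the abstract — flagging temporal positions where the per-segment codec or quality predicted by the CNNs changes — can be sketched as follows. This is a minimal illustration only: the CNN classifiers are replaced by an already-computed list of per-segment labels, and the function name is hypothetical, not taken from the paper.

```python
def localize_splices(labels):
    """Return the indices where consecutive segment-level predictions
    disagree, i.e. candidate temporal splice points.

    `labels` is a sequence of predicted codec and/or quality labels,
    one per analyzed segment (e.g., one per group of pictures), as
    would be produced by the codec- and quality-identification CNNs.
    """
    return [i for i in range(1, len(labels))
            if labels[i] != labels[i - 1]]

# Three segments predicted as H264 followed by two predicted as MPEG2
# suggest a single splice at the boundary before segment index 3.
preds = ["H264", "H264", "H264", "MPEG2", "MPEG2"]
print(localize_splices(preds))  # -> [3]
```

In the paper's setting the labels come from trained classifiers and are therefore noisy, so a practical detector would also smooth or threshold the per-segment decisions before declaring a splice; the sketch above only shows the underlying inconsistency test.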
Year: 2018
Published in: 2018 25th IEEE International Conference on Image Processing (ICIP)
ISBN: 9781479970612
Keywords: Deep learning; Forgery detection; Temporal splicing; Video codec identification; Video forensics; Software; Signal Processing
Files in this record:
paper.pdf — open access; Post-Print (draft or Author's Accepted Manuscript, AAM); 860.08 kB; Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1086343
Citations
  • Scopus: 13
  • Web of Science: 6