Who is my parent? Reconstructing video sequences from partially matching shots

Silvia Lameri, Paolo Bestagini, Marco Tagliasacchi, Stefano Tubaro
2014-01-01

Abstract

Nowadays, a significant fraction of the available video content is created by reusing already existing online videos. In these cases, the source video is seldom reused as is. Rather, it is typically time-clipped to extract only a subset of the original frames, and other transformations are commonly applied (e.g., cropping, logo insertion, etc.). In this paper, we analyze a pool of videos related to the same event or topic. We propose a method that aims at automatically reconstructing the content of the original source videos, i.e., the parent sequences, by splicing together sets of near-duplicate shots seemingly extracted from the same parent sequence. The result of the analysis shows how content is reused, thus revealing the intent of content creators, and enables us to reconstruct a parent sequence even when it is no longer available online. To this end, we make use of a robust-hash algorithm that allows us to detect whether groups of frames are near-duplicates. Based on that, we developed an algorithm to automatically find near-duplicate matches between multiple parts of multiple sequences. All the near-duplicate parts are finally temporally aligned to reconstruct the parent sequence. The proposed method is validated with both synthetic and real-world datasets downloaded from YouTube.
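The matching step summarized in the abstract (hash each frame, then search for runs of consecutive near-duplicate hashes shared by two sequences, and use them to align a clipped child within its parent) can be illustrated with a toy sketch. Everything here is hypothetical: `frame_hash`, `match_runs`, and all thresholds are invented for illustration and are not the authors' robust-hash algorithm.

```python
# Hypothetical sketch of near-duplicate shot matching via per-frame hashes.
# The paper's robust hash tolerates transformations (cropping, logos, etc.);
# this toy binary hash only loosely imitates that idea.
import random

def frame_hash(frame):
    """Toy per-frame hash: one bit per pixel, thresholded at the frame mean."""
    mean = sum(frame) / len(frame)
    return tuple(1 if v >= mean else 0 for v in frame)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def match_runs(hashes_a, hashes_b, max_dist=2, min_len=3):
    """Return (start_a, start_b, length) for every run of consecutive
    frames whose hashes stay within max_dist (near-duplicate segments).
    Overlapping sub-runs are reported too; a real system would keep
    only maximal runs."""
    runs = []
    for i in range(len(hashes_a)):
        for j in range(len(hashes_b)):
            k = 0
            while (i + k < len(hashes_a) and j + k < len(hashes_b)
                   and hamming(hashes_a[i + k], hashes_b[j + k]) <= max_dist):
                k += 1
            if k >= min_len:
                runs.append((i, j, k))
    return runs

# Synthetic demo: a 20-frame "parent" and a child clipped from frames 5-11.
random.seed(0)
parent = [[random.randrange(256) for _ in range(16)] for _ in range(20)]
child = [list(f) for f in parent[5:12]]

runs = match_runs([frame_hash(f) for f in parent],
                  [frame_hash(f) for f in child])
# The maximal run starts at parent frame 5, child frame 0, with length 7,
# i.e. the child aligns at temporal offset 5 within the parent.
best = max(runs, key=lambda r: r[2])
print(best)
```

The start indices of a maximal run give the temporal offset at which a clipped segment aligns with the parent timeline; repeating this across all pairs in the pool is, in spirit, what lets the segments be spliced back into a reconstructed parent sequence.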
Year: 2014
Published in: Proceedings IEEE International Conference on Image Processing 2014 (ICIP 2014)
ISBN: 978-1-4799-5751-4
Keywords: video forensics, video phylogeny, video alignment, near-duplicate detection
Files in this record:
File: 2014_ICIP_parent_reconstruction.pdf
Access: restricted
Type: Post-Print (DRAFT or Author's Accepted Manuscript, AAM)
Size: 812.09 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/961651
Citations
  • PMC: ND
  • Scopus: 22
  • Web of Science (ISI): 10