A Deep Learning Classification Technique for Large Scale Point Cloud Generated from 360° Video

Yuwei CAO;Mattia PREVITALI;Luigi BARAZZETTI;Marco SCAIONI
2023-01-01

Abstract

The 360° video camera's ability to rapidly capture an entire scene, regardless of orientation, eases the arduous task of digital surveying at city scale for recording purposes. Classification of the collected data, in turn, is the foundation for all downstream applications. With the increasing capabilities of artificial intelligence (AI), numerous deep learning (DL) techniques for the semantic segmentation of point clouds have been developed. However, these methods require the annotation of enormous quantities of data, which precludes their use for classifying city-scale point clouds. This research aims to classify point clouds generated from 360° videos with convolutional neural networks (CNNs), without requiring manually labeled data. We begin by capturing 360° videos with low-cost 360° cameras, then generate dense point clouds using a photogrammetric/structure-from-motion (SfM) pipeline. A classification network was then trained using ConvPoint as the neural network model and the Paris-Lille-3D dataset as training data, before being tested on our 360° video-generated point cloud. In a preliminary case study conducted in a city center, our method demonstrated great potential for the rapid generation and classification of point clouds, with an overall accuracy (OA) of over 90 percent.
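The overall accuracy (OA) metric reported above is simply the fraction of points whose predicted class matches the reference label. A minimal sketch in Python (the per-point labels and class codes below are illustrative, not taken from the paper's data):

```python
def overall_accuracy(predicted, reference):
    """Fraction of points whose predicted class matches the reference label."""
    if len(predicted) != len(reference):
        raise ValueError("label lists must have equal length")
    correct = sum(p == r for p, r in zip(predicted, reference))
    return correct / len(reference)

# Illustrative per-point class labels (e.g. 0 = ground, 1 = building, 2 = vegetation)
ref = [0, 0, 1, 1, 1, 2, 2, 0, 1, 2]
pred = [0, 0, 1, 1, 2, 2, 2, 0, 1, 1]
print(overall_accuracy(pred, ref))  # 8 of 10 points correct -> 0.8
```

Note that OA can be dominated by frequent classes (e.g. ground points in a street scene), which is why per-class metrics such as intersection-over-union are often reported alongside it.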
22. Int. Geodätische Woche Obergurgl 2023
978-3-87907-738-0

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1246497