
MEP CROWD: Improving Mobility of Users with Data and Images of High Quality

S. Comai; E. De Bernardi; A. Masciadri; F. Salice
2019-01-01

Abstract

Background: Mobile and web technologies allow people to actively participate in enriching maps with accessibility information consisting of reports and/or images of barriers or points of interest (e.g., Wheelmap, MEP App, mPASS): maps can display geolocalized pictures of barriers or report information about the accessibility of buildings and streets, typically as non-accessible (often represented with a red icon/segment), accessible with some difficulty (yellow), or accessible (green). Data collected with the help of users can improve their mobility, but one of the challenges lies in validating the crowd-sensed data, so that only correct and accurate information is published.

Method: The proposed solution is based on a crowdsourcing engine and a mobile application, called MEP Crowd, that allows users to view pictures of barriers reported by other users and to answer questions created by the engine about the declared type of obstacle (e.g., the user is reporting a narrow path), its criticality level (e.g., the user declares that it is not accessible), and the quality of the pictures; the same questions are distributed to several different users. Users are invited to answer simple questions (e.g., “Does the picture show stairs?”) with “yes”, “no”, or “I don’t know”. This makes it possible to evaluate the reliability of the person who uploaded the report and, by comparing each individual’s answers with the answers provided by other users for the same task, also the reliability of the evaluators themselves. Users of the MEP Crowd application are engaged through gamification techniques (e.g., scoring systems, achievements, badges); a notification system gives each user feedback about their progress according to an established schedule. To handle possibly explicit content, an image recognition filter discards pictures deemed harmful a priori; to comply with the GDPR, the MEP Crowd app is rated PEGI 16.

Key results: The approach has been applied to over 3500 reports, consisting of pictures and forms filled with data about obstacles, collected in a survey carried out with middle school students accompanied by target users; the reports have been evaluated by people of different ages and sexes. About 25% of the images and reports were considered unclear: when a barrier had only a single evaluation, the unclear report was discarded; when more than one report of the same obstacle type exists in the same area, all reports can be merged into a single report characterized by an overall evaluation, which improves the average quality. Moreover, images can be ranked so that only the highest-quality images are shown to the final users.

Conclusion: MEP Crowd is a system that exploits crowdsourcing quality-control techniques in an application entirely based on people’s reports: it identifies and keeps only valid answers, ranks images, and evaluates the reliability of both the users providing the reports and the MEP Crowd users themselves.
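The abstract does not spell out the aggregation algorithm used by the crowdsourcing engine. The following is a minimal, hypothetical Python sketch of one way the described quality control could work, assuming a simple majority vote over informative answers and an agreement-based update of evaluator reliability; the function names, the 0.5 prior, and the 0.1 update weight are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch (not the authors' published algorithm): aggregate the
# "yes"/"no"/"I don't know" answers collected for one question, then update
# evaluator reliability according to agreement with the consensus.
from collections import Counter

def aggregate_answers(answers):
    """answers: dict evaluator_id -> "yes" | "no" | "dont_know".
    Returns the consensus label, or None if no strict majority of informative answers exists."""
    votes = Counter(a for a in answers.values() if a != "dont_know")
    if not votes:
        return None  # every evaluator was unsure: the report stays unvalidated
    label, count = votes.most_common(1)[0]
    return label if count > sum(votes.values()) / 2 else None

def update_reliability(reliability, answers, consensus, weight=0.1):
    """Move each evaluator's reliability toward 1 when they agree with the consensus
    and toward 0 when they disagree; "I don't know" leaves it unchanged."""
    if consensus is None:
        return reliability
    updated = dict(reliability)
    for evaluator, answer in answers.items():
        if answer == "dont_know":
            continue
        agreement = 1.0 if answer == consensus else 0.0
        prev = updated.get(evaluator, 0.5)  # assumed neutral prior for new evaluators
        updated[evaluator] = (1 - weight) * prev + weight * agreement
    return updated

# Example: three evaluators answer "Does the picture show stairs?"
answers = {"u1": "yes", "u2": "yes", "u3": "dont_know"}
consensus = aggregate_answers(answers)  # -> "yes"
reliability = update_reliability({"u1": 0.5, "u2": 0.8, "u3": 0.6}, answers, consensus)
```

In this sketch the same consensus label could also be compared with the original reporter's declaration to score the reporter's reliability, mirroring the dual evaluation (reporter and evaluators) described above.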
City Accessibility, Crowdsourcing, Mobile solutions
Files in this item:

CameraReady AAATE19_MEP.pdf
Access: open access
Description: Main article
Type: Post-Print (DRAFT or Author’s Accepted Manuscript, AAM)
Size: 79.77 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1128532