
Transformer-based odor recognition on E-nose platforms

Stefanone, Alessandro; Rossoni, Marco; Colombo, Giorgio
2025-01-01

Abstract

Transformers have reshaped sequence modelling, yet their value for embedded electronic-nose (e-nose) systems remains largely anecdotal. We present a systematic evaluation of ConvTran, a lightweight self-attention network originally proposed for generic time-series analysis, in the specific context of portable gas sensing. Two open datasets acquired with an eight-sensor Bosch BME688 board underpin the study: CoffeePow-4 (3583 traces across three coffee powders plus ambient air) and the imbalanced Aroma-7 extension, which adds three fragranced creams for a total of 4750 traces. ConvTran is benchmarked against five deep-learning baselines (FCN, ResNet, InceptionTime, LSTM-FCN, ALSTM-FCN) and the proprietary Bosch AI-Studio classifier using accuracy, macro-F1 and macro false-positive rate, complemented by on-device memory and latency profiling. On CoffeePow-4, ConvTran attains 96.42% macro-F1 with 1.71 ms inference time while occupying 115 kB of flash. On Aroma-7, it preserves 96.03% macro-F1 at 2 ms and 557 kB, remaining faster than ResNet or InceptionTime, whose accuracy drops by 1.3%. The fully convolutional network's accuracy exceeds ConvTran's by 0.6% on the four-class task but drops by 17% when the number of classes increases, highlighting the advantage of global self-attention for chemically similar odors. These findings demonstrate that an off-the-shelf transformer can satisfy the requirements of embedded devices, delivering state-of-the-art accuracy across balanced and imbalanced odor sets.
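The macro-averaged metrics named in the abstract (macro-F1 and macro false-positive rate) can be sketched as below. This is an illustrative computation on invented toy labels, not the paper's evaluation code; the class names are hypothetical stand-ins for the four CoffeePow-4 classes.

```python
def macro_f1_and_fpr(y_true, y_pred, classes):
    """Average per-class F1 and false-positive rate with equal weight
    per class, so minority classes count as much as majority ones."""
    f1s, fprs = [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        tn = len(y_true) - tp - fp - fn
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
        fprs.append(fp / (fp + tn) if fp + tn else 0.0)
    return sum(f1s) / len(f1s), sum(fprs) / len(fprs)

# Toy four-class example (e.g. three coffee powders plus ambient air)
y_true = ["a", "a", "b", "b", "c", "c", "air", "air"]
y_pred = ["a", "b", "b", "b", "c", "c", "air", "a"]
f1, fpr = macro_f1_and_fpr(y_true, y_pred, ["a", "b", "c", "air"])
```

Macro averaging is why these scores remain meaningful on the imbalanced Aroma-7 set: each odor class contributes equally regardless of how many traces it has.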
Artificial intelligence; E-nose; Odor classification; Transformer;
Files in this item:
File: 1-s2.0-S2590123025023813-main (1).pdf
Access: open access
Description: Publisher's version
Size: 5.93 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11311/1294406
Citations
  • PMC: N/A
  • Scopus: 1
  • Web of Science: 1