
CAA-Net: Conditional Atrous CNNs With Attention for Explainable Device-Robust Acoustic Scene Classification

journalArticle

DOI:10.1109/TMM.2020.3037534
Authors: Zhao Ren / Qiuqiang Kong / Jing Han / Mark D. Plumbley / Björn W. Schuller

Extracted Abstract:

Acoustic Scene Classification (ASC) aims to classify the environment in which the audio signals are recorded. Recently, Convolutional Neural Networks (CNNs) have been successfully applied to ASC. However, the data distributions of the audio signals recorded with multiple devices are different. There has been little research on the training of robust neural networks on acoustic scene datasets recorded with multiple devices, and on explaining the operation of the internal layers of the neural networks. In this article, we focus on training and explaining device-robust CNNs on multi-device acoustic scene data. We propose conditional atrous CNNs with attention for multi-device ASC. Our proposed system contains an ASC branch and a device classification branch, both modelled by CNNs. We visualise and analyse the intermediate layers of the atrous CNNs. A time-frequency attention mechanism is employed to analyse the contribution of each time-frequency bin of the feature maps in the CNNs. On the Detection and Classification of Acoustic Scenes and Events (DCASE) 2018 ASC dataset, recorded with three devices, our proposed model performs significantly better than CNNs trained on single-device data.

Index Terms: Acoustic Scene Classification, Multi-device Data, Conditional Atrous Convolutional Neural Networks, Attention, Visualisation.

Level 1: Include/Exclude

  • Papers must discuss situated information visualization* (as defined by Willett et al.) in the application domain of CH.
    *A situated data representation is a data representation whose physical presentation is located close to the data’s physical referent(s).
    *A situated visualization is a situated data representation for which the presentation is purely visual – and is typically displayed on a screen.
  • Representation must include abstract data (e.g., metadata).
  • Papers focused solely on digital reconstruction without information visualization aspects are excluded.
  • Posters and workshop papers are excluded to focus on mature research contributions.