
A novel and simple approach to regularise attention frameworks and its efficacy in segmentation

Conference paper

DOI:10.1109/EMBC40787.2023.10340201
Authors: Rajamani Srividya Tirunellai / Rajamani Kumar / Schuller Björn W.

Extracted Abstract:

Deep neural networks with attention mechanisms have shown promising results in many computer vision and medical image processing applications. Attention mechanisms help to capture long-range interactions. Recently, more sophisticated attention mechanisms like criss-cross attention have been proposed for efficient computation of attention blocks. In this paper, we introduce a simple and low-overhead approach of adding noise to the attention block, which we find to be very effective when using an attention mechanism. Our proposed methodology of introducing regularisation in the attention block by adding noise makes the network more robust and resilient, especially in scenarios where there is limited training data. We incorporate this regularisation mechanism in the criss-cross attention block. This criss-cross attention block enhanced with regularisation is integrated in the bottleneck layer of a U-Net for the task of medical image segmentation. We evaluate our proposed framework on a challenging subset of the NIH dataset for segmenting lung lobes. Our proposed methodology improves Dice scores by 2.5% in this context of medical image segmentation.
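The paper itself does not provide code; the following is a minimal NumPy sketch of the general idea the abstract describes, i.e. regularising an attention block by injecting additive Gaussian noise during training. All names (`noisy_attention`, `noise_std`) are assumptions for illustration, and plain scaled dot-product attention stands in for the criss-cross attention block used in the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def noisy_attention(q, k, v, noise_std=0.0, rng=None):
    """Scaled dot-product attention with optional additive Gaussian
    noise on the attention logits (a regularisation sketch; the paper
    applies a noise-based regulariser inside a criss-cross attention
    block, not this exact formulation)."""
    rng = rng or np.random.default_rng(0)
    d = q.shape[-1]
    logits = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    if noise_std > 0:  # noise only during training; set 0 at inference
        logits = logits + rng.normal(0.0, noise_std, logits.shape)
    weights = softmax(logits, axis=-1)
    return weights @ v
```

In a U-Net, such a block would sit in the bottleneck, with `noise_std > 0` only while training so that inference stays deterministic.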

Level 1: Include/Exclude

  • Papers must discuss situated information visualization* (by Willett et al.) in the application domain of CH.
    *A situated data representation is a data representation whose physical presentation is located close to the data’s physical referent(s).
    *A situated visualization is a situated data representation for which the presentation is purely visual – and is typically displayed on a screen.
  • Representation must include abstract data (e.g., metadata).
  • Papers focused solely on digital reconstruction without information visualization aspects are excluded.
  • Posters and workshop papers are excluded to focus on mature research contributions.