Content-aware Token Sharing for Efficient Semantic Segmentation with Vision Transformers
Chenyang Lu*
Daan de Geus*
Gijs Dubbelman
*Both authors contributed equally.
[Paper]
[GitHub]

Abstract

This paper introduces Content-aware Token Sharing (CTS), a token reduction approach that improves the computational efficiency of semantic segmentation networks that use Vision Transformers (ViTs). Existing works have proposed token reduction approaches to improve the efficiency of ViT-based image classification networks, but these methods are not directly applicable to semantic segmentation, which we address in this work. We observe that, for semantic segmentation, multiple image patches can share a token if they contain the same semantic class, as they contain redundant information. Our approach leverages this by employing an efficient, class-agnostic policy network that predicts if image patches contain the same semantic class, and lets them share a token if they do. With experiments, we explore the critical design choices of CTS and show its effectiveness on the ADE20K, Pascal Context and Cityscapes datasets, various ViT backbones, and different segmentation decoders. With Content-aware Token Sharing, we are able to reduce the number of processed tokens by up to 44%, without diminishing the segmentation quality.
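The sharing step described in the abstract can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the function name `share_tokens`, the grouping of patches into fixed-size superpatch groups, and averaging as the sharing operation are all hypothetical; the policy network that produces the sharing decisions is not shown.

```python
import numpy as np

def share_tokens(patch_tokens, share_mask):
    """Reduce the token count by letting same-class patches share a token.

    patch_tokens: (num_groups, group_size, dim) patch embeddings, grouped
                  into neighboring superpatch groups (hypothetical layout).
    share_mask:   (num_groups,) bool; True where a policy predicts that all
                  patches in the group contain the same semantic class.
    Returns an array of tokens: shared groups contribute one (averaged)
    token, all other groups keep their individual tokens.
    """
    out = []
    for group, share in zip(patch_tokens, share_mask):
        if share:
            out.append(group.mean(axis=0))  # one shared token for the group
        else:
            out.extend(group)               # keep all individual tokens
    return np.stack(out)

# Two groups of 4 patches each (8 tokens total, dim 8); sharing the first
# group reduces the sequence to 1 + 4 = 5 tokens fed to the ViT.
tokens = share_tokens(np.ones((2, 4, 8)), np.array([True, False]))
print(tokens.shape)
```

Because fewer tokens enter the transformer, the quadratic cost of self-attention drops accordingly, which is where the up-to-44% reduction in processed tokens translates into efficiency gains.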


Code

We publicly release the code of CTS applied to Segmenter in this GitHub repository.

[GitHub]


Paper and Supplementary Material

C. Lu*, D. de Geus*, G. Dubbelman.
Content-aware Token Sharing for Efficient Semantic Segmentation with Vision Transformers.
In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.
(PDF)


[Bibtex]


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.