Label-guided Attention Distillation for Lane Segmentation

04/04/2023
by Zhikang Liu, et al.

Contemporary segmentation methods are usually based on deep fully convolutional networks (FCNs). However, layer-by-layer convolutions with a growing receptive field are not well suited to capturing long-range context such as lane markers in a scene. In this paper, we address this issue by designing a distillation method that exploits label structure when training the segmentation network. The intuition is that the ground-truth lane annotations themselves exhibit internal structure. We broadcast these structure hints throughout a teacher network, i.e., we train a teacher network that consumes a lane label map as input and attempts to reproduce it as output. The attention maps of the teacher network are then adopted as supervision for the student segmentation network. The teacher network, with label structure information embedded, knows distinctly where the convolution layers should pay visual attention. We name the proposed method Label-guided Attention Distillation (LGAD). It turns out that the student network learns significantly better with LGAD than when learning alone. Since the teacher network is discarded after training, our method does not increase inference time. Note that LGAD can be easily incorporated into any lane segmentation network.
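To make the training recipe concrete, below is a minimal PyTorch-style sketch of one LGAD training step. It is only an illustration under stated assumptions: the activation-based attention map (channel-wise sum of squared activations, in the style of attention transfer), the convention that both networks return (output, intermediate features), the one-hot label encoding fed to the teacher, and the loss weight `beta` are all assumptions, not the paper's exact design.

```python
# Hedged sketch of one LGAD training step (assumptions noted above).
import torch
import torch.nn.functional as F

def attention_map(feat):
    """Spatial attention from a feature map (B, C, H, W):
    average of squared activations over channels, L2-normalized."""
    att = feat.pow(2).mean(dim=1)       # (B, H, W)
    att = att.flatten(1)                # (B, H*W)
    return F.normalize(att, p=2, dim=1)

def lgad_loss(student_feats, teacher_feats):
    """Match the student's attention maps to the teacher's at paired layers."""
    loss = 0.0
    for fs, ft in zip(student_feats, teacher_feats):
        # Resize if the paired layers differ in spatial resolution.
        if fs.shape[-2:] != ft.shape[-2:]:
            fs = F.interpolate(fs, size=ft.shape[-2:],
                               mode="bilinear", align_corners=False)
        loss = loss + F.mse_loss(attention_map(fs), attention_map(ft.detach()))
    return loss

def train_step(student, teacher, image, label_onehot, optimizer, beta=0.1):
    """`student`, `teacher`, and `beta` are hypothetical placeholders.
    The teacher is a label autoencoder: one-hot label map in, label map out."""
    with torch.no_grad():
        _, teacher_feats = teacher(label_onehot)   # teacher sees only labels
    logits, student_feats = student(image)         # student sees the image
    seg_loss = F.cross_entropy(logits, label_onehot.argmax(dim=1))
    loss = seg_loss + beta * lgad_loss(student_feats, teacher_feats)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note the design implied by the abstract: the teacher never sees the input image, only the label map, so its attention encodes pure label structure; at inference time the teacher (and the attention loss) are dropped, leaving the student's cost unchanged.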


Related research

11/07/2017 · Moonshine: Distilling with Cheap Convolutions
Model distillation compresses a trained machine learning model, such as ...

10/11/2022 · Repainting and Imitating Learning for Lane Detection
Current lane detection methods are struggling with the invisibility lane...

12/31/2021 · Conditional Generative Data-Free Knowledge Distillation based on Attention Transfer
Knowledge distillation has made remarkable achievements in model compres...

09/23/2021 · LGD: Label-guided Self-distillation for Object Detection
In this paper, we propose the first self-distillation framework for gene...

02/22/2023 · Debiased Distillation by Transplanting the Last Layer
Deep models are susceptible to learning spurious correlations, even duri...

08/02/2019 · Learning Lightweight Lane Detection CNNs by Self Attention Distillation
Training deep models for lane detection is challenging due to the very s...

12/06/2019 · LaTeS: Latent Space Distillation for Teacher-Student Driving Policy Learning
We describe a policy learning approach to map visual inputs to driving c...
