Two Independent Teachers are Better Role Model

06/09/2023
by   Afifa Khaled, et al.

Recent deep learning models have attracted substantial attention in infant brain analysis and have achieved state-of-the-art results, notably semi-supervised techniques such as Temporal Ensembling and the mean teacher. However, these models rely on an encoder-decoder structure with stacked local operators to gather long-range information, and these local operators limit both efficiency and effectiveness. In addition, MRI data contain different tissue properties (TPs), such as T1 and T2. A major limitation of existing models is that they feed both types of data into the segmentation process as a single input, i.e., the model is trained on the combined dataset at once, which imposes heavy computational and memory requirements during inference. In this work, we address these limitations with a new deep learning model, 3D-DenseUNet, whose down-sampling stage acts as an adaptable global aggregation block to counter the loss of spatial information. A self-attention module connects the down-sampling blocks to the up-sampling blocks and integrates feature maps across the spatial and channel dimensions, effectively improving the representational power and discriminative ability of the model. Additionally, we propose Two Independent Teachers (2IT), a method that averages model weights instead of label predictions. Each teacher model is trained on a different type of brain data, T1 or T2. A fuse model is then added to improve test accuracy and enable training with fewer parameters and labels than the Temporal Ensembling method, without modifying the network architecture. Empirical results demonstrate the effectiveness of the proposed method.
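The core idea of 2IT, averaging model weights rather than label predictions and then fusing the two modality-specific teachers, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mean-teacher-style EMA update, the `alpha` smoothing factor, and the simple averaging `fuse` step are assumptions for the sake of the example.

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean-teacher-style exponential moving average of weights:
    the teacher tracks the student's weights rather than its
    label predictions. alpha is a hypothetical smoothing factor."""
    return {k: alpha * teacher_w[k] + (1 - alpha) * student_w[k]
            for k in teacher_w}

def fuse(teacher_t1_w, teacher_t2_w):
    """Hypothetical fuse step: average the weights of the two
    teachers trained on T1 and T2 data, respectively."""
    return {k: 0.5 * (teacher_t1_w[k] + teacher_t2_w[k])
            for k in teacher_t1_w}

# Toy weights for two teachers, one per tissue property.
w_t1 = {"conv": np.array([1.0, 2.0])}
w_t2 = {"conv": np.array([3.0, 4.0])}
fused = fuse(w_t1, w_t2)
print(fused["conv"])  # [2. 3.]
```

Because the fusion happens in weight space, the fused model has the same architecture and parameter count as a single teacher, which is consistent with the claim that the network architecture is unmodified.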


Related research

- 03/06/2017: Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. "The recently proposed Temporal Ensembling has achieved state-of-the-art ..."
- 12/29/2019: Infant brain MRI segmentation with dilated convolution pyramid downsampling and self-attention. "In this paper, we propose a dual aggregation network to adaptively aggre..."
- 05/28/2023: LowDINO – A Low Parameter Self Supervised Learning Model. "This research aims to explore the possibility of designing a neural netw..."
- 01/12/2020: Concurrently Extrapolating and Interpolating Networks for Continuous Model Generation. "Most deep image smoothing operators are always trained repetitively when..."
- 09/02/2023: Deep-Learning Framework for Optimal Selection of Soil Sampling Sites. "This work leverages the recent advancements of deep learning in image pr..."
- 09/11/2019: Local block-wise self attention for normal organ segmentation. "We developed a new and computationally simple local block-wise self atte..."
