Fabric Image Representation Encoding Networks for Large-scale 3D Medical Image Analysis

06/28/2020
by   Siyu Liu, et al.

Deep neural networks are parameterised by weights that encode feature representations, and their performance depends on generalisation achieved by training on large-scale, feature-rich datasets. The lack of large-scale labelled 3D medical imaging datasets restricts the construction of such generalised networks. In this work, a novel 3D segmentation network, Fabric Image Representation Encoding Networks (FIRENet), is proposed to extract and encode generalisable feature representations from multiple medical image datasets in a large-scale manner. FIRENet learns image-specific feature representations by way of a 3D fabric network architecture that contains an exponential number of sub-architectures, enabling it to handle various protocols and coverages of anatomical regions and structures. The fabric network uses Atrous Spatial Pyramid Pooling (ASPP), extended to 3D, to extract local and image-level features at a fine selection of scales. The fabric is constructed with weighted edges that allow the learnt features to adapt dynamically to the training data at an architecture level. Conditional padding modules, integrated into the network to reinsert voxels discarded by feature pooling, allow the network to inherently process different-size images at their original resolutions. FIRENet was trained for feature learning via automated semantic segmentation of pelvic structures and obtained a state-of-the-art median DSC score of 0.867. FIRENet was also simultaneously trained on MR (Magnetic Resonance) images acquired from 3D examinations of musculoskeletal elements in the hip, knee and shoulder joints, together with a public OAI knee dataset, to perform automated segmentation of bone across anatomy. Transfer learning was used to show that the features learnt through the pelvic segmentation helped achieve improved mean DSC scores of 0.962, 0.963, 0.945 and 0.986 for automated bone segmentation across the datasets.
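The conditional padding idea above can be illustrated with a minimal sketch: when a volume's spatial dimensions are not multiples of the network's overall downsampling factor, pooling would silently discard boundary voxels, so padding is computed per dimension to round each size up to the next multiple. The function name and the factor of 8 below are illustrative assumptions, not details taken from the paper.

```python
def conditional_padding(shape, factor=8):
    """Compute per-dimension padding so each spatial size becomes a
    multiple of the network's overall downsampling factor.

    shape  -- tuple of spatial dimensions of the input volume, e.g. (D, H, W)
    factor -- hypothetical total downsampling factor of the encoder
    """
    pads = []
    for size in shape:
        remainder = size % factor
        # Pad only when the size is not already a multiple of the factor.
        pads.append(0 if remainder == 0 else factor - remainder)
    return tuple(pads)


# Example: a typical non-uniform MR volume shape.
print(conditional_padding((173, 193, 229)))  # -> (3, 7, 3)
```

In practice such padding would be applied before the encoder and cropped off again after decoding, so the output segmentation matches the original resolution.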


