Longer Version of "Deep Context-Encoding Network for Retinal Image Captioning"

by Jia-Hong Huang et al.

Automatically generating medical reports for retinal images is a promising way to help ophthalmologists reduce their workload and improve their efficiency. In this work, we propose a new context-driven encoding network that automatically generates medical reports for retinal images. The proposed model consists mainly of a multi-modal input encoder and a fused-feature decoder. Our experimental results show that the proposed method effectively leverages the interactive information between the input image and its context, i.e., keywords in our case. It produces more accurate and meaningful reports for retinal images than baseline models and achieves state-of-the-art performance on several commonly used metrics for the medical report generation task, e.g., BLEU-avg (+16%).
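The multi-modal pipeline described above — an image encoder, a keyword encoder, a fusion step, and a report decoder — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all dimensions, the random-projection encoders, the fusion-by-concatenation choice, and every function name are assumptions made for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_image(image, dim=64):
    # Stand-in for a CNN image encoder: a fixed random projection of the
    # flattened image to a `dim`-dimensional feature vector (assumption).
    flat = image.reshape(-1)
    W = rng.standard_normal((dim, flat.size)) * 0.01
    return W @ flat

def encode_keywords(keyword_ids, vocab_size=1000, dim=64):
    # Stand-in for the keyword (context) encoder: average the embedding
    # vectors of the diagnostic keywords (assumption).
    E = rng.standard_normal((vocab_size, dim)) * 0.01
    return E[keyword_ids].mean(axis=0)

def fuse(img_feat, kw_feat):
    # Fused-feature step: concatenate the two modalities and project back
    # down, so the decoder sees image-keyword interactions.
    z = np.concatenate([img_feat, kw_feat])
    W = rng.standard_normal((64, z.size)) * 0.01
    return np.tanh(W @ z)

def decode(fused, max_len=5, vocab_size=1000):
    # Toy greedy decoder: emit the highest-scoring token at each step.
    W = rng.standard_normal((vocab_size, fused.size)) * 0.01
    tokens, state = [], fused
    for _ in range(max_len):
        tokens.append(int(np.argmax(W @ state)))
        state = np.tanh(state + 0.1)  # toy recurrent state update
    return tokens

image = rng.standard_normal((32, 32, 3))       # dummy retinal image
report = decode(fuse(encode_image(image), encode_keywords([3, 17, 42])))
print(len(report))  # number of generated tokens
```

In the actual model the encoders would be learned networks and the decoder a trained language model; the point of the sketch is only the data flow, in which the keyword context is fused with the image features before any report text is generated.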


Contextualized Keyword Representations for Multi-modal Retinal Image Captioning


DeepOpht: Medical Report Generation for Retinal Images via Deep Models and Visual Explanation


Knowledge Matters: Radiology Report Generation with General and Specific Knowledge


Reinforced Medical Report Generation with X-Linear Attention and Repetition Penalty


Automatic Radiology Report Generation based on Multi-view Image Fusion and Medical Concept Enrichment


Confidence-Guided Radiology Report Generation


Identification via Retinal Vessels Combining LBP and HOG
