## 1. Introduction

Nowadays, the diagnosis, therapy and monitoring of human diseases involve a variety of imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), ultrasound (US) and positron-emission tomography (PET), as well as a variety of modern optical techniques. Over the past two decades, it has been recognized that advanced image processing techniques provide valuable information to physicians for diagnosis, image-guided therapy and surgery, and monitoring of the treated organ's response to therapy. Many researchers and companies have invested significant effort in the development of advanced medical image analysis methods, especially in the two core areas of medical image segmentation and registration: segmentation of organs and lesions is used to quantify the volumes and shapes employed in diagnosis and in monitoring treatment; registration of multi-modality images of organs improves the detection, diagnosis and staging of diseases, as well as image-guided surgery and therapy, while registration of images obtained from the same modality is used to monitor the progression of therapy. In this work, we focus on these two most challenging problems of medical image analysis and present recent progress in developing efficient and robust computational tools based on modern convex optimization.

Thanks to a series of pioneering works [10, 9, 36] over the past ten years, convex optimization has been developed into a powerful tool to analyze and efficiently solve most variational problems in image processing, computer vision and machine learning. For example, the total-variation-based image denoising model

[27, 11]

$$\min_u \ \int_\Omega |\nabla u|\,dx + \lambda \int_\Omega \varphi(u-f)\,dx,$$

where $\varphi(\cdot)$ is a convex penalty function, e.g. the $\ell_1$ or $\ell_2$ norm; the $\ell_1$-normed sparse image reconstruction [4]

$$\min_u \ \frac{1}{2}\|Au-b\|_2^2 + \lambda\|u\|_1,$$

where $A$ is some linear operator; and many other problems which are initially nonconvex but can finally be solved by convex optimization, such as the spatially continuous min-cut model for image segmentation [10, 36]

$$\min_{u(x)\in\{0,1\}} \ \int_\Omega (1-u)\,C_s\,dx + \int_\Omega u\,C_t\,dx + \int_\Omega C(x)\,|\nabla u|\,dx \qquad (1)$$

for which the binary constraint $u(x)\in\{0,1\}$ can be relaxed to $u(x)\in[0,1]$, hence resulting in a convex optimization problem [10].
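As a concrete numerical illustration (our own sketch, not taken from the cited works; all function names and parameter values are assumptions), the relaxed min-cut model can be solved with an off-the-shelf first-order primal-dual scheme, here on a synthetic disk image:

```python
import numpy as np

def _grad(u):
    # forward differences with Neumann boundary (zero at the far edge)
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def _div(px, py):
    # divergence, the negative adjoint of _grad
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def relaxed_min_cut(Cs, Ct, lam=1.0, n_iter=300, tau=0.3, sigma=0.3):
    """Primal-dual iterations for min_{u in [0,1]} <u, Ct - Cs> + lam * TV(u)."""
    w = Ct - Cs
    u = np.zeros_like(w); ub = u.copy()
    px = np.zeros_like(w); py = np.zeros_like(w)
    for _ in range(n_iter):
        # dual ascent, then projection onto the constraint |p(x)| <= lam
        gx, gy = _grad(ub)
        px += sigma * gx; py += sigma * gy
        scale = np.maximum(1.0, np.sqrt(px**2 + py**2) / lam)
        px /= scale; py /= scale
        # primal descent, then projection onto the relaxed constraint [0, 1]
        u_old = u.copy()
        u = np.clip(u + tau * (_div(px, py) - w), 0.0, 1.0)
        ub = 2.0 * u - u_old   # over-relaxation step
    return u

# synthetic example: a disk of radius 8 on a 32 x 32 grid
n = 32
yy, xx = np.mgrid[:n, :n]
disk = (xx - n // 2) ** 2 + (yy - n // 2) ** 2 <= 8 ** 2
Cs = np.where(disk, 2.0, 0.0)   # cost of assigning a pixel to the background
Ct = np.where(disk, 0.0, 2.0)   # cost of assigning a pixel to the foreground
u = relaxed_min_cut(Cs, Ct)
print(u[n // 2, n // 2] > 0.5, u[0, 0] > 0.5)  # True False
```

Thresholding the relaxed solution $u$ at $0.5$ recovers the binary segmentation; the step sizes satisfy the usual primal-dual stability condition $\sigma\tau\|\nabla\|^2 \le 1$.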

In this paper, we consider the optimization problems of medical image segmentation and registration as the minimization of a finite sum of convex function terms:

$$\min_u \ \sum_{i=1}^{n} f_i(u), \qquad (2)$$

which actually includes the convex constrained optimization problem as one special case, in that a convex constraint set $C$ on the unknown function $u$ can be reformulated by adding its convex characteristic function $\chi_C(u)$, which equals $0$ for $u \in C$ and $+\infty$ otherwise, into the energy function of (2).
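As a simple illustration of this finite-sum form, TV denoising with a box constraint, i.e. minimizing $\int_\Omega |\nabla u|\,dx + \frac{\lambda}{2}\|u-f\|^2$ over $u \in [0,1]$, fits (2) with three convex terms:

$$f_1(u) = \int_\Omega |\nabla u|\,dx, \qquad f_2(u) = \frac{\lambda}{2}\,\|u-f\|^2, \qquad f_3(u) = \chi_{[0,1]}(u).$$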

Given the very high dimension of the solution $u$, which is the usual case in medical image analysis where the input image volume often contains millions of voxels, first-order gradient-descent schemes play the central role in building a practical algorithmic implementation, as they typically have an affordable computational cost per iteration along with proven iteration complexity.
From this perspective, the duality of each convex function term provides one of the most powerful tools for both analyzing and developing such first-order iterative algorithms, where the new dual variable $p_i$ introduced for each function term $f_i$ implicitly represents its first-order (sub)gradient; it brings two equivalent optimization models, a.k.a. the *primal-dual model*

$$\min_u \max_{p_1,\dots,p_n} \ \Big\langle u, \sum_{i=1}^{n} p_i \Big\rangle - \sum_{i=1}^{n} f_i^*(p_i) \qquad (3)$$

and the *dual model*

$$\max_{p_1,\dots,p_n} \ -\sum_{i=1}^{n} f_i^*(p_i), \quad \text{s.t.} \quad \sum_{i=1}^{n} p_i = 0, \qquad (4)$$

to the studied convex minimization problem (2), where $f_i^*$ denotes the convex conjugate of $f_i$.
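A one-dimensional example makes the conjugacy concrete: for $f(u) = |u|$, the convex conjugate is the characteristic function of the interval $[-1,1]$,

$$f^*(p) = \sup_u \big( up - |u| \big) = \chi_{[-1,1]}(p), \qquad |u| = \max_{|p|\le 1} \, u\,p,$$

and the maximizing dual variable $p = \operatorname{sign}(u)$ is exactly a (sub)gradient of $f$ at $u$.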

Compared with traditional first-order gradient-descent algorithms, which directly evaluate the gradient of each function term at each iteration and iteratively improve the approximation of the optimum, the dual model (4) provides another expression for analyzing the original convex optimization model (2) and delivers a novel point of view for designing new first-order iterative algorithms, where the optimum of (2) works exactly as the optimal multiplier to the linear equality constraint, as demonstrated in the Lagrangian function of the primal-dual model (3) (see Sec. 2. for more details).
In practice, such a dual-formulation-based approach enjoys great advantages in both mathematical analysis and algorithmic design:

a. each function term $f_i^*(p_i)$ of its energy function depends solely on an independent variable $p_i$, which naturally leads to an efficient splitting scheme that tackles the optimization problem in a simple divide-and-conquer way, or to a stochastic descent scheme with low iteration cost;

b. a unified algorithmic framework to compute the optimal multiplier can be developed by the *augmented Lagrangian method* (ALM), which involves two sequential steps at each iteration:

$$(p_1^{k+1},\dots,p_n^{k+1}) := \arg\max_{p_1,\dots,p_n} \ -\sum_{i=1}^{n} f_i^*(p_i) + \Big\langle u^k, \sum_{i=1}^{n} p_i \Big\rangle - \frac{c}{2}\Big\|\sum_{i=1}^{n} p_i\Big\|^2 \qquad (5)$$

$$u^{k+1} := u^k - c \sum_{i=1}^{n} p_i^{k+1} \qquad (6)$$

with $c > 0$, and is capable of setting up high-performance parallel implementations under the same numerical perspective;

c. the equivalent dual model in (4) additionally brings new insights to facilitate analyzing its original model (2) and discovers close connections between distinct optimization topics (see Sec. 3. and 4. for details).
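The ALM steps (5)-(6) can be sketched on a toy instance (our own illustration; the quadratic choice of $f_i$ and all names are assumptions, not from the paper). For $f_i(u) = \frac{1}{2}(u - a_i)^2$, each conjugate $f_i^*(p) = \frac{1}{2}p^2 + a_i p$ makes the $p_i$-maximization closed form, and the multiplier $u$ converges to the minimizer of the sum, the mean of the $a_i$:

```python
import numpy as np

# Toy instance of (2): minimize f1(u) + f2(u) with f_i(u) = 0.5 * (u - a_i)^2.
# Conjugates: f_i*(p) = 0.5 * p^2 + a_i * p, so each p_i-step of (5) is closed form.
a = np.array([0.0, 2.0])   # the minimizer of the sum is mean(a) = 1.0

def alm(a, c=1.0, n_iter=50):
    u = 0.0
    p = np.zeros_like(a)
    for _ in range(n_iter):
        # Step (5): maximize the augmented Lagrangian over each p_i in turn
        # (one Gauss-Seidel sweep); setting the derivative to zero,
        #   -(p_i + a_i) + u - c * sum(p) = 0,
        # gives the closed-form update below.
        for i in range(len(a)):
            others = p.sum() - p[i]
            p[i] = (u - a[i] - c * others) / (1.0 + c)
        # Step (6): multiplier update along the residual of sum(p) = 0
        u = u - c * p.sum()
    return u, p

u, p = alm(a)
print(u)   # -> 1.0
```

At convergence the dual feasibility $\sum_i p_i = 0$ of (4) holds and $p_i = u - a_i$, i.e. each $p_i$ equals the gradient of $f_i$ at the optimum, as the duality discussion above describes.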
