Inverse Problems, Regularization and Applications

06/13/2019

by Abinash Nayak, et al.

Inverse problems arise in a wide spectrum of applications, in fields ranging from engineering to scientific computing. Connected with the rise of interest in inverse problems is the development and analysis of regularization methods, such as truncated singular value decomposition (TSVD), Tikhonov regularization, or iterative regularization methods (like Landweber iteration), which are a necessity in most inverse problems due to their ill-posedness. In this thesis we propose a new iterative regularization technique for solving inverse problems that does not depend on external parameters, thereby avoiding the difficulties associated with choosing them. To boost the convergence rate of the iterative method, different descent directions are provided, depending on the source conditions, which are based on specific a-priori knowledge about the solution. We show that this method is very robust to the presence of (extreme) errors in the data. In addition, we provide a very efficient (heuristic) stopping strategy, which is essential for an iterative regularization method, even in the absence of noise information. This is crucial because most regularization methods depend critically on the noise information (error norm) to determine the stopping rule, whereas for real-life data it is usually unknown. To illustrate the effectiveness and computational efficiency of this method, we apply it to numerically solve some classical integral inverse problems, such as Fredholm or Volterra type integral equations (in particular, numerical differentiation), and compare the results with standard regularization methods such as Tikhonov regularization and TSVD.
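To make the baselines mentioned in the abstract concrete, the following is a minimal, self-contained sketch (not the thesis's proposed method) of Tikhonov regularization, TSVD, and Landweber iteration with a discrepancy-principle stopping rule, applied to numerical differentiation posed as a Volterra integral equation. The discretization, the noise level `delta`, and all parameter choices (`alpha`, the truncation cutoff, the step size `omega`, `tau`) are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

# Discretize the Volterra operator (A f)(x) = \int_0^x f(t) dt on [0, 1]
# with the rectangle rule; numerical differentiation amounts to inverting A.
n = 200
h = 1.0 / n
t = (np.arange(n) + 0.5) * h
A = h * np.tril(np.ones((n, n)))           # lower-triangular quadrature matrix

f_true = np.sin(2 * np.pi * t)             # hypothetical exact solution
g_exact = A @ f_true
rng = np.random.default_rng(0)
delta = 1e-3                                # assumed per-sample noise level
g = g_exact + delta * rng.standard_normal(n)

# --- Tikhonov regularization: minimize ||A f - g||^2 + alpha ||f||^2 ---
alpha = 1e-4                                # illustrative choice, not optimized
f_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ g)

# --- TSVD: keep only singular components above a cutoff ---
U, s, Vt = np.linalg.svd(A)
k = int(np.sum(s > 1e-2))                   # illustrative truncation level
f_tsvd = Vt[:k].T @ ((U[:, :k].T @ g) / s[:k])

# --- Landweber iteration, stopped by the discrepancy principle ---
omega = 1.0 / s[0] ** 2                     # step size below 2 / ||A||^2
tau = 1.1
f_lw = np.zeros(n)
for it in range(50000):
    r = g - A @ f_lw
    # stop once the residual is on the order of the (known) noise norm
    if np.linalg.norm(r) <= tau * delta * np.sqrt(n):
        break
    f_lw = f_lw + omega * (A.T @ r)

for name, f in [("Tikhonov", f_tik), ("TSVD", f_tsvd), ("Landweber", f_lw)]:
    rel_err = np.linalg.norm(f - f_true) / np.linalg.norm(f_true)
    print(f"{name:10s} relative error: {rel_err:.3f}")
```

Note that the Landweber loop above stops using the noise norm, which is exactly the information the abstract says is usually unavailable for real-life data; the thesis's heuristic stopping strategy is meant to remove that dependence.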
