Local Regularization for the Nonlinear Autoconvolution Problem

Zhewei Dai and Patricia K. Lamm

Submitted


Abstract:

We develop a local regularization theory for the nonlinear inverse autoconvolution problem. Unlike classical regularization techniques such as Tikhonov regularization, this theory provides regularization methods that preserve the causal nature of the autoconvolution problem, allowing for fast sequential numerical solution (O(rN^2 - r^2 N) flops, where r << N, for the method discussed in this paper as applied to the nonlinear problem; in comparison, the cost of Tikhonov regularization applied to a general linear problem is O(N^3) flops). We prove convergence of the regularized solutions to the true solution as the noise level in the data shrinks to zero and supply convergence rates for both L^2 and continuous data. We propose several regularization methods and provide a theoretical basis for their convergence; notably, this class of methods does not require an initial guess of the unknown solution. Our numerical results confirm the effectiveness of the methods, comparing favorably to numerical examples found in the literature for the autoconvolution problem (e.g., [Fleischer, 1999] for examples using Tikhonov regularization with total variation constraints, and [Janno, 2000] for examples using the method of Lavrent'ev); this is especially true for the recovery of sharp features in the unknown solution. We also demonstrate the effectiveness of our method in cases not covered by the theory.
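To make the cost comparison in the abstract concrete, the following minimal Python sketch (not part of the paper) evaluates the two quoted leading-order flop counts, rN^2 - r^2 N for the sequential local regularization method and N^3 for Tikhonov regularization on a general dense linear problem. The sample grid sizes N and local-regularization parameters r below are illustrative assumptions only.

```python
# Illustrative comparison of the leading-order flop counts quoted in the abstract.
# N and r values here are hypothetical, chosen only to show the scaling when r << N.

def local_regularization_flops(N: int, r: int) -> int:
    """Leading-order flop count quoted for the sequential local method."""
    return r * N**2 - r**2 * N

def tikhonov_flops(N: int) -> int:
    """Leading-order flop count for Tikhonov regularization on a dense linear problem."""
    return N**3

if __name__ == "__main__":
    for N, r in [(500, 10), (2000, 20), (10000, 50)]:  # hypothetical problem sizes
        local = local_regularization_flops(N, r)
        tikh = tikhonov_flops(N)
        print(f"N={N:6d}, r={r:3d}: local ~ {local:.2e} flops, "
              f"Tikhonov ~ {tikh:.2e} flops, ratio ~ {tikh / local:.1f}x")
```

For r much smaller than N, the local cost is roughly rN^2, so the ratio to N^3 grows like N/r; the script simply prints this gap for a few sample sizes.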

Contact: lamm@math.msu.edu