We present a ``predictor-corrector'' type of regularization method for inverse problems modeled by first-kind Volterra integral equations and extend the convergence/regularization theory developed in [P. K. Lamm, J. Math. Anal. Appl., 195:469--494, 1995] to the case where the integral kernel satisfies general $\nu$-smoothing conditions. The theoretical basis for this method comes from replacing the original first-kind equation by a related second-kind equation constructed using ``future values'' of the original kernel and of the data on a small interval of length $\delr > 0$. In practical implementations the method takes the form of a sequential regularization scheme in which one first predicts a rigid (regularized) solution over a small interval and then, before moving forward in the sequential process, corrects that solution in order to avoid over-regularization and to improve accuracy.
In addition to developing a convergence theory for noise-free data, we show how selecting the regularization parameter $\delr$ as a function of the level $\delta$ of error present in the data facilitates convergence in the case of noisy data. Finally, to further examine the extent to which $\delr$ improves stability, we show that increasing $\delr$ decreases the condition number of the matrices associated with a discretization of the original problem.
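The conditioning effect can be illustrated numerically in the simplest $1$-smoothing case, the kernel $k \equiv 1$. The sketch below is an assumption-laden simplification, not the paper's full $\nu$-smoothing construction: it discretizes $(Au)(t) = \int_0^t u(s)\,ds$ by the rectangle rule, and models the second-kind ``future value'' equation by holding the predicted solution constant over the future interval of length $\delta = r\,\Delta t$ (the integer $r$ standing in for a discretized $\delr$), which shifts the matrix by $(\delta/2)I$.

```python
import numpy as np

n = 200                 # number of grid points on [0, 1]
dt = 1.0 / n

# First-kind Volterra operator with kernel k ≡ 1, i.e. (Au)(t) = ∫_0^t u(s) ds,
# discretized by the rectangle rule: A = dt * (lower-triangular matrix of ones).
A = dt * np.tril(np.ones((n, n)))

def second_kind(r):
    """Simplified second-kind 'future value' matrix (assumption: k ≡ 1 and the
    solution held constant over the future interval [t, t + delta]).

    Averaging the data over the future interval and freezing u there gives
        (delta/2) u(t) + ∫_0^t u(s) ds = (1/delta) ∫_0^delta f(t + rho) d rho,
    whose discretization is A + (delta/2) I with delta = r * dt.
    """
    delta = r * dt
    return A + 0.5 * delta * np.eye(n)

# Condition numbers shrink as the future interval (r grid steps) grows.
print(f"cond(A)            = {np.linalg.cond(A):10.2f}")
for r in (1, 2, 4, 8):
    print(f"cond(second_kind({r})) = {np.linalg.cond(second_kind(r)):10.2f}")
```

With $r = 0$ the second-kind matrix reduces to the unregularized first-kind system; increasing $r$ lifts the diagonal and lowers the condition number, in line with the stability discussion above.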