Abstract: The objective of this report is the development and examination of Bayes Linear methods within the framework of Bayesian inverse problems. First, this framework, along with that of the classical inverse problem, is motivated and briefly introduced. Thereafter, a rigorous presentation of the theory of both conditional expectations and Bayes Linear approximations for Hilbert-space-valued random variables is given, with emphasis in both cases on the best-estimator property. These estimators are then used as point estimators of the solution of a Bayesian inverse problem. However, due to the complexity of the examined model, the (mostly simulation-based) derivation of the conditional expectation is computationally expensive, while the results of the cheaper Bayes Linear estimator are typically rather poor. The Ensemble Kalman Filter, an efficient method for approximating conditional expectations, is therefore motivated as a sequential Bayes Linear strategy and presented in both the data assimilation and the Sequential Monte Carlo setting. With the aim of improving the efficiency of the Ensemble Kalman Filter through an adaptive step length, a gradient-free Wolfe-type condition, based on the Bayes Linear estimator, is constructed and discussed.
All presented techniques are examined numerically in several experiments. In particular, the estimation results of analytical and simulation-based Bayes Linear estimators are compared with those of the Ensemble Kalman Filter and of a Monte Carlo simulation (based on self-normalised importance sampling) of the conditional expectation. Furthermore, the Ensemble Kalman Filter is tested with and without the above-mentioned Bayes Linear line search. Finally, the Ensemble Kalman Filter is considered in a situation where the posterior distribution is multimodal.