Oberwolfach Reports


Volume 2, Issue 3, 2005, pp. 2091–2138
DOI: 10.4171/OWR/2005/37

Published online: 2006-06-30

Mini-Workshop: Convergence of Adaptive Algorithms

Mark Ainsworth[1], Carsten Carstensen[2] and Willy Dörfler[3]

(1) Strathclyde University, Glasgow, United Kingdom
(2) Humboldt-Universität zu Berlin, Germany
(3) Technische Universität Karlsruhe, Germany

\noindent The Oberwolfach mini-workshop ``Convergence of Adaptive Algorithms'' originated from a previous Oberwolfach meeting 16/2004 on the topic of ``Self-adaptive Methods for Partial Differential Equations'', which took place in Spring 2004. One motivation for the mini-workshop was the resolution of the key issues of `error reduction' in adaptive finite element schemes and the necessity, or otherwise, of `coarsening strategies' in adaptive algorithms. While the former topic might be regarded as more theoretical, the latter has important practical repercussions: essentially every practical numerical example indicates that coarsening is unnecessary, yet the existing proofs of optimal complexity suggest that coarsening is essential if one is to control the discretisation error at optimal computational cost. Against this background, the mini-workshop brought together 18 leading experts on the convergence of adaptive finite element methods, representing eight countries and three continents, who identified and discussed the following open questions:
\begin{enumerate}
\item For which classes of problems and adaptive finite element methods can convergence and error reduction properties be guaranteed?
\item In what sense, and for which classes of algorithms and mesh refinement schemes, are adaptive algorithms optimal?
\item Is coarsening necessary to guarantee the optimality of an adaptive algorithm?
\item Can the proofs of convergence for adaptive algorithms be carried over from the bulk criterion to other, more widespread criteria often used in practice, such as the maximum criterion?
\end{enumerate}
During the mini-workshop, 11 talks on adaptive finite element methods were given, covering a range of new extensions to the classical convergence analysis.
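The bulk criterion and the maximum criterion contrasted in the last question can be illustrated on a vector of local a posteriori error indicators. The following Python sketch is our own illustration (function names and the parameter values $\theta$ and $\gamma$ are chosen for exposition, not taken from the report):

```python
def bulk_marking(eta, theta=0.5):
    """Bulk (Dörfler) criterion: mark a smallest set of elements whose
    combined squared indicators exceed theta times the total."""
    order = sorted(range(len(eta)), key=lambda i: eta[i], reverse=True)
    total = sum(e * e for e in eta)
    marked, acc = set(), 0.0
    for i in order:
        if acc >= theta * total:
            break
        marked.add(i)
        acc += eta[i] ** 2
    return marked

def maximum_marking(eta, gamma=0.5):
    """Maximum criterion: mark every element whose indicator exceeds
    a fixed fraction gamma of the largest indicator."""
    eta_max = max(eta)
    return {i for i, e in enumerate(eta) if e >= gamma * eta_max}

indicators = [0.9, 0.5, 0.4, 0.1, 0.05]
print(sorted(bulk_marking(indicators)))     # -> [0]
print(sorted(maximum_marking(indicators)))  # -> [0, 1]
```

Note that the two rules can mark quite different sets: the bulk criterion controls a fraction of the total (squared) estimated error, while the maximum criterion only compares each element against the current worst one, which is why transferring convergence proofs from the former to the latter is non-trivial.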
The talks addressed important issues including the role of coarsening, marking rules, $hp$-adaptive refinement strategies and dual weighted residual (DWR) adaptive methods, in addition to the convergence of non-conforming and mixed methods. The participants also presented very recent work on applications to new classes of equations, e.g.\ for rough and non-conforming obstacles, for the Laplace--Beltrami operator and for the Stokes equations. The presentations were complemented by several wide-ranging discussion sessions on open questions and future directions in the field. In particular, it was widely felt that, for the class of adaptive algorithms for which a proof of optimality exists, more numerical experiments are needed to achieve a deeper understanding of the insights and issues highlighted by the abstract analysis. Moreover, numerical experiments were seen as important for providing quantitative information on the generic constants that appear in the abstract error bounds, since it appears infeasible to derive realistic estimates of these constants from the existing theory. In a similar vein, it would also be of considerable interest to quantify the savings in computational effort obtained with different adaptive schemes, in comparison with each other and with uniform refinement. Furthermore, the theory may be used to identify specific examples where coarsening steps are really needed to attain an optimal algorithm. More generally, the identification of a suite of benchmark tests, and comparisons with other adaptive strategies for which current theory is lacking, was also suggested. The importance of understanding the relationship between the numerical solution and the best approximation in the pre-asymptotic range was also stressed, since one can construct problems for which the cost of computation in the asymptotic range is prohibitively high.
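The questions about coarsening and optimality are posed against the standard adaptive loop SOLVE--ESTIMATE--MARK--REFINE; the proofs of optimal complexity under discussion insert an additional COARSEN step into this loop. As a minimal, self-contained sketch (our own toy illustration, not an algorithm from the report), the loop below adaptively refines a 1D mesh for piecewise-linear interpolation of $\sqrt{x}$ on $[0,1]$, using midpoint interpolation errors as indicators, the bulk criterion for marking, and bisection for refinement:

```python
import math

def adapt(f, a=0.0, b=1.0, tol=1e-3, theta=0.5, max_iter=100):
    """Toy ESTIMATE -> MARK -> REFINE loop on a 1D mesh (no COARSEN step).
    Indicator per interval: midpoint error of the piecewise-linear
    interpolant of f, a stand-in for an a posteriori error estimator."""
    mesh = [(a, b)]
    for _ in range(max_iter):
        # ESTIMATE: local indicators eta_T per interval
        eta = [abs(f(0.5 * (l + r)) - 0.5 * (f(l) + f(r))) for (l, r) in mesh]
        total = sum(e * e for e in eta)
        if math.sqrt(total) < tol:
            break
        # MARK: bulk criterion over the squared indicators
        order = sorted(range(len(mesh)), key=lambda i: eta[i], reverse=True)
        marked, acc = set(), 0.0
        for i in order:
            marked.add(i)
            acc += eta[i] ** 2
            if acc >= theta * total:
                break
        # REFINE: bisect marked intervals
        new_mesh = []
        for i, (l, r) in enumerate(mesh):
            if i in marked:
                m = 0.5 * (l + r)
                new_mesh += [(l, m), (m, r)]
            else:
                new_mesh.append((l, r))
        mesh = new_mesh
    return mesh

mesh = adapt(math.sqrt)
lengths = sorted(r - l for (l, r) in mesh)
# The mesh is graded: intervals near the x = 0 singularity of sqrt'
# are much smaller than those away from it.
print(len(mesh), lengths[0] < lengths[-1])
```

The grading of the resulting mesh toward the singularity at $x=0$ is exactly the behaviour adaptive methods are designed to produce; the open theoretical question is whether such a loop, without coarsening, can be optimally efficient in general.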
Duality-based adaptive strategies compute a weighting of the relevance of the data in the course of the calculation. Starting from this aspect, it was also discussed how the convergence analysis of adaptive algorithms can be related to a data analysis of the problem. Participants proposed that the analysis of duality-based strategies indicates that, after sufficiently many adaptive refinement steps, the best strategy may simply be to continue the computation with uniform refinement; the presence of singularities in the solution may play a subtle role here. Part of the session was dedicated to adaptive refinement strategies in three space dimensions, including the question of convergence of adaptive methods in this setting. Another hotly discussed topic was the set of outstanding $hp$-approximation issues. Participants agreed that automatic mechanisms exist for deciding when $h$- and when $p$-refinement is preferable, but that there is a definite need for further fundamental improvements. The development and analysis of reliable and efficient error estimators is less advanced for the $p$- and $hp$-versions of the finite element method than for the $h$-version. Similarly, convergence proofs for $hp$-adaptive finite element methods need to be addressed in more detail in the future.

