Oberwolfach Reports

Volume 13, Issue 3, 2016, pp. 1875–1941
DOI: 10.4171/OWR/2016/33

Published online: 2017-04-22

Learning Theory and Approximation

Andreas Christmann[1], Kurt Jetter[2], Steve Smale[3] and Ding-Xuan Zhou[4]

(1) Universität Bayreuth, Germany
(2) Universität Hohenheim, Stuttgart, Germany
(3) City University of Hong Kong, China
(4) City University of Hong Kong, China

The main goal of this workshop – the third of its type at the MFO – was to blend mathematical results from statistical learning theory and approximation theory, strengthening both disciplines and exploiting synergies to address current research questions. Learning theory aims at modeling unknown functional relations and data structures from samples in an automatic manner. Approximation theory is closely connected to learning theory and naturally supports its further development, in particular through the exploration of new, useful algorithms and the theoretical understanding of existing methods. Conversely, the study of learning theory also gives rise to interesting theoretical problems for approximation theory, such as the approximation and sparse representation of functions or the construction of rich reproducing kernel Hilbert spaces on general metric spaces. This workshop concentrated on the following recent topics: pitchfork bifurcation of dynamical systems arising from mathematical foundations of cell development; regularized kernel-based learning in the Big Data situation; deep learning; convergence rates of learning and online learning algorithms; numerical refinement algorithms for learning; and statistical robustness of regularized kernel-based learning.


Christmann Andreas, Jetter Kurt, Smale Steve, Zhou Ding-Xuan: Learning Theory and Approximation. Oberwolfach Rep. 13 (2016), 1875–1941. doi: 10.4171/OWR/2016/33