Learning Theory and Approximation

  • Andreas Christmann, Universität Bayreuth, Germany
  • Kurt Jetter, Universität Hohenheim, Stuttgart, Germany
  • Steve Smale, City University of Hong Kong, China
  • Ding-Xuan Zhou, City University of Hong Kong, China

Abstract

The main goal of this workshop – the third of its kind at the MFO – was to blend mathematical results from statistical learning theory and approximation theory, strengthening both disciplines and exploiting synergies to address current research questions. Learning theory aims at automatically modeling unknown functional relations and data structures from samples. Approximation theory is closely connected to the further development of learning theory, in particular to the exploration of new useful algorithms and to the theoretical understanding of existing methods. Conversely, the study of learning theory gives rise to interesting theoretical problems for approximation theory, such as the approximation and sparse representation of functions or the construction of rich reproducing kernel Hilbert spaces on general metric spaces. This workshop concentrated on the following recent topics: pitchfork bifurcation of dynamical systems arising from the mathematical foundations of cell development; regularized kernel-based learning in the Big Data setting; deep learning; convergence rates of learning and online learning algorithms; numerical refinement algorithms for learning; and statistical robustness of regularized kernel-based learning.

Cite this article

Andreas Christmann, Kurt Jetter, Steve Smale, Ding-Xuan Zhou, Learning Theory and Approximation. Oberwolfach Rep. 13 (2016), no. 3, pp. 1875–1941

DOI 10.4171/OWR/2016/33