
Greedy reconstruction algorithms for function approximation

Ciaramella, Gabriele; Verani, Marco
2026-01-01

Abstract

In function approximation problems, two fundamental challenges arise: selecting informative data points and designing an effective approximation ansatz. This paper presents and analyzes greedy reconstruction algorithms that address both challenges simultaneously. Our algorithms select an optimal subset of data points while constructing a minimal yet accurate structure for the set of approximating functions. Two classes of algorithms are proposed, each leveraging the dataset and function evaluations in distinct ways across different phases of the approximation process. A key strength of our approach is its flexibility, which allows for linear and nonlinear parameterizations of the approximation ansatz. To demonstrate the broad applicability of our methods, we consider three representative classes of problems: polynomial interpolation, kernel-based approximation, and neural network approximation. The first two classes, which involve linear parameterizations, are accompanied by theoretical convergence results and are shown to be equivalent to existing greedy strategies. For neural networks, which involve nonlinear parameterizations, we validate the effectiveness of our framework through extensive numerical experiments.
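As context for the abstract's mention of kernel-based approximation with greedily selected data points, the sketch below illustrates one classical strategy of this kind: f-greedy point selection for a Gaussian-kernel interpolant, which repeatedly adds the data point where the current interpolant's residual is largest. This is a minimal, self-contained illustration of the general idea, not the authors' algorithm; the kernel, its shape parameter `eps`, the test function, and all variable names are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, eps=2.0):
    # Gaussian RBF kernel matrix between 1-D point sets X and Y
    # (eps is an illustrative shape parameter, not from the paper)
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-eps * d2)

def f_greedy_select(x, f, n_points=8):
    """Greedily pick interpolation points where the current kernel
    interpolant has the largest residual (classical f-greedy rule)."""
    selected = [int(np.argmax(np.abs(f)))]  # start at the largest |f|
    for _ in range(n_points - 1):
        # Fit the kernel interpolant on the points chosen so far
        K = gaussian_kernel(x[selected], x[selected])
        coeffs = np.linalg.solve(K, f[selected])
        s = gaussian_kernel(x, x[selected]) @ coeffs
        residual = np.abs(f - s)
        residual[selected] = 0.0  # guard against re-picking a point
        selected.append(int(np.argmax(residual)))
    return selected

# Illustrative target: a smooth function sampled on a fine grid
x = np.linspace(-1.0, 1.0, 201)
f = np.sin(4 * x) + 0.3 * x

idx = f_greedy_select(x, f, n_points=8)
K = gaussian_kernel(x[idx], x[idx])
s = gaussian_kernel(x, x[idx]) @ np.linalg.solve(K, f[idx])
print("selected points:", np.sort(x[idx]))
print("max interpolation error:", np.max(np.abs(f - s)))
```

The greedy loop is the point of the sketch: each iteration reuses only the already-selected subset to build the next interpolant, so the data points and the (linear) approximation ansatz grow together, which is the coupling the abstract describes.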
Data-driven methods
Function approximation
Greedy reconstruction algorithms
Neural networks
Polynomial interpolation
Files for this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1309346
Citations
  • PMC: n/a
  • Scopus: 0
  • ISI: 0