Under mild assumptions on the kernel, we obtain the best known error rates in a regularized learning scenario taking place in the corresponding reproducing kernel Hilbert space (RKHS). The main novelty in the analysis is a proof that one can use a regularization term that grows significantly slower than the standard quadratic growth in the RKHS norm.
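As a point of reference for the regularized learning scenario the abstract describes, here is a minimal sketch of the *standard* quadratic-penalty case in an RKHS — kernel ridge regression with penalty $\lambda\|f\|_K^2$ — which is the baseline the paper's slower-growing regularizer improves upon. The kernel, data, and parameter values are made up for illustration.

```python
import numpy as np

# Toy data: noisy observations of a smooth target on [0, 1].
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=40)
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(40)

def gaussian_kernel(a, b, gamma=10.0):
    """Gaussian (RBF) kernel k(a, b) = exp(-gamma * (a - b)^2)."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# Regularized ERM in the RKHS with the standard quadratic penalty:
#   minimize (1/n) sum_i (f(X_i) - y_i)^2 + lam * ||f||_K^2.
# By the representer theorem the minimizer is f = sum_i alpha_i k(X_i, .)
# with alpha = (K + n * lam * I)^{-1} y.
lam = 1e-3
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + len(X) * lam * np.eye(len(X)), y)

def f_hat(x_new):
    """Evaluate the fitted RKHS function at new points."""
    return gaussian_kernel(np.atleast_1d(x_new), X) @ alpha

# Predict near the peak of sin(2*pi*x); the true value at x = 0.25 is 1.
prediction = f_hat(0.25).item()
```

The point of the abstract is that the quadratic growth of $\lambda\|f\|_K^2$ above can be relaxed to a significantly slower-growing function of the RKHS norm while retaining the error rates.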
We present an argument based on the multidimensional and the uniform central limit theorems, proving that, under some geometric assumptions relating the target function $T$ and the learning class $F$, the excess risk of the empirical risk minimization algorithm is lower bounded by
\[\frac{\mathbb{E}\sup_{q\in Q}G_{q}}{\sqrt{n}}\,\delta,\]
where $(G_q)_{q\in Q}$ is a canonical Gaussian process associated with $Q$ (a well-chosen subset of $F$) and $\delta$ is a parameter governing the oscillations of the em...
We study the performance of the empirical risk minimization procedure (ERM for short), with respect to the quadratic risk, in the context of \textit{convex aggregation}, in which one wants to construct a procedure whose risk is as close as possible to that of the best function in the convex hull of an arbitrary finite class $F$. We show that ERM performed in the convex hull of $F$ is an optimal aggregation procedure for the convex aggregation problem. We also show that if this procedure is used for ...
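To make the convex aggregation setup concrete, here is a minimal sketch of ERM over the convex hull of a finite dictionary: minimize the empirical quadratic risk over simplex weights, here via exponentiated-gradient (mirror) descent, which keeps the iterates on the simplex. The dictionary, data, and step size are made up for the example.

```python
import numpy as np

# Synthetic regression data (assumed for illustration).
rng = np.random.default_rng(1)
n = 200
X = rng.uniform(-1.0, 1.0, n)
y = 0.7 * X + 0.2 * rng.standard_normal(n)

# A finite dictionary F of candidate functions.
F = [lambda x: x, lambda x: x ** 2, lambda x: np.sin(x)]
G = np.stack([f(X) for f in F], axis=1)   # n x |F| matrix of evaluations

# ERM over conv(F): minimize (1/n) * ||G w - y||^2 over the simplex
# {w : w_j >= 0, sum_j w_j = 1} by exponentiated-gradient descent.
w = np.full(len(F), 1.0 / len(F))
eta = 0.1
for _ in range(5000):
    grad = (2.0 / n) * G.T @ (G @ w - y)
    w = w * np.exp(-eta * grad)   # multiplicative update keeps w > 0
    w /= w.sum()                  # renormalize back onto the simplex

risk = np.mean((G @ w - y) ** 2)
```

The output `w` is the weight vector of the aggregated function $\sum_{f\in F}w_f\, f$; the abstract's result concerns how close its risk gets to the best risk over the convex hull.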
We present a very general chaining method which allows one to control the supremum of the empirical process $\sup_{h \in H} |N^{-1}\sum_{i=1}^N h^2(X_i)-\mathbb{E} h^2|$ in rather general situations. We use this method to establish two main results. First, a quantitative (non-asymptotic) version of the classical Bai-Yin theorem on the singular values of a random matrix with i.i.d. entries that have heavy tails, and second, a sharp estimate on the quadratic empirical process when $H=\{\langle t,\cdot\rangle : t...
Given a finite class of functions $F$, the problem of aggregation is to construct a procedure with a risk as close as possible to the risk of the best element in the class. A classical procedure (PAC-Bayesian statistical learning theory (2004) Paris 6, Statistical Learning Theory and Stochastic Optimization (2001) Springer, Ann. Statist. 28 (2000) 75–87) is the aggregate with exponential weights (AEW), defined by
\[\tilde{f}^{\mathrm{AEW}}=\sum_{f\in F}\widehat{\theta}(f)f,\qquad\mbox...
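The display above is cut off before the weights are defined. For orientation only — not a quotation of the paper — the exponential weights in AEW are standardly taken proportional to the exponentiated (negative, rescaled) empirical risk, with a temperature parameter $T>0$:

\[
\widehat{\theta}(f)=\frac{\exp\bigl(-nR_{n}(f)/T\bigr)}{\sum_{g\in F}\exp\bigl(-nR_{n}(g)/T\bigr)},
\qquad
R_{n}(f)=\frac{1}{n}\sum_{i=1}^{n}\bigl(Y_{i}-f(X_{i})\bigr)^{2}.
\]

Functions with small empirical risk thus receive exponentially larger weight, and $T$ controls how sharply the aggregate concentrates on the empirical minimizer.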