13 October 2017
14:30
Louvain-la-Neuve
ISBA - C115 (Seminar Room Bernoulli)
Optimal survey schemes for stochastic gradient descent with applications to M-estimation
Abstract:
Iterative stochastic approximation methods are widely used to solve M-estimation problems, particularly in the context of predictive learning. In situations that will become increasingly common in the Big Data era, the available datasets are so massive that computing statistics over the full sample is computationally burdensome, if not infeasible. A natural and popular approach to gradient descent in this context consists in substituting the "full data" statistics with counterparts based on subsamples of manageable size picked at random. The main purpose of this research is to investigate the impact of survey sampling with unequal inclusion probabilities on stochastic gradient descent-based M-estimation methods. Precisely, we prove that, in the presence of some a priori information, one may significantly increase statistical accuracy in terms of limit variance by choosing appropriate first-order inclusion probabilities. These results are established by asymptotic theorems and are also supported by illustrative numerical experiments.
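The idea of replacing the full-data gradient with a weighted subsample gradient can be sketched as follows. This is a minimal illustration, not the authors' method: it assumes a least-squares M-estimation problem, takes row norms of the design matrix as a stand-in for the a priori information guiding the first-order inclusion probabilities, and uses inverse-probability (Horvitz-Thompson-style) weights so the subsampled gradient remains unbiased for the full-data gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares M-estimation problem (purely illustrative).
n, d = 5000, 3
X = rng.normal(size=(n, d))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + rng.normal(scale=0.1, size=n)

# Hypothetical a priori information: draw points with probability
# proportional to a cheap proxy for their gradient magnitude (the row
# norm here). These play the role of first-order inclusion probabilities.
p = np.linalg.norm(X, axis=1)
p = p / p.sum()

def sgd_survey(X, y, p, n_iter=20000, batch=10, step0=0.5):
    """SGD where each step uses a random subsample drawn with
    unequal probabilities p, reweighted to stay unbiased."""
    n, d = X.shape
    theta = np.zeros(d)
    for t in range(1, n_iter + 1):
        idx = rng.choice(n, size=batch, p=p)
        # Inverse-probability weights: E[weighted gradient] equals
        # the full-data gradient, whatever p is.
        w = 1.0 / (n * p[idx])
        resid = X[idx] @ theta - y[idx]
        grad = (w * resid) @ X[idx] / batch
        theta -= (step0 / np.sqrt(t)) * grad
    return theta

theta_hat = sgd_survey(X, y, p)
```

With uniform p this reduces to plain minibatch SGD; the point of the talk's results is that well-chosen non-uniform inclusion probabilities can reduce the limit variance of the resulting estimator.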
This is a joint work with Stéphan Clémençon (Télécom ParisTech), Patrice Bertail (Université Paris Ouest) and Guillaume Papa (Télécom ParisTech).