F. Poirion, Q. Mercier (ONERA)
This paper deals with optimization under uncertainty, where the uncertain parameters are modeled by random variables. Contrary to traditional robust approaches, which treat uncertainty through a deterministic worst-case formulation, the stochastic algorithms presented here exploit the distribution of the random variables that model the uncertainty. For single-objective problems, such methods are now classical and are based on the Robbins-Monro algorithm. When several objectives are involved, the optimization problem becomes much more difficult, and the few methods available in the literature rely on genetic algorithms coupled with Monte-Carlo sampling, which is numerically very expensive. We present a new algorithm for solving the expectation formulation of smooth or non-smooth stochastic multi-objective optimization problems. The proposed method extends the classical stochastic gradient algorithm to multi-objective optimization by using the properties of a common descent vector. Both the mean-square and the almost-sure convergence of the algorithm are proven. The efficiency of the algorithm is illustrated and assessed on an academic example.
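To fix ideas, the following Python snippet is a purely illustrative sketch, not the paper's exact algorithm, of what one iteration of such a stochastic common-descent step could look like for two objectives. It assumes the common descent vector is taken as the minimum-norm element of the convex hull of the sampled (sub)gradients, as in multiple-gradient descent approaches; the function names, the toy objectives, and the step-size schedule are hypothetical.

```python
import numpy as np

def common_descent_step(x, sample_grads, step):
    """One illustrative stochastic multi-gradient iteration for two objectives.

    `sample_grads(x)` is assumed to return stochastic (sub)gradients g1, g2 of the
    two expected objectives at x, evaluated on a fresh random sample. The common
    descent vector is the minimum-norm element of the convex hull of {g1, g2},
    which has a closed form for two vectors.
    """
    g1, g2 = sample_grads(x)
    diff = g1 - g2
    denom = float(np.dot(diff, diff))
    # Minimize ||t*g1 + (1-t)*g2||^2 over t in [0, 1].
    t = 0.5 if denom == 0.0 else float(np.clip(np.dot(g2, g2 - g1) / denom, 0.0, 1.0))
    d = t * g1 + (1.0 - t) * g2
    return x - step * d

# Hypothetical usage on two toy expected objectives
# f1(x) = E||x - A||^2 and f2(x) = E||x - B||^2 with random targets A, B.
rng = np.random.default_rng(0)

def sample_grads(x):
    a = rng.normal(1.0, 0.1, size=x.shape)   # sample of A
    b = rng.normal(-1.0, 0.1, size=x.shape)  # sample of B
    return 2.0 * (x - a), 2.0 * (x - b)

x = np.zeros(2)
for k in range(1, 1001):
    x = common_descent_step(x, sample_grads, step=1.0 / k)  # Robbins-Monro-type steps
print(x)  # approaches a Pareto-stationary point between the two targets
```

In this toy setting the iterates settle between the two random targets, i.e. on the Pareto set of the two expected objectives, which is the qualitative behavior the abstract describes for the proposed method.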