An actuary by training, I focused during my PhD on high-dimensional statistics and dependence structure estimation applied to internal modeling in a reinsurance context. I am currently a post-doctoral researcher at UCLouvain, in Belgium, under an FNRS grant. I have a taste for numerical code and open-source software, and most of my work is freely available.
PhD: Dependence structure and risk aggregation in high dimensions, 2019-2022
ICJ & SCOR
Master in Mathematics - Probability, 2018-2019
Bachelor, Master and Diploma in Actuarial Sciences, 2015-2018
Bachelor in Mathematics, 2012-2015
University of Strasbourg
A probability distribution is n-divisible if its n-th convolution root exists. When modeling the dependence structure between several (re)insurance losses by an additive risk factor model, infinite divisibility, that is, n-divisibility for every integer n, is a very desirable property. Moreover, the ability to compute the distribution of a piece (i.e., a convolution root) is also desirable. Unfortunately, although many useful distributions are infinitely divisible, computing the distributions of their pieces is usually a challenging task that requires heavy numerical computations. However, in a few selected cases, particularly the Gamma case, the extraction of the distribution of the pieces can be performed fully parametrically, that is, with negligible numerical cost and zero error. We show how this neat property of Gamma distributions can be leveraged to approximate the pieces of other distributions, and we provide several illustrations of the resulting algorithms.
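The Gamma case is especially simple because the n-th piece is again a Gamma distribution: a Gamma(α, θ) law is the n-fold convolution of Gamma(α/n, θ). A minimal numerical check of this fact (standard NumPy/SciPy, not the paper's code):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Target: Gamma(shape=2.0, scale=1.5). Its n-th convolution root is
# Gamma(shape=2.0/n, scale=1.5): summing n iid pieces recovers the target.
shape, scale, n = 2.0, 1.5, 5
N = 100_000

# Sample each of the n pieces independently and sum them.
pieces = rng.gamma(shape / n, scale, size=(n, N))
total = pieces.sum(axis=0)

# The sum should be statistically indistinguishable from the target law.
ks = stats.kstest(total, stats.gamma(a=shape, scale=scale).cdf)
print(ks.statistic < 0.02)  # True
```

For most other infinitely divisible distributions no such closed form exists, which is what motivates the Gamma-based approximation of the abstract.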
Multivariate generalized Gamma convolutions are distributions defined by a convolutional semi-parametric structure. Their flexible dependence structures, rich marginal possibilities and convenient convolutional expression make them appealing to the practitioner. However, fitting such distributions in high dimensions is a challenge. We propose stochastic estimation procedures based on the approximation of a Laguerre integrated squared error via (shifted) cumulant approximations, evaluated on random projections of the dataset. Through an analysis of our loss via tools from Grassmannian cubatures, sparse optimization on measures and Wasserstein gradient flows, we show the convergence of the stochastic gradient descent to a proper estimator of the high-dimensional distribution. We provide several examples in both low- and high-dimensional settings.
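The random-projection step can be sketched as follows: for models built from independent Gamma factors, a nonnegative linear combination of the components is a univariate Gamma convolution, so a univariate cumulant-based loss can be evaluated on random directions. A minimal sketch on a toy dataset (names and sample sizes are illustrative, not the paper's implementation):

```python
import numpy as np
from scipy.stats import kstat

rng = np.random.default_rng(0)

# Toy dataset standing in for multivariate losses: N observations in dimension d.
N, d = 10_000, 8
X = rng.gamma(2.0, 1.0, size=(N, d))

# Random nonnegative direction on the unit sphere; projecting the sample onto
# it produces a univariate dataset on which a univariate loss can be computed.
u = np.abs(rng.standard_normal(d))
u /= np.linalg.norm(u)
proj = X @ u

# First four empirical cumulants of the projection (k-statistics are unbiased
# estimators of cumulants), the raw material of a cumulant-based loss.
cums = [kstat(proj, n=k) for k in (1, 2, 3, 4)]
print(len(cums))  # 4
```

Repeating this over many random directions and averaging the resulting univariate losses gives a stochastic estimate of a loss on the full high-dimensional distribution.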
The generalized Gamma convolutions class of distributions appeared in Thorin's work while he was investigating the infinite divisibility of the log-Normal and Pareto distributions. Although these distributions have been extensively studied in the univariate case, the multivariate case and the dependence structures that can arise from it have received little attention in the literature. Furthermore, only one projection procedure for the univariate case was recently constructed, and no estimation procedures are available. By expanding the densities of multivariate generalized Gamma convolutions into a tensorized Laguerre basis, we bridge the gap and provide efficient estimation procedures for both the univariate and multivariate cases. We provide some insights into the performance of these procedures, together with a convergent series for the density of multivariate Gamma convolutions, which is shown to be more stable than Moschopoulos's and Mathai's univariate series. We furthermore discuss some examples.
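The Laguerre expansion can be sketched in the univariate case: with the orthonormal Laguerre functions ℓ_k(x) = √2 L_k(2x) e^{-x} on [0, ∞), the coefficients of a density f are a_k = E[ℓ_k(X)], so they can be estimated by sample means. A minimal sketch using this standard basis from the density-estimation literature (not the paper's tensorized multivariate code):

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

rng = np.random.default_rng(0)

# Orthonormal Laguerre functions on [0, inf): l_k(x) = sqrt(2) L_k(2x) e^{-x}.
def laguerre_fun(x, K):
    # Returns a (K, len(x)) array of l_0, ..., l_{K-1} evaluated at x.
    out = np.empty((K, len(x)))
    for k in range(K):
        c = np.zeros(k + 1)
        c[k] = 1.0
        out[k] = np.sqrt(2.0) * lagval(2.0 * x, c) * np.exp(-x)
    return out

# Coefficients a_k = E[l_k(X)] estimated by sample means from Gamma(2, 1) data.
X = rng.gamma(2.0, 1.0, size=50_000)
K = 15
a = laguerre_fun(X, K).mean(axis=1)

# Reconstruct the density on a grid and compare with the true Gamma(2, 1) pdf.
x = np.linspace(0.01, 8.0, 200)
f_hat = a @ laguerre_fun(x, K)
f_true = x * np.exp(-x)  # Gamma(shape=2, scale=1) density
print(np.max(np.abs(f_hat - f_true)) < 0.1)  # True
```

For the Gamma(2, 1) density the expansion is in fact finite (only a_0 and a_1 are nonzero), so the only error here is Monte Carlo noise in the coefficient estimates.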
The R package cort implements object-oriented classes and methods to estimate, simulate and visualize certain types of non-parametric copulas.
We construct the Copula Recursive Tree (CORT) estimator: a flexible, consistent, piecewise linear estimator of a copula, leveraging the patchwork copula formalization and various piecewise constant density estimators. While the patchwork structure imposes a grid, the CORT estimator is data-driven and constructs the (possibly irregular) grid recursively from the data, minimizing a chosen distance on the copula space. The copula constraints make the usual density estimators unusable, whereas the CORT estimator is only concerned with dependence and guarantees the uniformity of margins. Refinements such as localized dimension reduction and bagging are developed, analyzed, and tested through applications on simulated data.
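Copula estimators such as CORT work on pseudo-observations, i.e., rescaled ranks of the data, so that each margin is (nearly) uniform and only the dependence remains. A minimal sketch of this generic preprocessing step (not the cort package's API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dependent sample: Y depends on X, and neither margin is uniform.
N = 5_000
X = rng.gamma(2.0, 1.0, size=N)
Y = X + rng.exponential(1.0, size=N)
data = np.column_stack([X, Y])

# Pseudo-observations: rescaled ranks. Each column becomes (nearly)
# uniform on (0, 1), leaving only the dependence structure to estimate.
ranks = data.argsort(axis=0).argsort(axis=0) + 1
U = ranks / (N + 1)

# Uniform margins: each column has mean exactly 1/2 by construction.
print(np.allclose(U.mean(axis=0), 0.5, atol=0.01))  # True
```

The matrix U is the kind of input a piecewise constant copula density estimator would then partition recursively.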