ARC 12/17-045

Modelling dependence between risks is a central issue for decision making in risky environments. Optimal decisions are tractable only when the dependence structure is simple, which requires strong assumptions on the underlying data generating process, such as Gaussianity, weak dependence, and stationarity. Such assumptions, however, are rarely justified in modern risk management. Features of the data to be taken into account include non-elliptical dependence, heavy tails, long memory, and nonstationarity stemming from structural breaks or time-varying cross-covariances. Moreover, the ease with which data can now be collected makes it possible to consider datasets with a large number of variables or with data measured nearly continuously over time or in space. This situation is typical of actuarial and financial risks and particularly concerns questions of risk diversification and institutional solvency. Although the risks we have in mind stem from insurance, finance, and economics, the mathematical models used to describe them are generic and apply to other fields too.

The project concerns fundamental research on probabilistic, statistical, and econometric models for dependence. The overall aim is to construct new ways of measuring and modelling risks in systems with intricate dependence structures, moving towards model assumptions that better reflect the underlying complexity. Particular attention will be paid to the behaviour of such systems in periods of distress, that is, upon the arrival of shocks, after structural breaks, or when comovements between risk factors are higher than usual. The models are stochastic in nature, that is, formulated in the language of probability, statistics, and econometrics. Within this context, we will focus on a number of specific challenges:

  • the inadequacy of the Gaussian distribution to capture important features of the data, in particular the occurrence of events of large magnitude (fat tails) and their interdependencies (tail dependence), as illustrated by the sketch after this list;
  • the time-varying nature of data generating processes, violating usual stationarity assumptions;
  • the complexity of data objects, taking the form of large vectors, curves, or images.
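
To make the first challenge concrete, the standard copula-theoretic contrast between Gaussian and heavy-tailed dependence can be expressed through the upper tail-dependence coefficient; the sketch below is purely illustrative, and the symbols \(\lambda_U\), \(\rho\), \(\nu\), and \(t_{\nu+1}\) are introduced here for that purpose only.
\[
\lambda_U \;=\; \lim_{u \uparrow 1} \Pr\bigl( U_2 > u \mid U_1 > u \bigr).
\]
For a bivariate Gaussian copula with correlation \(\rho < 1\), one has \(\lambda_U = 0\): joint extremes are asymptotically independent, however strong the correlation. For a Student-\(t\) copula with \(\nu\) degrees of freedom and correlation \(\rho\),
\[
\lambda_U \;=\; 2\, t_{\nu+1}\!\left( -\sqrt{\frac{(\nu+1)(1-\rho)}{1+\rho}} \right) \;>\; 0,
\]
where \(t_{\nu+1}\) denotes the distribution function of the Student-\(t\) distribution with \(\nu+1\) degrees of freedom, so that the conditional probability of joint extreme losses remains strictly positive in the limit.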

 
