Objective
We calculate the actuarial price and the optimal retention point of a Stop-Loss reinsurance for the costs associated with high-cost illness (HCI onwards) medical services, with emphasis on surgery services. We apply the Single Loss Approximation of Albrecher, Beirlant, & Teugels (2017), and we discuss the validity of the LogNormal assumption used in some works for the valuation of this kind of service.
Introduction
We assume the classical Cramér-Lundberg collective risk model for the aggregate loss,
\begin{align*} \label{ALD} S= \sum_{i=1}^N X_i \tag{1} \end{align*}
where $X_1,X_2,\ldots,X_N$ is a sequence of iid random variables with $X_i\geq 0$, cumulative distribution $F_X(x)=\mathbb{P}(X\le x)$ with support $[0,\infty)$, such that $F_{X}(x)<1$ $\forall x>0$, $F_{X}(0_+)=0$, and $\mathbb{E}(X_{i})<\infty$. $N$ and the $X_{i}$ are assumed to be mutually independent random variables $\forall i=1,2,\ldots$ (Bowers, Gerber, Hickman, Jones, & Nesbitt, 1997, p. 367).
The variable $N$ represents the number of surgery claims in one year of HCI, with mass function $p_{n}(x) = \mathbb{P}(N=x)$. We employ several models included in the R library gamlss from Rigby, Stasinopoulos, Heller, & Voudouris (2014), which includes several Mixed Poisson distributions.
The variable $X$ represents the severities of individual surgeries in HCI. Its distribution is modeled by a mixture of light- and heavy-tailed distributions, also called a spliced distribution, defined as
\begin{align*} F_X(x)=F_L(x)1_{\{x\leq u\}}+ F_H(x)1_{\{x>u\}} \tag{2} \end{align*}
where $1_{\{x\leq u\}}$ and $1_{\{x>u\}}$ are indicator variables, $u$ is a threshold, and $F_L$ and $F_H$ are light- and heavy-tailed distributions, respectively. A requisite is that $1-F_H$ is a regularly varying function with index $-\alpha$, i.e., $1-F_H\in \mathcal{R}_{-\alpha}$.
The aggregate loss distribution $F_{S}(x)$ is defined as
\begin{align*} F_{S}(x) & =\sum_{n=0}^{\infty}p_{n}F_{X}^{*n}(x) \tag{3} \end{align*}
where $p_{n}=\mathbb{P}(N=n)$ is the probability distribution of $N$ evaluated at $n$ and $F_{X}^{*n}(x)=\mathbb{P}(X_{1}+X_{2}+\ldots+X_{n}\le x)$ is the $n$-th convolution of $F_X$ with itself.
The calculation of the distribution and the quantiles of $S$ is known to be a difficult task. Some authors have proposed algorithms and methods for their approximation; some are described in Kaas, Goovaerts, Dhaene, & Denuit (2008, ch. 2-3) and Albrecher et al. (2017, ch. 6).
For this approximation, used in the calculation of the optimal retention point and the Stop-Loss reinsurance premium defined later, we use two alternatives proposed in the literature:
- The Single Loss Approximation methodology (SLA onwards).
- The assumption, taken from the Black-Scholes methodology, that $S\sim \text{LogNormal}(\mu,\sigma^2)$.
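As a baseline for comparing these alternatives, the distribution of $S$ in equation (1) can also be approximated by plain Monte Carlo simulation of the collective model. A minimal sketch in Python; the Poisson and LogNormal parameters here are illustrative placeholders, not the fitted models discussed later:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_aggregate(n_sims, lam, mu, sigma):
    """Simulate S = X_1 + ... + X_N with N ~ Poisson(lam) and
    X_i ~ LogNormal(mu, sigma); all parameters are illustrative."""
    counts = rng.poisson(lam, size=n_sims)
    return np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

S = simulate_aggregate(100_000, lam=3.0, mu=0.0, sigma=1.0)
var_99 = np.quantile(S, 0.99)   # empirical 99% quantile of S
```

The empirical quantiles of the simulated sample give a crude benchmark against which the SLA and LogNormal approximations can be compared.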
Frequency Model
Mixed Poisson Law
The Mixed Poisson law is applied when the variance of the random variable $N$ is significantly greater than its expected value. We assume that the mass function of $N$ follows a Poisson law with random parameter $\lambda$ (replaced by $\theta$) and mixing distribution function $U$ (also called the risk structure function), such that
\begin{align*} \mathbb{P}(N=x) = \int_{0}^{\infty} \frac{e^{-\theta t}(\theta t)^x}{x!}dU(\theta) \tag{4} \end{align*}
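For example, when the mixing distribution $U$ in (4) is a Gamma law (taking $t=1$), the resulting mass function is the classical Negative Binomial. A quick numerical check of this fact, with illustrative parameters:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

r, beta = 2.0, 1.5   # illustrative Gamma shape and rate for the mixing law

def mixed_poisson_pmf(x):
    # Equation (4) with t = 1 and U a Gamma(r, rate=beta) distribution
    integrand = lambda th: stats.poisson.pmf(x, th) * stats.gamma.pdf(th, a=r, scale=1.0 / beta)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

# Closed form: Negative Binomial with size r and p = beta / (1 + beta)
p = beta / (1.0 + beta)
max_err = max(abs(mixed_poisson_pmf(x) - stats.nbinom.pmf(x, r, p)) for x in range(8))
```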
obtaining the following goodness-of-fit test p-values:

Test | DEL | PIG | GPO | SCH | NB | PO
---|---|---|---|---|---|---
Kolmogorov-Smirnov | 0.25 | 0.00 | 0.01 | 0.01 | 0.00 | 0.00
Cramer-von Mises | 0.18 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Kuiper | 0.54 | 0.04 | 0.07 | 0.15 | 0.02 | 0.00
Severity Model
Spliced distribution
Albrecher et al. (2017, p. 50) define an $m$-component spliced distribution with probability density function
\begin{align*} \begin{split} f_{X}(x) = \begin{cases} \pi_{1} \frac{f_{1}(x)}{F_{1}(c_{1})-F_{1}(c_{0})} & c_{0} < x \leq c_{1} \\ \pi_{2} \frac{f_{2}(x)}{F_{2}(c_{2})-F_{2}(c_{1})} & c_{1} < x \leq c_{2} \\ \quad \quad \quad \vdots & \quad \quad \ \vdots \\ \pi_{m} \frac{f_{m}(x)}{F_{m}(c_{m})-F_{m}(c_{m-1})}& c_{m-1} < x \leq c_{m} \end{cases} \end{split} \tag{5} \end{align*}
where $f_{i}(x)$ and $F_{i}(x)$ are the pdf and cdf of the $i$-th component of the random variable $X$, respectively, $\pi_{i}>0$ are the weights of each of the components, with $\sum_{i=1}^m \pi_{i} = 1$, and the $c_{i}$ are the endpoints of the intervals on which the random variable $X$ is defined in each component, also called union points.
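As an illustration of (5) with $m=2$, a spliced density can be assembled from a truncated Weibull body and a Pareto tail. The weights, threshold, and parameters below are hypothetical choices for the sketch, not the fitted W-GP model:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

u = 10.0                     # splicing threshold c_1 (hypothetical)
pi1, pi2 = 0.9, 0.1          # component weights, pi1 + pi2 = 1
body = stats.weibull_min(c=1.5, scale=5.0)   # light-tailed body on (0, u]
tail = stats.pareto(b=2.0, scale=u)          # heavy tail supported on (u, inf)

def spliced_pdf(x):
    # Two-component version of equation (5): renormalized body below u,
    # Pareto component (already normalized on (u, inf)) above u
    x = np.asarray(x, dtype=float)
    return np.where(x <= u, pi1 * body.pdf(x) / body.cdf(u), pi2 * tail.pdf(x))

# Sanity check: the spliced density integrates to one
total = quad(spliced_pdf, 0.0, u)[0] + quad(spliced_pdf, u, np.inf)[0]
```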
obtaining the following goodness-of-fit test p-values:
Test | W-GP | BCPE | WEI | GG | GU | GP
---|---|---|---|---|---|---
Kolmogorov-Smirnov | 0.22 | 0.28 | 0.00 | 0.01 | 0.00 | 0.00
Cramer-von Mises | 0.29 | 0.30 | 0.00 | 0.01 | 0.00 | 0.00
Kuiper | 0.33 | 0.24 | 0.00 | 0.00 | 0.00 | 0.00
Supremum Class Upper Tail Anderson-Darling | 0.27 | 0.38 | 0.02 | 0.20 | 0.19 | 0.24
Quadratic Class Upper Tail Anderson-Darling | 0.03 | 0.21 | 0.00 | 0.01 | 0.00 | 0.06
Single Loss Approximation
For the implementation of this approach, we start with the Böcker & Klüppelberg (2005) result, where the authors point out that if $F_{X}$ can be classified within the Subexponential class of distributions, such that
\begin{align*} \lim_{x\to\infty}\frac{1-F_{X}^{*n}(x)}{1-F_{X}(x)}=\lim_{x\to\infty}\frac{\bar{F}_{X}^{*n}(x)}{\bar{F}_{X}(x)}=n \tag{6} \end{align*}
and assuming that $p_{n}$ satisfies
\begin{align*} \sum_{n=0}^\infty (1+\varepsilon)^np_{n}<\infty \text{ for some } \varepsilon>0 \tag{7} \end{align*}
then $F_{S}$ is also classified within the Subexponential class, and its tail behavior is given by
\begin{align*} \label{BK} \bar{F}_{S}(x) \sim \mathbb{E}(N)\bar{F}_{X}(x); \quad \quad x\to\infty \tag{8} \end{align*}
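The asymptotic relation (8) can be illustrated numerically: for a Poisson frequency and Pareto severities (a subexponential law), the empirical tail of $S$ is close to $\mathbb{E}(N)\bar{F}_{X}(x)$ far in the tail. A rough Monte Carlo sketch with illustrative parameters; the comparison is only order-of-magnitude, since (8) is an asymptotic statement:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, alpha = 2.0, 2.5   # illustrative Poisson mean and Pareto tail index

def pareto_sample(size):
    # Pareto severities with survival function x^{-alpha} for x >= 1
    return (1.0 - rng.uniform(size=size)) ** (-1.0 / alpha)

counts = rng.poisson(lam, size=200_000)
S = np.array([pareto_sample(n).sum() for n in counts])

x0 = 30.0
tail_emp = (S > x0).mean()       # empirical P(S > x0)
tail_sla = lam * x0 ** (-alpha)  # E(N) * bar{F}_X(x0), right side of (8)
```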
Risk Measures
Value at Risk (VaR)
From equation (\ref{BK}), Böcker & Klüppelberg (2005, p. 91) present the first approximation of the VaR by the SLA method, for a level $\kappa$ with $0<\kappa<1$, defined as
\begin{align*} \label{VaR1} VaR_{S}(\kappa) = F_{X}^{-1}\left(1 - \frac{1-\kappa}{\mathbb{E}(N)}\right); \quad \quad \kappa\to1 \tag{9} \end{align*}
Although equation (\ref{VaR1}) has a very attractive structure, Böcker & Sprittulla (2006, p. 96) point out that it must be used carefully, because this approximation underestimates the VaR: it does not take into account all the loss events $X_i$ that contribute to the aggregate loss $S$.
In order to overcome this shortcoming and give analytical support to the SLA method, from Albrecher et al. (2017, p. 199) we can obtain a corrected expression for the VaR:
\begin{align*} \label{VaR2} VaR_S(\kappa)\approx F_{X}^{-1}\left(1 - \frac{1-\kappa}{\mathbb{E}(N)}\right)+ \mathbb{E}(X)\left(\frac{\mathbb{E}(N^2)}{\mathbb{E}(N)}-1\right); \quad \kappa\to1 \tag{10} \end{align*}
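Equation (10) is straightforward to implement once $F_{X}^{-1}$ and the first two moments of $N$ are available. A sketch for a Poisson frequency and Pareto severities; these are hypothetical stand-ins for the fitted DEL and W-GP models:

```python
def sla_var(kappa, lam, alpha):
    """Corrected SLA VaR of equation (10) for N ~ Poisson(lam) and
    Pareto severities with survival function x^{-alpha}, x >= 1."""
    # Severity quantile F_X^{-1}(q) = (1 - q)^{-1/alpha}
    q = 1.0 - (1.0 - kappa) / lam
    first_term = (1.0 - q) ** (-1.0 / alpha)
    # E(X) (finite for alpha > 1) and Poisson moments E(N) = lam, E(N^2) = lam + lam^2
    mean_x = alpha / (alpha - 1.0)
    correction = mean_x * ((lam + lam ** 2) / lam - 1.0)
    return first_term + correction

v = sla_var(0.995, lam=2.0, alpha=2.5)
```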
Expected Shortfall (ES/TVaR/AVaR/CVaR)
For the calculation of the ES, we use the basic definition of ES in
Kaas et al. (2008, p. 129)
\begin{align*} \label{VaR3} ES_{S}(\kappa)= \frac{1}{1-\kappa}\int_\kappa^1 VaR_{S}(\theta)\text{d}\theta \tag{11} \end{align*}
Stop-Loss Premium
Finally, from the VaR and the ES, it is possible to define the Stop-Loss premium by the identity (Kaas et al., 2008, p. 129)
\begin{align*} \label{VaR4} \pi(\kappa)=(1-\kappa) \left[ES_{S}(\kappa) - VaR_{S}(\kappa)\right] \tag{12} \end{align*}
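Equations (11) and (12) can be evaluated by numerically averaging the quantile function. A sketch assuming, for illustration only, that $S$ follows a LogNormal law with hypothetical parameters:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

agg = stats.lognorm(s=1.0, scale=np.exp(5.0))   # illustrative model for S

def es(kappa):
    # Equation (11): ES as the average of VaR_S over (kappa, 1)
    val, _ = quad(agg.ppf, kappa, 1.0)
    return val / (1.0 - kappa)

def stop_loss_premium(kappa):
    # Equation (12): pi(kappa) = (1 - kappa) * [ES_S(kappa) - VaR_S(kappa)]
    return (1.0 - kappa) * (es(kappa) - agg.ppf(kappa))

prem = stop_loss_premium(0.95)
```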
Reinsurance
If we define the aggregate costs $S$ as in equation (\ref{ALD}) for the validity period of a policy (usually one year), then a *reinsurance contract* for an HCI can be defined as
\begin{align*} \label{RE1} S= & D + R \tag{13} \end{align*}
where $D$ represents the amount deductible or retained by the insurer and $R$ represents the amount paid by the reinsurer.
In order to define the reinsurance premium $\delta(M^*)$, we assume the expected value principle, defined as
\begin{align*} \label{RE2} \delta(M^*)=(1+\rho)\pi(M^*) \tag{14} \end{align*}
where $M^*=VaR_S(\kappa_{\rho^*})$ is the optimal retention point, $\kappa_{\rho^*}=1-\rho^*$ with $\rho^*=\frac{1}{1+\rho}$ is the optimal condition for the retention, $\rho>0$ is the reinsurer's relative safety loading factor, which can be interpreted as a risk premium rate, and $\pi(\cdot)$ is the Stop-Loss premium.
Optimum Retention Point Estimation
From equations (\ref{VaR2}), (\ref{VaR3}), (\ref{VaR4}), (\ref{RE1}), and (\ref{RE2}), and the fitted DEL distribution for $N$ and W-GP distribution for $X$, we calculate optimal retention points and optimal premiums for different levels of $\rho$.
\(\rho\) | \(\kappa_{\rho^*}\) | \(M^*\) | \(\delta(M^*)\) |
---|---|---|---|
0.1 | 0.090909 | 1112.357 | 199.564 |
0.5 | 0.333333 | 1118.603 | 156.130 |
1.0 | 0.500000 | 1126.077 | 119.829 |
5.0 | 0.833333 | 1179.411 | 509.51 |
10.0 | 0.909091 | 1238.787 | 854.577 |
20.0 | 0.952381 | 1347.044 | 1483.711 |
50.0 | 0.980392 | 1636.039 | 3163.199 |
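The $\kappa_{\rho^*}$ column of the table follows directly from the retention condition $\kappa_{\rho^*} = 1 - \frac{1}{1+\rho} = \frac{\rho}{1+\rho}$; a quick check reproduces the tabulated levels:

```python
rhos = [0.1, 0.5, 1.0, 5.0, 10.0, 20.0, 50.0]
kappas = [rho / (1.0 + rho) for rho in rhos]   # equivalently 1 - 1/(1 + rho)
# e.g. rho = 0.1 gives kappa = 0.090909..., matching the first row
```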
Discussion of the validity of the LogNormal assumption for the aggregated costs
- To justify the LogNormal assumption for the aggregated cost $S= \sum_{i=1}^N X_i$, at least the first two moments of $S$ must exist in order to specify a LogNormal distribution, and then to calculate $VaR_S(\kappa)$, $ES_S(\kappa)$, and $\pi(\kappa)$.
- The LogNormal distribution belongs to the Subexponential class of heavy-tailed distributions. The problem is to guarantee that $S$ is LogNormal without any other kind of information. We do not know of any conditions on the severities $X_i$ which guarantee that $S$ is LogNormal.
- Based on these considerations and the results presented, we consider that the LogNormal assumption for the aggregated costs is not justified. As a consequence, several studies which propose valuations of medical services in surgery or other HCI based on the Black-Scholes methodology, used for instance in Chicaíza & Cabedo (2009) and Girón & Herrera (2015), lack a more rigorous justification.
Conclusions
- The spliced distributions provide a better fit to the severities than the other fitted distributions.
- Although the goodness-of-fit tests may yield higher p-values for some distributions, this does not mean that those distributions offer a better fit. Therefore, it is always recommended to accompany these measures with a supporting graphic.
- For the calculation of the reinsurance premium, we assume the expected value principle, which depends on the reinsurer's relative safety loading factor $\rho$, but different principles can be used to calculate the optimal retention point.
- It is not correct to make distributional assumptions without a strict justification for making them.
Albrecher, H., Beirlant, J., & Teugels, J. (2017). Reinsurance: Actuarial and statistical aspects. John Wiley & Sons.
Bowers, N., Gerber, H., Hickman, J., Jones, D., & Nesbitt, C. (1997). Actuarial mathematics. Society of Actuaries, 2.
Böcker, K., & Klüppelberg, C. (2005). Operational VaR: A closed-form approximation. Risk, Operational Risk, 18(12), 90–93.
Böcker, K., & Sprittulla, J. (2006). Operational VaR: Meaningful means. Risk, Brief Communication, 96–98.
Chicaíza, L., & Cabedo, D. (2009). Using the Black-Scholes method for estimating high-cost illness insurance premiums in Colombia. Innovar, 19(33), 119–130.
Girón, L., & Herrera, F. (2015). Cálculo y comparación de la prima de un reaseguro de salud usando el modelo de opciones de Black-Scholes y el modelo actuarial. Revista de Economía Del Rosario. Julio-Diciembre, 18(2), 211–248. https://doi.org/10.12804/rev.econ.rosario.18.02.2015.03
Kaas, R., Goovaerts, M., Dhaene, J., & Denuit, M. (2008). Modern actuarial risk theory: Using R (Vol. 2). Springer Science & Business Media.
Rigby, B., Stasinopoulos, M., Heller, G., & Voudouris, V. (2014). The distribution toolbox of gamlss. The GAMLSS Team.