# Optimal reinsurance with expectile under the Vajda condition

## Yanhong Chen

#### Need to know

• The author studies optimal reinsurance designs by minimizing the risk-adjusted value of the liability of an insurer, where the risk margin is determined by expectile
• The author considers a class of ceded loss functions that are subject to the Vajda condition
• The premium principles are assumed to satisfy the properties of law invariance, risk loading and convex order preservation
• The author shows that the optimal ceded loss functions take the form of three interconnected line segments

#### Abstract

In this paper, we revisit optimal reinsurance problems by minimizing the adjusted value of the liability of an insurer, which encompasses a risk margin. The risk margin is determined by expectile. To reflect the spirit of reinsurance of protecting the insurer, we assume that both the insurer’s retained loss and the proportion paid by a reinsurer are increasing in indemnity. The premium principles are assumed to satisfy the following three properties: law invariance, risk loading and convex order preservation. We show that the optimal ceded loss functions take the form of three interconnected line segments. Further, if the reinsurance premium is translation invariant or follows the expected value principle, simplified forms of the optimal reinsurance treaties are obtained. Finally, when the reinsurance premium is assumed to be the expected value principle or Wang’s premium principle, the explicit expression for the optimal reinsurance treaty is also given.

## 1 Introduction

Reinsurance is a contract between an insurance company (insurer) and a reinsurance company (reinsurer); it is a popular risk management strategy for an insurer. In a reinsurance treaty, there exist three basic elements: a set of admissible ceded loss functions, a reinsurance premium principle and an optimal criterion. By changing one or more of the above three aspects, a number of optimal reinsurance treaties have been studied.

Regarding the classes of ceded loss functions considered in optimal reinsurance problems, if one does not take into account constraints on the risk of the insurer or the reinsurer, two typical classes are employed in the literature. The first is the class of ceded loss functions such that both the retained and ceded loss functions are increasing, which was suggested by Huberman et al (1983) to preclude moral hazard. For example, Chi (2012) studied optimal reinsurance treaties by minimizing the risk-adjusted value of the liability of an insurer, where value-at-risk (VaR) and conditional value-at-risk (CVaR) were used to determine the risk margin, and proved that layer reinsurance is often optimal. For more studies on optimal reinsurance treaties with VaR or CVaR, we refer to Liu et al (2016), Chi et al (2017), Cai et al (2017) and the references therein. Cai and Weng (2016) studied optimal reinsurance treaties by minimizing the risk-adjusted value of the liability of an insurer, where the risk margin is determined by expectile, and proved that a two-layer reinsurance treaty is optimal. Optimal reinsurance treaties under distortion risk measures have also been studied in the literature; see, for example, Cui et al (2013), Assa (2015), Zhuang et al (2016), Cheung and Lo (2017), Lo (2017a,b) and Jiang et al (2018). However, in reality, when facing an increasing total claim amount, the insurer will want the reinsurer to pay not only an increasing amount of the liability but also an increasing proportion of the total claim amount. This motivates the second class: ceded loss functions such that the retained loss functions are increasing and the ceded loss functions satisfy the Vajda condition, which was proposed by Vajda (1962) to reflect the spirit of reinsurance of protecting the insurer.
A ceded loss function is said to satisfy the Vajda condition if the proportion it cedes is increasing; such a ceded loss function is usually called a Vajda function. Hesselager (1990, 1993) studied some optimal reinsurance problems under the Vajda condition. Chi and Weng (2013) studied optimal reinsurance treaties under VaR or CVaR over the class of ceded loss functions such that the retained loss functions were increasing and the ceded loss functions satisfied the Vajda condition. They proved that the optimal ceded loss functions take the form of three interconnected line segments. Chen and Hu (2020) studied optimal reinsurance from the perspectives of both insurers and reinsurers under the VaR risk measure and the Vajda condition.

Recently, the expectile, first proposed by Newey and Powell (1987), has attracted interest in statistics and finance. Ziegel (2014) (see also Bellini et al 2014) pointed out that VaR is elicitable but not coherent, that CVaR is coherent but not elicitable, and that the expectile is the only elicitable, law-invariant and coherent risk measure. According to Gneiting (2011), elicitability is a natural property that a risk measure should satisfy in order to score the estimation of risks. From this point of view, the expectile can be considered a natural candidate beyond VaR and CVaR. For instance, Bellini and Di Bernardino (2017) studied risk management with expectiles. Emmer et al (2015) compared these risk measures from a practical point of view. Maume-Deschamps et al (2017) and Herrmann et al (2018) proposed two kinds of multivariate expectiles. Hu and Zheng (2020) studied the application of expectiles in a capital asset pricing model. Cai and Weng (2016) studied the application of expectiles in reinsurance, namely optimal reinsurance with expectiles over the class of ceded loss functions such that the retained and ceded loss functions are increasing. Among a class of reinsurance premium principles satisfying the law invariance, risk loading and convex order preservation properties, they proved that a two-layer reinsurance treaty is optimal. To account for the spirit of reinsurance of protecting the insurer, as suggested by Vajda (1962), an important question must be answered: what happens to the optimal reinsurance treaty with the expectile if this class of ceded loss functions is replaced by the smaller class in which the retained loss functions are increasing and the ceded loss functions are Vajda functions? It will turn out that the optimal reinsurance treaty obtained in the case of Vajda functions is quite different from that of Cai and Weng (2016).

In this paper, motivated by Chi and Weng (2013) and Cai and Weng (2016), we study optimal reinsurance treaties that minimize the risk-adjusted value of an insurer’s liability over the class of ceded loss functions such that the retained loss functions are increasing and the ceded loss functions satisfy the Vajda condition. By employing the expectile as a risk measure to calculate the capital at risk included in the risk-adjusted value of an insurer’s liability, we show that the optimal ceded loss functions always take the form of three interconnected line segments for a wide class of reinsurance premium principles that satisfy three properties: law invariance, risk loading and convex order preservation. Moreover, further simplified forms of optimal reinsurance treaties are obtained for the expected value principle or the reinsurance premium principles that satisfy the translation invariant property.

The rest of the paper is organized as follows. In Section 2, we will introduce some preliminaries, including a definition of the expectile and the formulation of our optimal reinsurance treaties. In Section 3, we will study the optimal reinsurance design that minimizes the expectile of the insurer over the class of ceded loss functions such that the retained loss functions are increasing and the ceded loss functions satisfy the Vajda condition. The optimal reinsurance treaties will be provided, but their proofs will be postponed to Section 5. Some examples will be presented in Section 4 to demonstrate how the parameters in the optimal ceded loss functions can be determined explicitly.

## 2 Preliminaries

In this section, we will briefly introduce some preliminaries. The claim faced by an insurer is characterized by a nonnegative random variable $X$ on some probability space $(\varOmega,\mathcal{F},P)$ with finite expectation $E[X]$. We denote by $F(x):=P(X\leq x)$, $x\in\mathbb{R}$, the distribution function of $X$, and by $\bar{F}(x):=1-F(x)$ the survival function of $X$. We denote by $\mathcal{X}$ the class of nonnegative random variables with finite expectation.

In a classical reinsurance design, the insurer would cede part of the loss $X$, say $f(X)$, to a reinsurer and retain part of the loss $X$, say $R_{f}(X):=X-f(X)$. In a reinsurance contract, the functions $f(x)\colon[0,\infty)\rightarrow[0,\infty)$ and $R_{f}(x)\colon[0,\infty)\rightarrow[0,\infty)$ are referred to as the ceded loss function and the retained loss function, respectively. When an insurer cedes part of the loss to a reinsurer, the insurer needs to pay a reinsurance premium $\varPi(f(X))$ to the reinsurer according to a premium principle $\varPi$. In the presence of reinsurance, the liability of an insurer, denoted by $T_{f}(X)$, is

 $\displaystyle T_{f}(X):=R_{f}(X)+\varPi(f(X)).$

According to Risk Margin Working Group (2009), the risk-adjusted value of the insurer’s liability is calculated as

 $\displaystyle L_{f}(X):=E[T_{f}(X)]+\delta\rho(T_{f}(X)-E[T_{f}(X)]),$ (2.1)

where $\delta>0$ is a constant and $\rho$ is a risk measure used to quantify the gap between the total risk exposure $T_{f}(X)$ and its actuarial reserve $E[T_{f}(X)]$. For more works on evaluating the insurer’s liability using the risk margin, we refer to Swiss Federal Office of Private Insurance (2006), Risk Margin Working Group (2009), Wüthrich et al (2010), Chi (2012), Asimit et al (2013), Chi and Weng (2013), Cai and Weng (2016), Cheung and Lo (2017), Chi et al (2017) and the references therein.

To preclude the moral hazard and, further, to reflect the spirit of reinsurance of protecting the insurer, the set of admissible ceded loss functions is assumed to be the class of ceded loss functions such that the insurer’s retained loss and the proportion paid by a reinsurer are increasing in indemnity, which was first suggested by Vajda (1962). Namely, we search for the optimal reinsurance treaties among the following set of ceded loss functions:

 $\displaystyle\mathfrak{C}:=\{0\leq f(x)\leq x\colon\text{both }R_{f}(x)\text{ and }f(x)/x\text{ are increasing in }x\}.$ (2.2)

Note that, as proved by Chi and Weng (2013), $\mathfrak{C}\subsetneq\mathfrak{F}$, where

 $\displaystyle\mathfrak{F}:=\{0\leq f(x)\leq x\colon\text{both }R_{f}(x)\text{ and }f(x)\text{ are increasing in }x\}$ (2.3)

was suggested by Huberman et al (1983).
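The strict inclusion $\mathfrak{C}\subsetneq\mathfrak{F}$ can be illustrated numerically with a limited cover $f(x)=\min\{x,c\}$: both $f$ and $R_{f}$ are increasing, but the ceded proportion $f(x)/x$ decreases beyond the cap. A minimal sketch in Python (the cap level $c$ and the grid are arbitrary illustrative choices):

```python
import numpy as np

c = 5.0                               # illustrative cap level
x = np.linspace(0.1, 20.0, 200)       # grid of positive loss levels
f = np.minimum(x, c)                  # limited cover f(x) = min(x, c)
retained = x - f                      # R_f(x) = (x - c)_+

# Membership in F: both f and R_f are increasing.
in_F = bool(np.all(np.diff(f) >= 0) and np.all(np.diff(retained) >= 0))

# Vajda condition for C: f(x)/x must be increasing; here it drops after the cap.
vajda = bool(np.all(np.diff(f / x) >= -1e-12))

print(in_F, vajda)  # True False: f lies in F but not in C
```

Any treaty in $\mathfrak{C}$ must keep ceding a growing share of large claims, which a capped cover does not.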

Meanwhile, we consider a wide class of reinsurance premium principles satisfying the following three properties.

1. (i)

Law invariance: $\varPi(Y)$ depends only on the cumulative distribution function $F_{Y}(y)$ of $Y$.

2. (ii)

Risk loading: $\varPi(Y)\geq E[Y]$ for any $Y\in\mathcal{X}$.

3. (iii)

Convex order preservation: $\varPi(Y)\leq\varPi(Z)$ for any $Y,Z\in\mathcal{X}$ with $Y\leq_{\mathrm{cx}}Z$, ie,

 $\displaystyle E[Y]=E[Z]\quad\text{and}\quad E[(Y-d)_{+}]\leq E[(Z-d)_{+}]\quad\text{for all }d\in\mathbb{R},$

provided that the expectations exist, where $x_{+}:=\max\{x,0\}$ for $x\in\mathbb{R}$.
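The two conditions defining the convex order can be checked empirically via stop-loss transforms. A small sketch assuming NumPy; it compares a sample of $X$ with the degenerate variable $Y\equiv E[X]$, which satisfies $Y\leq_{\mathrm{cx}}X$ by Jensen’s inequality:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=200_000)   # sample of X
y = np.full_like(x, x.mean())                  # Y degenerate at E[X]

stop_loss = lambda z, d: np.maximum(z - d, 0.0).mean()  # E[(Z - d)_+]

# Equal means, and E[(Y-d)_+] <= E[(X-d)_+] for every level d: Y <=_cx X.
equal_means = abs(x.mean() - y.mean()) < 1e-12
dominated = all(stop_loss(y, d) <= stop_loss(x, d) + 1e-9
                for d in np.linspace(-1.0, 6.0, 50))
print(equal_means, dominated)
```

A convex-order-preserving premium principle therefore charges no more for the constant $E[X]$ than for $X$ itself.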

In this paper, we denote by $\mathcal{P}$ the set of all the premium principles of $\varPi$ satisfying the above three properties. As pointed out by Chi (2012) and Chi and Weng (2013), the proposed class of premium principles includes all the premium principles listed in Young (2004) except the Esscher principle.

In the present paper, we use the expectile to determine the risk margin of (2.1). It is defined as follows.

###### Definition 2.1.

The expectile of a random variable $Z$ with $E[Z^{2}]<\infty$ at a given confidence level $\alpha$ is defined as

 $\displaystyle\mathcal{E}(Z;\alpha)=E[Z]+\beta E[(Z-\mathcal{E}(Z;\alpha))_{+}],\quad Z\in\mathcal{X},$ (2.4)

where $\beta=(2\alpha-1)/(1-\alpha)$ and $0<\alpha<1$.

Note that $\beta$ is strictly increasing in $\alpha\in(0,1)$ and $-1<\beta<\infty$. In insurance and finance, the economic meaning of a risk measure of a loss random variable $Z$ is usually considered as a premium or regulatory capital, which is often required to be larger than the expected loss $E[Z]$. Hence, in insurance and finance, we are usually interested in $\alpha>0.5$, since $\mathcal{E}(Z;\alpha)\geq E[Z]$ for $\alpha>0.5$, $\mathcal{E}(Z;\alpha)\leq E[Z]$ for $\alpha<0.5$, and $\mathcal{E}(Z;\alpha)=E[Z]$ for $\alpha=0.5$. Bellini et al (2014) proved that $\mathcal{E}(Z;\alpha)$ is a coherent risk measure when $0.5\leq\alpha<1$. Ziegel (2014) proved that the expectile is the only elicitable, law-invariant and coherent risk measure. Hence, from the point of view that diversification can reduce risk, and in order to score the estimation of risks, the expectile can be considered a natural candidate beyond the VaR and CVaR. For more studies on properties of the expectile, we refer to Emmer et al (2015), Bellini and Di Bernardino (2017), Maume-Deschamps et al (2017), Herrmann et al (2018) and the references therein.
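Numerically, the expectile can be obtained from the fixed-point equation (2.4) by a root search, since the fixed-point gap is strictly decreasing in the candidate value for any $\beta>-1$. A minimal sketch assuming NumPy (the exponential sample is an arbitrary illustrative choice):

```python
import numpy as np

def expectile(z, alpha):
    """Solve e = E[Z] + beta * E[(Z - e)_+], beta = (2*alpha - 1)/(1 - alpha),
    by bisection; the gap below is strictly decreasing in e for beta > -1."""
    z = np.asarray(z, dtype=float)
    beta = (2.0 * alpha - 1.0) / (1.0 - alpha)
    gap = lambda e: z.mean() + beta * np.maximum(z - e, 0.0).mean() - e
    lo, hi = z.min() - 1.0, z.max() + 1.0   # gap > 0 at lo, gap < 0 at hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gap(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=100_000)
# alpha = 0.5 gives beta = 0, so the expectile reduces to the mean;
# alpha = 0.9 gives beta = 8 and a value above the mean, as noted above.
print(expectile(sample, 0.5), expectile(sample, 0.9))
```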

Using the expectile to calculate the risk margin (2.1), the liability of the insurer, denoted by $L_{f}(X)$, is

 $\displaystyle L_{f}(X):=E[T_{f}(X)]+\delta\mathcal{E}(T_{f}(X)-E[T_{f}(X)];% \alpha),$

and the optimal reinsurance treaty problem can be formulated as follows:

 $\displaystyle L_{f^{*}}(X)=\min_{f\in\mathfrak{C}}L_{f}(X),$ (2.5)

where $f^{*}$ is the resulting optimal ceded loss function in $\mathfrak{C}$ defined in (2.2).

## 3 Optimal reinsurance design

In this section, we shall investigate the solution to the optimal reinsurance problem (2.5).

We begin with some notation. For $0\leq\theta_{1}\leq\theta_{2}\leq 1$ and $0\leq d\leq\infty$, we define

 $\displaystyle J_{\theta_{1},\theta_{2},d}(x):=\min\{\max\{\theta_{1}x,(x-d)_{+% }\},\theta_{2}x\},\quad x\geq 0.$ (3.1)

Note that $J_{\theta_{1},\theta_{2},d}\in\mathfrak{C}$. Moreover, $J_{\theta_{1},\theta_{2},d}$ can be rewritten as

 $\displaystyle J_{\theta_{1},\theta_{2},d}(x)=\begin{cases}\theta_{1}x,&0\leq x\leq\dfrac{d}{1-\theta_{1}},\\ x-d,&\dfrac{d}{1-\theta_{1}}<x\leq\dfrac{d}{1-\theta_{2}},\\ \theta_{2}x,&x>\dfrac{d}{1-\theta_{2}}.\end{cases}$ (3.2)

Graphs of the functions $J_{\theta_{1},\theta_{2},d}(x)$ with $\theta_{1}=0.1$, $\theta_{2}=0.4$, $d=9$ and with $\theta_{1}=0$, $\theta_{2}=0.4$, $d=9$ are given in Figure 1 and Figure 2, respectively, by the solid curve.

Note that $J_{0,1,d}(x)=(x-d)_{+}$ and $J_{0,\theta,0}(x)=J_{\theta,\theta,d}(x)=\theta x$; hence, stop-loss reinsurance and quota share reinsurance are special cases of $J_{\theta_{1},\theta_{2},d}(x)$. In general, $J_{\theta_{1},\theta_{2},d}(x)$ can be considered a combination of quota share reinsurance and stop-loss reinsurance.
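A direct implementation of (3.1) makes these special cases easy to verify. A sketch assuming NumPy (the parameter values are arbitrary illustrative choices):

```python
import numpy as np

def J(x, theta1, theta2, d):
    """Ceded loss J_{theta1,theta2,d}(x) = min{max{theta1*x, (x-d)_+}, theta2*x}, as in (3.1)."""
    x = np.asarray(x, dtype=float)
    return np.minimum(np.maximum(theta1 * x, np.maximum(x - d, 0.0)), theta2 * x)

x = np.linspace(0.0, 30.0, 301)
# Special cases noted in the text: stop-loss and quota share.
stop_loss_ok = np.allclose(J(x, 0.0, 1.0, 9.0), np.maximum(x - 9.0, 0.0))  # J_{0,1,d}
quota_ok = np.allclose(J(x, 0.4, 0.4, 9.0), 0.4 * x)                       # J_{theta,theta,d}
print(stop_loss_ok, quota_ok)
```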

Denote by

 $\mathcal{E}_{f}^{T}:=\mathcal{E}(T_{f}(X);\alpha)\quad\text{and}\quad\mathcal{E}_{f}^{R}:=\mathcal{E}(R_{f}(X);\alpha)$

the expectile of the insurer’s total risk exposure $T_{f}(X)$ and the retained loss $R_{f}(X)$, respectively. For any $f\in\mathfrak{C}$, define

 $\displaystyle x_{f}:=\inf\{x\geq 0\colon R_{f}(x)\geq\mathcal{E}^{R}_{f}\}.$ (3.3)

Then, $\mathcal{E}^{R}_{f}\leq x_{f}<\infty$, which was proved by Lemma 3.2 of Cai and Weng (2016) since $\mathfrak{C}\subsetneq\mathfrak{F}$. Based on $x_{f}$, we further define

 $\displaystyle h_{f}(x):=\begin{cases}f(x),&0\leq x\leq x_{f},\\ x-\mathcal{E}^{R}_{f},&x_{f}<x\leq\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{2}},\\ \theta_{2}x,&x>\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{2}},\end{cases}$ (3.4)

with a constant $\theta_{2}\in[f(x_{f})/x_{f},1]$ satisfying

 $\displaystyle E[((1-\theta_{2})X-\mathcal{E}^{R}_{f})_{+}]=E[(R_{f}(X)-\mathcal{E}^{R}_{f})_{+}].$ (3.5)

The following lemma shows that $h_{f}$, defined by (3.4), is well defined, and collects some of its properties; it can be considered analogous to Lemma 3.2 in Cai and Weng (2016).

###### Lemma 3.1.

For any $f\in\mathfrak{C}$, define $x_{f}$ and $h_{f}$ as in (3.3) and (3.4), respectively. Then, we have the following:

1. (1)

$\mathcal{E}^{R}_{f}\leq x_{f}<\infty$;

2. (2)

$R_{f}(x)\geq\mathcal{E}^{R}_{f}$ if and only if $x\geq x_{f}$, and, moreover, $R_{f}(x_{f})=\mathcal{E}^{R}_{f}$;

3. (3)

$h_{f}$ is well defined in the sense that there exists a constant $\theta_{2}\in[f(x_{f})/x_{f},1]$ to satisfy (3.5);

4. (4)

$R_{h_{f}}(x)\geq\mathcal{E}^{R}_{f}$ if and only if $x\geq x_{f}$;

5. (5)

$E[R_{f}(X)]=E[R_{h_{f}}(X)]$;

6. (6)

$E[(R_{h_{f}}(X)-\mathcal{E}^{R}_{f})_{+}]=E[(R_{f}(X)-\mathcal{E}^{R}_{f})_{+}]$;

7. (7)

$\mathcal{E}^{R}_{h_{f}}=\mathcal{E}^{R}_{f}$; and

8. (8)

$\varPi(h_{f}(X))\leq\varPi(f(X))$ for any $\varPi\in\mathcal{P}$.

###### Lemma 3.2.

For any $f\in\mathfrak{C}$, there exists a ceded loss function $J_{\theta_{1},\theta_{2},d}\in\mathfrak{C}_{0}$ satisfying

 $\displaystyle E[J_{\theta_{1},\theta_{2},d}(X)]$ $\displaystyle=E[f(X)],$ (3.6) $\displaystyle E[(R_{J_{\theta_{1},\theta_{2},d}}(X)-\mathcal{E}^{R}_{f})_{+}]$ $\displaystyle=E[(R_{f}(X)-\mathcal{E}^{R}_{f})_{+}],$ (3.7) $\displaystyle\mathcal{E}^{R}_{J_{\theta_{1},\theta_{2},d}}$ $\displaystyle=\mathcal{E}^{R}_{f},$ (3.8)

and

 $\displaystyle\varPi(J_{\theta_{1},\theta_{2},d}(X))\leq\varPi(f(X))\quad\text{% for any }\varPi\in\mathcal{P},$ (3.9)

where

 $\displaystyle\mathfrak{C}_{0}$ $\displaystyle:=\{J_{\theta_{1},\theta_{2},d}(x)\colon 0\leq\theta_{1}\leq\theta_{2}\leq 1,d=\mathcal{E}^{R}_{J_{\theta_{1},\theta_{2},d}},$ $\displaystyle\qquad E[((1-\theta_{2})X-\mathcal{E}^{R}_{J_{\theta_{1},\theta_{2},d}})_{+}]=E[(R_{J_{\theta_{1},\theta_{2},d}}(X)-\mathcal{E}^{R}_{J_{\theta_{1},\theta_{2},d}})_{+}]\}.$ (3.10)

Now we are in a position to state the main result of the present paper, which provides the optimal reinsurance treaty for problem (2.5).

###### Theorem 3.3.

For any premium principle $\varPi\in\mathcal{P}$,

 $\displaystyle\min_{f\in\mathfrak{C}}L_{f}(X)=\min_{J_{\theta_{1},\theta_{2},d}\in\mathfrak{C}_{0}}L_{J_{\theta_{1},\theta_{2},d}}(X).$ (3.11)
###### Remark 3.4.

Comparing Theorem 3.3 of this paper with Theorem 3.4 of Chi and Weng (2013), the optimal reinsurance forms under $\mathfrak{C}$ with the expectile and with the CVaR both take the form of three interconnected line segments, ie, they have the same shape, but the conditions satisfied by the parameters are different. More precisely, Theorem 3.3 shows that the optimal ceded loss functions are $J_{\theta_{1},\theta_{2},d}(x)$ with $\theta_{1}$, $\theta_{2}$ and $d$ satisfying certain conditions, ie, they take the form of three interconnected line segments. In other words, the optimal reinsurance form for (2.5) is a combination of quota share reinsurance and stop-loss reinsurance. We believe that this kind of combination could be practical for insurance companies.

Theorem 3.3 shows that the infinite-dimensional optimal reinsurance model (2.5) can be reduced to the optimization problem over three variables in (3.11). The following theorem shows that, with an additional mild condition on the premium principle $\varPi$, the dimension of this problem can be further reduced.

###### Theorem 3.5.

If the premium principle $\varPi\in\mathcal{P}$ is translation invariant, ie,

 $\displaystyle\varPi(Y+c)=\varPi(Y)+c\quad\text{for any constant }c\geq 0\text{ and }Y\in\mathcal{X},$

or if $\varPi$ is the expected value principle, ie,

 $\displaystyle\varPi(Y)=(1+\eta)E[Y]\quad\text{for any }Y\in\mathcal{X},$

where $\eta>0$ is a loading factor, then

 $\displaystyle\min_{f\in\mathfrak{C}}L_{f}(X)=\min_{J_{0,\theta,d}\in\mathfrak{C}_{1}}L_{J_{0,\theta,d}}(X),$ (3.12)

where

 $\displaystyle\mathfrak{C}_{1}$ $\displaystyle:=\{J_{0,\theta,d}(x)\colon 0\leq\theta\leq 1,d=\mathcal{E}^{R}_{J_{0,\theta,d}},$ $\displaystyle\qquad E[((1-\theta)X-\mathcal{E}^{R}_{J_{0,\theta,d}})_{+}]=E[(R_{J_{0,\theta,d}}(X)-\mathcal{E}^{R}_{J_{0,\theta,d}})_{+}]\}.$ (3.13)
###### Remark 3.6.

Comparing Theorem 3.5 of this paper with Corollary 3.5 of Chi and Weng (2013), when the premium principle $\varPi\in\mathcal{P}$ is translation invariant or is the expected value principle, the optimal ceded loss functions under $\mathfrak{C}$ with the expectile and with the CVaR are both of the form $J_{0,\theta,d}(x)$; the difference lies in the conditions satisfied by $\theta$ and $d$.

## 4 Examples

In this section, we will use the results obtained in the previous section to derive explicit expressions for optimal reinsurance treaties, assuming that the reinsurance premium is calculated by the expected value premium principle or Wang’s premium principle.

Under the conditions in Theorem 3.5, the optimal ceded loss function is of the following form:

 $\displaystyle J_{0,\theta,d}(x)=\min\{(x-d)_{+},\theta x\}=\begin{cases}0,&0\leq x\leq d,\\ x-d,&d<x\leq\dfrac{d}{1-\theta},\\ \theta x,&x>\dfrac{d}{1-\theta},\end{cases}$ (4.1)

with $\smash{d=\mathcal{E}^{R}_{J_{0,\theta,d}}}$. In this section, we will show that parameters $\theta$ and $d$ in the optimal ceded loss function $\smash{J_{0,\theta,d}}$ can be determined explicitly under the expected value premium principle or Wang’s premium principle.

Denote

 $K(d,\theta):=E[R_{J_{0,\theta,d}}(X)]+\beta E[(R_{J_{0,\theta,d}}(X)-d)_{+}],\quad 0\leq\theta\leq 1,\,0\leq d\leq\infty;$ (4.2)

then, for any $J_{0,\theta,d}\in\mathfrak{C}_{1}$ with $d=\mathcal{E}^{R}_{J_{0,\theta,d}}$, we have $K(d,\theta)=d$.

Moreover, denote

 $J(d,\theta):=L_{J_{0,\theta,d}}(X)=E[R_{J_{0,\theta,d}}(X)]+\varPi(J_{0,\theta,d}(X))+\gamma E[(R_{J_{0,\theta,d}}(X)-d)_{+}],$ (4.3)

where $\gamma=\delta\beta$. Then, under the conditions in Theorem 3.5, a ceded loss function $J_{0,\theta,d}$ is an optimal solution to (2.5) if and only if the pair $(d,\theta)$ solves the following problem:

 $\displaystyle\min_{(d,\theta)\in A}J(d,\theta),$ (4.4)

where $A:=\{(d,\theta)\colon 0\leq d\leq\infty,0\leq\theta\leq 1\text{ and }K(d,\theta)=d\}$.

From (4.1), it is easy to obtain

 $\displaystyle E[J_{0,\theta,d}(X)]$ $\displaystyle=\int_{d}^{\infty}\bar{F}(t)\mathrm{d}t-(1-\theta)\int_{d/(1-\theta)}^{\infty}\bar{F}(t)\mathrm{d}t,$ (4.5) $\displaystyle E[R_{J_{0,\theta,d}}(X)]$ $\displaystyle=E[X]-\int_{d}^{\infty}\bar{F}(t)\mathrm{d}t+(1-\theta)\int_{d/(1-\theta)}^{\infty}\bar{F}(t)\mathrm{d}t,$ (4.6) $\displaystyle E[(R_{J_{0,\theta,d}}(X)-d)_{+}]$ $\displaystyle=E[((1-\theta)X-d)_{+}]=(1-\theta)\int_{d/(1-\theta)}^{\infty}\bar{F}(t)\mathrm{d}t,$ (4.7)

where $\bar{F}(x)=1-F(x)$ is the survival function of $X$. Thus,

 $\displaystyle K(d,\theta)=E[X]-\int_{d}^{\infty}\bar{F}(t)\mathrm{d}t+(1-\theta)(1+\beta)\int_{d/(1-\theta)}^{\infty}\bar{F}(t)\mathrm{d}t.$

Given $d\geq 0$, $K(d,\theta)$ is strictly decreasing in $\theta$, so the equation $K(d,\theta)=d$ has at most one solution $\theta\in[0,1]$. This means that the implicit function $\theta=\theta(d)$, defined by the constraints $K(d,\theta)=d$ and $0\leq\theta\leq 1$, has domain

 $\displaystyle B:=\{d\geq 0\colon\exists\text{ unique }\theta\in[0,1]\text{ such that }K(d,\theta)=d\}.$

The next lemma will show that the set $B$ is nonempty, which implies that the function $\theta=\theta(\cdot)\colon B\rightarrow[0,1]$ is well defined, and it will give some properties of the function $\theta(d)$. Let

 $\displaystyle H(d)$ $\displaystyle:=E[X]+\beta E[(X-d)_{+}],$ $\displaystyle d\in\mathbb{R},$ $\displaystyle G(d)$ $\displaystyle:=d-H(d),$ $\displaystyle d\in\mathbb{R}.$

Lemma 4.1 of Cai and Weng (2016) has shown that $G(d)=0$ admits a unique solution $\tilde{d}>0$ over $(0,\infty)$.

###### Lemma 4.1.

Suppose that the nonnegative random variable $X$ with $E[X]>0$ has a continuous distribution function $F$ on $(0,\infty)$ with $F(x)>0$ for any $x>0$. Then, the following results hold.

1. (1)

The function $\theta(\cdot)$ defined on $B$ is a strictly decreasing function with domain $B=[0,\tilde{d}]$.

2. (2)

$\theta(0)=1$, $\theta(\tilde{d})=0$, and the derivative

 $\displaystyle\theta^{\prime}(d)=-\frac{F(d)+(1+\beta)\bar{F}\bigg{(}\dfrac{d}{1-\theta}\bigg{)}}{(1+\beta)\bigg{(}\dfrac{d}{1-\theta}\bar{F}\bigg{(}\dfrac{d}{1-\theta}\bigg{)}+\displaystyle\int_{d/(1-\theta)}^{\infty}\bar{F}(t)\mathrm{d}t\bigg{)}},\quad d\in(0,\tilde{d}).$ (4.8)

In terms of the function $\theta(d)$, the optimal ceded loss function is given by $J_{0,\theta(d^{*}),d^{*}}$, with $d^{*}$ solved via

 $\displaystyle\arg\min_{d\in B}J(d,\theta(d)).$ (4.9)

In the rest of this section, we will solve (4.9) for the expected value premium principle and Wang’s premium principle.

###### Example 4.2 (Expected value premium principle).

For the expected value premium principle, by (4.5), we have

 $\displaystyle\varPi(J_{0,\theta(d),d})$ $\displaystyle=(1+\eta)E[J_{0,\theta,d}(X)]$ $\displaystyle=(1+\eta)\int_{d}^{\infty}\bar{F}(t)\mathrm{d}t+(1+\eta)(\theta-1)\int_{d/(1-\theta)}^{\infty}\bar{F}(t)\mathrm{d}t$

for a loading factor $\eta>0$. Thus, from (4.3), (4.5), (4.6) and (4.7), the objective function $J(d,\theta(d))$ in (4.9) is reduced to

 $\displaystyle J(d,\theta(d))=E[X]+\eta\int_{d}^{\infty}\bar{F}(t)\mathrm{d}t+(\gamma-\eta)(1-\theta)\int_{d/(1-\theta)}^{\infty}\bar{F}(t)\mathrm{d}t.$

Hence, by (4.8), we obtain

 $\displaystyle\frac{\mathrm{d}J(d,\theta(d))}{\mathrm{d}d}=\frac{\eta\beta+\gamma}{1+\beta}\bigg{[}F(d)-\frac{(1+\beta)\eta}{\eta\beta+\gamma}\bigg{]}.$ (4.10)

Let $p_{0}:=(1+\beta)\eta/(\eta\beta+\gamma)$.

1. (1)

Assume $p_{0}\geq 1$; then it follows from (4.10) that $\mathrm{d}J(d,\theta(d))/\mathrm{d}d\leq 0$ for any $0<d<\tilde{d}$. Thus, the objective function $J(d,\theta(d))$ in (4.9) is decreasing in $d$, and its minimum is attained at $d^{*}=\tilde{d}$. From Lemma 4.1, we know that $\theta(\tilde{d})=0$. Hence, the optimal ceded loss function is $J_{0,0,\tilde{d}}\equiv 0$, ie, the optimal strategy for the insurer is not to seek any reinsurance in this case.

2. (2)

Assume $p_{0}<1$; let $y_{0}:=\inf\{x\geq 0\colon F(x)\geq p_{0}\}$. Then, $J(d,\theta(d))$ is decreasing on $[0,y_{0}]$ and increasing on $[y_{0},\tilde{d}]$. Thus, $J(d,\theta(d))$ attains its minimum at $d^{*}=y_{0}\wedge\tilde{d}$ and the optimal ceded loss function is $J_{0,\theta(d^{*}),d^{*}}$. That is,

1. (i)

if $y_{0}\geq\tilde{d}$ (note that $\theta(\tilde{d})=0$), then the optimal ceded loss function is $J_{0,0,\tilde{d}}\equiv 0$, ie, the optimal strategy for the insurer is not to seek any reinsurance in this case;

2. (ii)

if $y_{0}<\tilde{d}$, then the optimal ceded loss function is $J_{0,\theta(y_{0}),y_{0}}$.
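The case analysis above can be traced numerically. A sketch under illustrative assumptions: $X\sim\text{Exponential}(1)$, so $\int_{d}^{\infty}\bar{F}(t)\,\mathrm{d}t=\mathrm{e}^{-d}$, with arbitrary parameter choices $\eta=0.2$, $\alpha=0.9$ (hence $\beta=8$) and $\delta=0.1$ (hence $\gamma=\delta\beta=0.8$); all roots are found by bisection:

```python
import numpy as np

eta, beta, gamma = 0.2, 8.0, 0.8   # illustrative: eta=0.2, alpha=0.9, delta=0.1

def bisect(fn, lo, hi, n=200):
    """Root of a continuous fn that is positive at lo and negative at hi."""
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if fn(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

p0 = (1 + beta) * eta / (eta * beta + gamma)   # threshold from (4.10)
y0 = -np.log(1 - p0)                           # y0 = F^{-1}(p0) for Exp(1)

# d_tilde solves G(d) = d - 1 - beta*exp(-d) = 0, since H(d) = 1 + beta*exp(-d).
d_tilde = bisect(lambda d: -(d - 1.0 - beta * np.exp(-d)), 0.0, 1.0 + beta)

# Here y0 < d_tilde, so case (ii) applies and theta(y0) solves
# 1 - exp(-y0) + (1 - t)(1 + beta)*exp(-y0/(1 - t)) = y0 (decreasing in t).
res = lambda t: 1 - np.exp(-y0) + (1 - t) * (1 + beta) * np.exp(-y0 / (1 - t)) - y0
theta_star = bisect(res, 0.0, 1.0 - 1e-12)

print(p0, y0, d_tilde, theta_star)  # optimal treaty: J_{0, theta_star, y0}
```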

###### Remark 4.3.

Under the expected value premium principle, Cai and Weng (2016, Example 4.1) show that a solution to the expectile-based optimal reinsurance model (2.5) with the admissible set $\mathfrak{F}$ is

 $\displaystyle f^{*}(x)=\begin{cases}0&\text{if }p_{0}\geq 1,\\ 0&\text{if }p_{0}<1\text{ and }y_{0}>\tilde{d},\\ (x-y_{0})_{+}-(x-m(y_{0}))_{+}&\text{if }p_{0}<1\text{ and }y_{0}\leq\tilde{d},\end{cases}$

where $m(y_{0})$ is the unique solution of

 $E[X]-\int_{y_{0}}^{\infty}\bar{F}(t)\mathrm{d}t+(1+\beta)\int_{m}^{\infty}\bar{F}(t)\mathrm{d}t=y_{0}.$

In contrast, an optimal ceded loss function among the set $\mathfrak{C}$ according to the above example is

 $\displaystyle f^{*}(x)=\begin{cases}0&\text{if }p_{0}\geq 1,\\ 0&\text{if }p_{0}<1\text{ and }y_{0}>\tilde{d},\\ J_{0,\theta(y_{0}),y_{0}}(x)&\text{if }p_{0}<1\text{ and }y_{0}\leq\tilde{d},\end{cases}$

where $\theta(y_{0})$ is the unique solution of

 $E[X]-\int_{y_{0}}^{\infty}\bar{F}(t)\mathrm{d}t+(1-\theta)(1+\beta)\int_{y_{0}/(1-\theta)}^{\infty}\bar{F}(t)\mathrm{d}t=y_{0}.$

Obviously, $m(y_{0})>y_{0}/(1-\theta(y_{0}))$.

###### Example 4.4 (Wang’s premium principle).

Assume that the reinsurance premium is calculated by Wang’s premium principle

 $\displaystyle\varPi(X)=\int_{0}^{\infty}g(\bar{F}_{X}(t))\mathrm{d}t,$

where the distortion function $g\colon[0,1]\rightarrow[0,1]$ is increasing and concave with $g(0)=0$ and $g(1)=1$. It is not hard to check that $g$ satisfies $g(x)\geq x$ for any $x\in[0,1]$. Then, the optimal ceded loss function $f^{*}$ to (2.5) is given by

 $\displaystyle f^{*}(x)=J_{0,\theta(d^{*}),d^{*}}(x),$ (4.11)

where

 $\displaystyle d^{*}=\arg\min_{d\in[0,\tilde{d}]}\bigg{\{}$ $\displaystyle{-}\int_{d}^{\infty}\bar{F}(t)\mathrm{d}t+(1-\theta(d))(1+\gamma)\int_{d/(1-\theta(d))}^{\infty}\bar{F}(t)\mathrm{d}t$ $\displaystyle\qquad+\int_{d}^{\infty}g(\bar{F}(t))\mathrm{d}t+(\theta(d)-1)\int_{d/(1-\theta(d))}^{\infty}g(\bar{F}(t))\mathrm{d}t\bigg{\}};$ (4.12)

$\theta(d)$ is a function of $d$ satisfying

 $\displaystyle E[X]-\int_{d}^{\infty}\bar{F}(t)\mathrm{d}t+(1-\theta(d))(1+\beta)\int_{d/(1-\theta(d))}^{\infty}\bar{F}(t)\mathrm{d}t=d;$

and $\tilde{d}$ is a solution of

 $\displaystyle E[X]+\beta E[(X-d)_{+}]=d.$
###### Proof.

Note that Wang’s premium principle is translation invariant, so Theorem 3.5 applies. Moreover,

 $\displaystyle\varPi(J_{0,\theta,d}(X))$ $\displaystyle=\int_{0}^{\infty}g(P[J_{0,\theta,d}(X)>t])\mathrm{d}t$ $\displaystyle=\int_{d}^{\infty}g(\bar{F}(t))\mathrm{d}t+(\theta-1)\int_{d/(1-\theta)}^{\infty}g(\bar{F}(t))\mathrm{d}t.$

Hence, from (4.3), (4.5), (4.6) and (4.7), the objective function $J(d,\theta(d))$ in (4.9) is reduced to

 $\displaystyle J(d,\theta(d))$ $\displaystyle=E[X]-\int_{d}^{\infty}\bar{F}(t)\mathrm{d}t+(1-\theta(d))(1+\gamma)\int_{d/(1-\theta(d))}^{\infty}\bar{F}(t)\mathrm{d}t$ $\displaystyle\qquad+\int_{d}^{\infty}g(\bar{F}(t))\mathrm{d}t+(\theta(d)-1)\int_{d/(1-\theta(d))}^{\infty}g(\bar{F}(t))\mathrm{d}t.$

This means that the minimization problem (2.5) can be reduced to

 $\displaystyle\min_{d\in[0,\tilde{d}]}J(d,\theta(d)).$

It is easy to see that $d^{*}$ defined in (4.12) is a solution to the above minimization problem. ∎
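Wang’s premium of $J_{0,\theta,d}(X)$ in the proof above can be evaluated numerically. A sketch under illustrative assumptions: $X\sim\text{Exponential}(1)$ and the proportional hazard distortion $g(u)=\sqrt{u}$ (increasing and concave, with $g(0)=0$ and $g(1)=1$); $\theta$ and $d$ are arbitrary choices, and the quadrature is checked against the closed form $\int_{a}^{\infty}\sqrt{\mathrm{e}^{-t}}\,\mathrm{d}t=2\mathrm{e}^{-a/2}$:

```python
import numpy as np

g = np.sqrt                       # proportional hazard distortion g(u) = u^{1/2}
Fbar = lambda t: np.exp(-t)       # survival function of Exponential(1)
theta, d = 0.4, 1.5               # arbitrary treaty parameters

# Pi(J_{0,theta,d}(X)) = int_d^inf g(Fbar(t)) dt + (theta-1) int_{d/(1-theta)}^inf g(Fbar(t)) dt
dt = 1e-4
t = np.arange(0.0, 60.0, dt)      # the integrand is negligible beyond t = 60
tail = lambda a: np.sum(np.where(t >= a, g(Fbar(t)), 0.0)) * dt  # Riemann sum
premium = tail(d) + (theta - 1) * tail(d / (1 - theta))

# Closed form for this g and Fbar: int_a^inf exp(-t/2) dt = 2*exp(-a/2).
closed = 2 * np.exp(-d / 2) + (theta - 1) * 2 * np.exp(-d / (2 * (1 - theta)))
print(premium, closed)
```

Since $g(u)\geq u$, the resulting premium exceeds $E[J_{0,\theta,d}(X)]$, consistent with the risk loading property (ii).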

## 5 Proofs of main results

In this section, we will provide all the proofs of the results stated in Section 3 and the proof of Lemma 4.1.

###### Proof of Lemma 3.1.
• (1) and (2)

These follow directly from Cai and Weng (2016, Lemma 3.2(a,b)) and the fact that $\mathfrak{C}\subsetneq\mathfrak{F}$.

• (3)

Let

 $\displaystyle D(\theta_{2}):=E[((1-\theta_{2})X-\mathcal{E}^{R}_{f})_{+}]-E[(R_{f}(X)-\mathcal{E}^{R}_{f})_{+}],\quad 0\leq\theta_{2}\leq 1.$ (5.1)

Then,

 $\displaystyle D(1)$ $\displaystyle=-E[(R_{f}(X)-\mathcal{E}^{R}_{f})_{+}]\leq 0.$ (5.2) $\displaystyle D\bigg{(}\frac{f(x_{f})}{x_{f}}\bigg{)}$ $\displaystyle=E\bigg{[}\bigg{(}\frac{x_{f}-f(x_{f})}{x_{f}}X-\mathcal{E}^{R}_{f}\bigg{)}_{+}\bigg{]}-E[(R_{f}(X)-\mathcal{E}^{R}_{f})_{+}]$ $\displaystyle=E\bigg{[}\bigg{(}\frac{\mathcal{E}^{R}_{f}}{x_{f}}X-\mathcal{E}^{R}_{f}\bigg{)}_{+}\bigg{]}-E[(R_{f}(X)-\mathcal{E}^{R}_{f})_{+}]$ $\displaystyle=E\bigg{[}\frac{\mathcal{E}^{R}_{f}}{x_{f}}(X-x_{f})_{+}\bigg{]}-E[(R_{f}(X)-\mathcal{E}^{R}_{f})\mathbf{1}_{\{X\geq x_{f}\}}]$ $\displaystyle=E\bigg{[}\frac{\mathcal{E}^{R}_{f}}{x_{f}}(X-x_{f})\mathbf{1}_{\{X\geq x_{f}\}}\bigg{]}-E[(R_{f}(X)-\mathcal{E}^{R}_{f})\mathbf{1}_{\{X\geq x_{f}\}}]$ $\displaystyle=E\bigg{[}\bigg{(}\frac{\mathcal{E}^{R}_{f}}{x_{f}}X-R_{f}(X)\bigg{)}\mathbf{1}_{\{X\geq x_{f}\}}\bigg{]}$ $\displaystyle=E\bigg{[}\frac{x_{f}f(X)-f(x_{f})X}{x_{f}}\mathbf{1}_{\{X\geq x_{f}\}}\bigg{]}$ $\displaystyle\geq 0,$ (5.3)

where the second, third and fifth equalities follow from (2) in Lemma 3.1; the last inequality holds since $f(x)/x$ is increasing in $x$, so that $f(x)/x\geq f(x_{f})/x_{f}$ for any $x\geq x_{f}$. Then the continuity of $D(\theta_{2})$ in $\theta_{2}$, together with (5.2) and (5.3), implies that there exists a constant $\theta_{2}\in[f(x_{f})/x_{f},1]$ such that

 $\displaystyle E[((1-\theta_{2})X-\mathcal{E}^{R}_{f})_{+}]=E[(R_{f}(X)-% \mathcal{E}^{R}_{f})_{+}].$
• (4)

Note that $R_{f}(x)\geq\mathcal{E}^{R}_{f}$ if and only if $x\geq x_{f}$. Thus,

 $\displaystyle R_{h_{f}}(x)=\begin{cases}R_{f}(x)<\mathcal{E}^{R}_{f},&0\leq x<x_{f},\\ \mathcal{E}^{R}_{f},&x_{f}\leq x<\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{2}},\\ (1-\theta_{2})x\geq\mathcal{E}^{R}_{f},&x\geq\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{2}},\end{cases}$ (5.4)

which implies the desired result.

• (5)

Using (5.4), we have

 $\displaystyle E[R_{f}(X)-R_{h_{f}}(X)]$ $\displaystyle\qquad\qquad=E[(R_{f}(X)-R_{h_{f}}(X))\mathbf{1}_{\{X\geq x_{f}\}}]$ $\displaystyle\qquad\qquad=E[(R_{f}(X)-\mathcal{E}^{R}_{f})\mathbf{1}_{\{x_{f}\leq X\leq\mathcal{E}^{R}_{f}/(1-\theta_{2})\}}]+E[(R_{f}(X)-(1-\theta_{2})X)\mathbf{1}_{\{X>\mathcal{E}^{R}_{f}/(1-\theta_{2})\}}]$ $\displaystyle\qquad\qquad=E[R_{f}(X)\mathbf{1}_{\{X\geq x_{f}\}}]-E[\mathcal{E}^{R}_{f}\mathbf{1}_{\{x_{f}\leq X\leq\mathcal{E}^{R}_{f}/(1-\theta_{2})\}}]-E[(1-\theta_{2})X\mathbf{1}_{\{X>\mathcal{E}^{R}_{f}/(1-\theta_{2})\}}]$ $\displaystyle\qquad\qquad=E[(R_{f}(X)-\mathcal{E}^{R}_{f})\mathbf{1}_{\{X\geq x_{f}\}}]+E[\mathcal{E}^{R}_{f}\mathbf{1}_{\{X\geq\mathcal{E}^{R}_{f}/(1-\theta_{2})\}}]-E[(1-\theta_{2})X\mathbf{1}_{\{X>\mathcal{E}^{R}_{f}/(1-\theta_{2})\}}]$ $\displaystyle\qquad\qquad=E[(R_{f}(X)-\mathcal{E}^{R}_{f})_{+}]-E[((1-\theta_{2})X-\mathcal{E}^{R}_{f})\mathbf{1}_{\{X>\mathcal{E}^{R}_{f}/(1-\theta_{2})\}}]$ $\displaystyle\qquad\qquad=E[(R_{f}(X)-\mathcal{E}^{R}_{f})_{+}]-E[((1-\theta_{2})X-\mathcal{E}^{R}_{f})_{+}]$ $\displaystyle\qquad\qquad=0.$
• (6)

Using (5.4), we have

 $\displaystyle E[(R_{h_{f}}(X)-\mathcal{E}^{R}_{f})_{+}]$ $\displaystyle\qquad\qquad=E[(R_{f}(X)-\mathcal{E}^{R}_{f})_{+}\mathbf{1}_{\{0\leq X<x_{f}\}}]+E[((1-\theta_{2})X-\mathcal{E}^{R}_{f})_{+}\mathbf{1}_{\{X>\mathcal{E}^{R}_{f}/(1-\theta_{2})\}}]$ $\displaystyle\qquad\qquad=E[((1-\theta_{2})X-\mathcal{E}^{R}_{f})_{+}\mathbf{1}_{\{X>\mathcal{E}^{R}_{f}/(1-\theta_{2})\}}]$ $\displaystyle\qquad\qquad=E[((1-\theta_{2})X-\mathcal{E}^{R}_{f})_{+}]$ $\displaystyle\qquad\qquad=E[(R_{f}(X)-\mathcal{E}^{R}_{f})_{+}],$

where the second equation comes from (2) in Lemma 3.1, and the last equation follows from (3.5).

• (7)

Using (5) and (6) in Lemma 3.1, we obtain that

 $\displaystyle E[R_{h_{f}}(X)]+\beta E[(R_{h_{f}}(X)-\mathcal{E}^{R}_{f})_{+}]$ $\displaystyle=E[R_{f}(X)]+\beta E[(R_{f}(X)-\mathcal{E}^{R}_{f})_{+}]$ $\displaystyle=\mathcal{E}^{R}_{f}.$ (5.5)

If $\mathcal{E}^{R}_{h_{f}}\neq\mathcal{E}^{R}_{f}$, we may assume that $\mathcal{E}^{R}_{h_{f}}<\mathcal{E}^{R}_{f}$, and then

 $\displaystyle\mathcal{E}^{R}_{h_{f}}$ $\displaystyle=E[R_{h_{f}}(X)]+\beta E[(R_{h_{f}}(X)-\mathcal{E}^{R}_{h_{f}})_{+}]$ $\displaystyle\geq E[R_{h_{f}}(X)]+\beta E[(R_{h_{f}}(X)-\mathcal{E}^{R}_{f})_{+}]$ $\displaystyle=\mathcal{E}^{R}_{f},$

which contradicts the assumption of $\mathcal{E}^{R}_{h_{f}}<\mathcal{E}^{R}_{f}$. Hence, $\mathcal{E}^{R}_{h_{f}}=\mathcal{E}^{R}_{f}$.

• (8)

We will prove this property in three separate cases: $\theta_{2}=1$, $\theta_{2}=f(x_{f})/x_{f}$ and $f(x_{f})/x_{f}<\theta_{2}<1$.

Case 1. Say $\theta_{2}=1$. Then condition (3.5) reduces to

 $E[(R_{f}(X)-\mathcal{E}^{R}_{f})_{+}]=0,$

which, together with the fact that $R_{f}(x)\geq\mathcal{E}^{R}_{f}$ if and only if $x\geq x_{f}$, yields that $R_{f}(X)=\mathcal{E}^{R}_{f}$ P-a.s. on the event $\{X\geq x_{f}\}$. Thus, $f(X)=X-\mathcal{E}^{R}_{f}=h_{f}(X)$ P-a.s. on the event $\{X\geq x_{f}\}$. Further, $f(X)=h_{f}(X)$ P-a.s., which, together with the law invariance property of $\varPi$, yields that $\varPi(f(X))=\varPi(h_{f}(X))$.

Case 2. Say $\theta_{2}=f(x_{f})/x_{f}$. In this case, using (2) in Lemma 3.1 together with condition (3.5), we have

 $\displaystyle 0$ $\displaystyle=E\bigg[\bigg(\bigg(1-\frac{f(x_{f})}{x_{f}}\bigg)X-\mathcal{E}^{R}_{f}\bigg)_{+}\bigg]-E[(R_{f}(X)-\mathcal{E}^{R}_{f})_{+}]$ $\displaystyle=E\bigg[\frac{\mathcal{E}^{R}_{f}}{x_{f}}(X-x_{f})_{+}\bigg]-E[(R_{f}(X)-\mathcal{E}^{R}_{f})\mathbf{1}_{\{X\geq x_{f}\}}]$ $\displaystyle=E\bigg[\frac{\mathcal{E}^{R}_{f}}{x_{f}}(X-x_{f})\mathbf{1}_{\{X\geq x_{f}\}}\bigg]-E[(R_{f}(X)-\mathcal{E}^{R}_{f})\mathbf{1}_{\{X\geq x_{f}\}}]$ $\displaystyle=E\bigg[\bigg(\frac{\mathcal{E}^{R}_{f}}{x_{f}}X-R_{f}(X)\bigg)\mathbf{1}_{\{X\geq x_{f}\}}\bigg]$ $\displaystyle=E\bigg[\bigg(f(X)-\frac{f(x_{f})}{x_{f}}X\bigg)\mathbf{1}_{\{X\geq x_{f}\}}\bigg].$ (5.6)

Since $f(x)/x$ is increasing in $x$, $f(X)-(f(x_{f})/x_{f})X\geq 0$ P-a.s. on the event $\{X\geq x_{f}\}$, which, together with (5.6), yields that $f(X)=h_{f}(X)$ P-a.s. Hence, $\varPi(f(X))=\varPi(h_{f}(X))$.

Case 3. Say $f(x_{f})/x_{f}<\theta_{2}<1$. Let

 $\displaystyle x_{1}:=\inf\{x\geq x_{f}\colon f(x)\geq\theta_{2}x\}.$ (5.7)

We will show that $\mathcal{E}^{R}_{f}/(1-\theta_{2})\leq x_{1}<\infty$, and then that $f(x)$ up-crosses $h_{f}(x)$ at $x=x_{1}$. (A function $f_{1}(x)$ is said to up-cross a function $f_{2}(x)$ if there exists an $x_{0}\in\mathbb{R}$ such that $f_{1}(x)\leq f_{2}(x)$ for $x\leq x_{0}$ and $f_{1}(x)\geq f_{2}(x)$ for $x>x_{0}$.)

First, we show that $\mathcal{E}^{R}_{f}/(1-\theta_{2})\leq x_{1}<\infty$. In fact, if $x_{1}=\infty$, then by the definition of $x_{1}$ we have $f(x)<\theta_{2}x$ for any $x\geq x_{f}$. Thus, $(1-\theta_{2})x<R_{f}(x)$ for any $x\geq x_{f}$. Hence,

 $E[(R_{f}(X)-\mathcal{E}^{R}_{f})_{+}]>E[((1-\theta_{2})X-\mathcal{E}^{R}_{f})_{+}],$

which contradicts the condition (3.5). Hence, $x_{1}<\infty$.

Say $x_{1}<\mathcal{E}^{R}_{f}/(1-\theta_{2})$. From the continuity of $f$, we have $f(x_{1})=\theta_{2}x_{1}$. Further, $R_{f}(x_{1})=x_{1}-f(x_{1})=x_{1}-\theta_{2}x_{1}<\mathcal{E}^{R}_{f}$, which, together with (2) in Lemma 3.1, yields that $x_{1}<x_{f}$; this contradicts the definition of $x_{1}$. Hence, $x_{1}\geq\mathcal{E}^{R}_{f}/(1-\theta_{2})$.

Next, we show that $f(x)$ up-crosses $h_{f}(x)$ at $x=x_{1}$:

 $\displaystyle h_{f}(x)-f(x)=\begin{cases}0,&0\leq x<x_{f},\\ x-\mathcal{E}^{R}_{f}-f(x)\geq 0,&x_{f}\leq x\leq\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{2}},\\ \theta_{2}x-f(x)\geq 0,&\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{2}}<x<x_{1},\\ \theta_{2}x-f(x)\leq 0,&x\geq x_{1},\end{cases}$ (5.8)

where (5.8) follows from the increasing property of $f(x)/x$ in $x$. In fact, by the continuity of $f$, we have $f(x_{1})/x_{1}=\theta_{2}$, so $f(x)/x\geq f(x_{1})/x_{1}=\theta_{2}$ for any $x\geq x_{1}$.

Hence, $f(x)$ up-crosses $h_{f}(x)$ at $x=x_{1}$. From (5) in Lemma 3.1, we have $E[f(X)]=E[h_{f}(X)]$. Therefore, using Ohlin's lemma (for a random variable $Y$ and two increasing functions $f_{1}(y)$ and $f_{2}(y)$ with $E[f_{1}(Y)]=E[f_{2}(Y)]$, Ohlin's lemma states that if $f_{1}(y)$ up-crosses $f_{2}(y)$, then $f_{2}(Y)\leq_{\mathrm{cx}}f_{1}(Y)$; see Ohlin (1969) for more details), we obtain $h_{f}(X)\leq_{\mathrm{cx}}f(X)$, which, together with the convex order preservation of $\varPi$, yields that $\varPi(h_{f}(X))\leq\varPi(f(X))$. The proof of Lemma 3.1 is completed. ∎
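Ohlin's lemma is the workhorse of all of the comparisons above, and its conclusion can be checked empirically via the stop-loss characterization of convex order (equal means plus $E[(Y_{1}-t)_{+}]\leq E[(Y_{2}-t)_{+}]$ for every $t$). The sketch below uses a toy pair of increasing functions of our own choosing: a stop-loss shape $f_{1}$ that up-crosses a mean-matched quota-share shape $f_{2}$:

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.exponential(size=100_000)

f1 = lambda y: np.maximum(y - 1.0, 0.0)        # stop-loss shape
k = f1(Y).mean() / Y.mean()                    # slope chosen so that E[f2(Y)] = E[f1(Y)]
f2 = lambda y: k * y                           # quota-share shape; f1 up-crosses f2

A, B = f1(Y), f2(Y)
assert abs(A.mean() - B.mean()) < 1e-12        # equal means, by construction
# Ohlin: f2(Y) <=_cx f1(Y), ie E[(f2(Y)-t)_+] <= E[(f1(Y)-t)_+] for every t
for t in np.linspace(0.0, 5.0, 51):
    assert np.maximum(B - t, 0).mean() <= np.maximum(A - t, 0).mean() + 1e-12
```

Because the mean matching and the single crossing hold for the empirical distribution itself, the stop-loss inequalities hold exactly on the sample, not merely approximately.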

###### Proof of Lemma 3.2.

For any $f\in\mathfrak{C}$, let

 $\displaystyle D_{1}(\theta_{1}):=E[h_{f}(X)-J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}(X)],\quad 0\leq\theta_{1}\leq\frac{f(x_{f})}{x_{f}},$

where $h_{f}$ is defined in (3.4) and $\theta_{2}\in[f(x_{f})/x_{f},1]$ satisfies (3.5) (Lemma 3.1 has established the existence of such a $\theta_{2}$, ie, the well-definedness of $h_{f}$). Note that $0\leq\theta_{1}\leq f(x_{f})/x_{f}$ implies that $\mathcal{E}^{R}_{f}/(1-\theta_{1})\leq x_{f}$, giving us

 $\displaystyle D_{1}(\theta_{1})=E[(f(X)-J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}(X))\mathbf{1}_{\{0\leq X\leq x_{f}\}}],\quad 0\leq\theta_{1}\leq\frac{f(x_{f})}{x_{f}}.$

Further,

 $\displaystyle D_{1}(0)$ $\displaystyle=E[f(X)\mathbf{1}_{\{0\leq X<\mathcal{E}^{R}_{f}\}}]+E[(f(X)-X+\mathcal{E}^{R}_{f})\mathbf{1}_{\{\mathcal{E}^{R}_{f}\leq X\leq x_{f}\}}]$ $\displaystyle=E[(\mathcal{E}^{R}_{f}-R_{f}(X))\mathbf{1}_{\{\mathcal{E}^{R}_{f}\leq X\leq x_{f}\}}]+E[f(X)\mathbf{1}_{\{X<\mathcal{E}^{R}_{f}\}}]$ $\displaystyle\geq 0,$ (5.9)

where the last inequality comes from the fact that $R_{f}(x)\geq\mathcal{E}^{R}_{f}$ if and only if $x\geq x_{f}$. Next,

 $\displaystyle D_{1}\bigg(\frac{f(x_{f})}{x_{f}}\bigg)=E\bigg[\bigg(f(X)-\frac{f(x_{f})}{x_{f}}X\bigg)\mathbf{1}_{\{0\leq X\leq x_{f}\}}\bigg].$

When $x\leq x_{f}$, using the increasing property of $f(x)/x$ in $x$, we have $f(x)-(f(x_{f})/x_{f})x\leq 0$. Hence,

 $\displaystyle D_{1}\bigg{(}\frac{f(x_{f})}{x_{f}}\bigg{)}\leq 0.$ (5.10)

Hence, from (5.9) and (5.10), and using the continuity of $D_{1}(\theta_{1})$ in $\theta_{1}$, there exists a constant $\theta_{1}\in[0,f(x_{f})/x_{f}]$ such that

 $\displaystyle E[J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}(X)]=E[h_{f}(X)].$ (5.11)

From (5) in Lemma 3.1, it follows that $E[h_{f}(X)]=E[f(X)]$. Hence, we further have $E[J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}(X)]=E[f(X)]$, ie, we obtain (3.6).

Next, we will show that the above function $J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}$ further satisfies (3.7)–(3.9).

To show (3.7), note that

 $\displaystyle R_{J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}}(x)=\begin{cases}(1-\theta_{1})x\leq\mathcal{E}^{R}_{f},&0\leq x<\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{1}},\\ \mathcal{E}^{R}_{f},&\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{1}}\leq x<\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{2}},\\ (1-\theta_{2})x=R_{h_{f}}(x),&x\geq\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{2}}.\end{cases}$

Hence,

 $\displaystyle E[(R_{J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}}(X)-\mathcal{E}^{R}_{f})_{+}]=E[(R_{h_{f}}(X)-\mathcal{E}^{R}_{f})_{+}],$

which, together with (6) in Lemma 3.1, implies (3.7).

Then, (3.8) follows from (3.6), (3.7) and the definition of the expectile. Using (3.6)–(3.8), it is easy to check that the above function $J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}$ belongs to $\mathfrak{C}_{0}$.

To show (3.9), note that

 $\displaystyle h_{f}(x)$ $\displaystyle=\begin{cases}f(x),&0\leq x<x_{f},\\ x-\mathcal{E}^{R}_{f},&x_{f}\leq x\leq\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{2}},\\ \theta_{2}x,&x>\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{2}},\end{cases}$ $\displaystyle J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}(x)$ $\displaystyle=\begin{cases}\theta_{1}x,&0\leq x<\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{1}},\\ x-\mathcal{E}^{R}_{f},&\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{1}}\leq x\leq\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{2}},\\ \theta_{2}x,&x>\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{2}}.\end{cases}$

Since $\theta_{1}\in[0,f(x_{f})/x_{f}]$, we have $x_{f}\geq\mathcal{E}^{R}_{f}/(1-\theta_{1})$. Hence,

 $\displaystyle J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}(x)-h_{f}(x)=\begin{cases}\theta_{1}x-f(x),&0\leq x<\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{1}},\\ x-\mathcal{E}^{R}_{f}-f(x)\leq 0,&\dfrac{\mathcal{E}^{R}_{f}}{1-\theta_{1}}\leq x<x_{f},\\ 0,&x\geq x_{f}.\end{cases}$

We will show that $h_{f}(x)$ up-crosses $J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}(x)$. Let

 $\displaystyle x_{2}:=\inf\bigg\{0\leq x\leq\frac{\mathcal{E}^{R}_{f}}{1-\theta_{1}}\colon f(x)\geq\theta_{1}x\bigg\},$

where $\inf\emptyset=+\infty$.

Case 1. Say $x_{2}<+\infty$. In this case, obviously, $0\leq x_{2}\leq\mathcal{E}^{R}_{f}/(1-\theta_{1})$, and $f(x_{2})=\theta_{1}x_{2}$.

1. (1)

When $0\leq x\leq x_{2}$, we have, from the definition of $x_{2}$, $f(x)\leq\theta_{1}x$, ie, $J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}(x)-h_{f}(x)=\theta_{1}x-f(x)\geq 0$ for any $0\leq x\leq x_{2}$.

2. (2)

When $x_{2}<x\leq\mathcal{E}^{R}_{f}/(1-\theta_{1})$, $J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}(x)-h_{f}(x)=\theta_{1}x-f(x)=(f(x_{2})/x_{2})x-f(x)\leq 0$, since $f(x)/x$ is increasing in $x$.

Hence, $h_{f}(x)$ up-crosses $J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}(x)$ at $x=x_{2}$.

Case 2. Say $x_{2}=\infty$. In this case, for any $0\leq x\leq\mathcal{E}^{R}_{f}/(1-\theta_{1})$, it follows that $\theta_{1}x>f(x)$.

1. (1)

When $0\leq x\leq\mathcal{E}^{R}_{f}/(1-\theta_{1})$, $J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}(x)-h_{f}(x)=\theta_{1}x-f(x)\geq 0$.

2. (2)

When $x\geq\mathcal{E}^{R}_{f}/(1-\theta_{1})$, obviously, $J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}(x)-h_{f}(x)\leq 0$.

Hence, $h_{f}(x)$ up-crosses $J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}(x)$ at $x=\mathcal{E}^{R}_{f}/(1-\theta_{1})$.

Therefore, from (5.11) and using Ohlin’s lemma, we obtain

 $\displaystyle J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}(X)\leq_{\mathrm{cx}}h_{f}(X),$

which, together with the convex order preservation property of $\varPi$, yields that

 $\displaystyle\varPi(J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}(X))\leq\varPi(h_{f}(X)).$

Further, using $\varPi(h_{f}(X))\leq\varPi(f(X))$, we obtain

 $\displaystyle\varPi(J_{\theta_{1},\theta_{2},\mathcal{E}^{R}_{f}}(X))\leq\varPi(f(X)).$

The proof of Lemma 3.2 is completed. ∎

###### Proof of Theorem 3.3.

On the one hand, obviously, $\mathfrak{C}_{0}\subseteq\mathfrak{C}$; thus,

 $\displaystyle\min_{f\in\mathfrak{C}}L_{f}(X)\leq\min_{J_{\theta_{1},\theta_{2},d}\in\mathfrak{C}_{0}}L_{J_{\theta_{1},\theta_{2},d}}(X).$ (5.12)

On the other hand, for any $f\in\mathfrak{C}$, we define $J_{\theta_{1},\theta_{2},d}\in\mathfrak{C}_{0}$, as given by Lemma 3.2, satisfying (3.6)–(3.9). We will further show that

 $\displaystyle\mathcal{E}^{T}_{J_{\theta_{1},\theta_{2},d}}-E[T_{J_{\theta_{1},\theta_{2},d}}(X)]=\mathcal{E}^{T}_{f}-E[T_{f}(X)].$ (5.13)

For $l=f$ or $l=J_{\theta_{1},\theta_{2},d}$, define $\bar{T}_{l}(X):=T_{l}(X)-E[T_{l}(X)]$ and $\mathcal{E}^{\bar{T}}_{l}:=\mathcal{E}(\bar{T}_{l}(X);\alpha)$. Then, in order to show (5.13), we only need to show

 $\displaystyle\mathcal{E}^{\bar{T}}_{J_{\theta_{1},\theta_{2},d}}=\mathcal{E}^{\bar{T}}_{f}.$

Note that $\bar{T}_{l}(X)=R_{l}(X)-E[R_{l}(X)]$, $E[\bar{T}_{l}(X)]=0$ and $\mathcal{E}^{\bar{T}}_{l}=\mathcal{E}^{R}_{l}-E[R_{l}(X)]$ for $l=f$ or $l=J_{\theta_{1},\theta_{2},d}$. Hence,

 $\displaystyle\bar{T}_{J_{\theta_{1},\theta_{2},d}}(X)-\mathcal{E}^{\bar{T}}_{f}$ $\displaystyle=R_{J_{\theta_{1},\theta_{2},d}}(X)-\mathcal{E}^{R}_{f},$ $\displaystyle\bar{T}_{f}(X)-\mathcal{E}^{\bar{T}}_{f}$ $\displaystyle=R_{f}(X)-\mathcal{E}^{R}_{f}.$ (5.14)

Further, we have

 $\displaystyle E[\bar{T}_{J_{\theta_{1},\theta_{2},d}}(X)]+\beta E[(\bar{T}_{J_{\theta_{1},\theta_{2},d}}(X)-\mathcal{E}^{\bar{T}}_{f})_{+}]$ $\displaystyle\qquad\qquad=\beta E[(\bar{T}_{J_{\theta_{1},\theta_{2},d}}(X)-\mathcal{E}^{\bar{T}}_{f})_{+}]$ $\displaystyle\qquad\qquad=\beta E[(R_{J_{\theta_{1},\theta_{2},d}}(X)-\mathcal{E}^{R}_{f})_{+}]$ $\displaystyle\qquad\qquad=\beta E[(R_{f}(X)-\mathcal{E}^{R}_{f})_{+}]$ $\displaystyle\qquad\qquad=E[\bar{T}_{f}(X)]+\beta E[(\bar{T}_{f}(X)-\mathcal{E}^{\bar{T}}_{f})_{+}]$ $\displaystyle\qquad\qquad=\mathcal{E}^{\bar{T}}_{f},$ (5.15)

where the second equation and the penultimate equation come from (5.14), the third equation comes from (3.7), and the last equation comes from the definition of the expectile. Hence,

 $\mathcal{E}^{\bar{T}}_{J_{\theta_{1},\theta_{2},d}}=\mathcal{E}^{\bar{T}}_{f},\quad\text{ie, }\mathcal{E}^{T}_{J_{\theta_{1},\theta_{2},d}}-E[T_{J_{\theta_{1},\theta_{2},d}}(X)]=\mathcal{E}^{T}_{f}-E[T_{f}(X)].$

Using (3.6) and (3.9), we have

 $\displaystyle E[T_{f}(X)]$ $\displaystyle=E[R_{f}(X)]+\varPi(f(X))\geq E[R_{J_{\theta_{1},\theta_{2},d}}(X)]+\varPi(J_{\theta_{1},\theta_{2},d}(X))$ $\displaystyle=E[T_{J_{\theta_{1},\theta_{2},d}}(X)].$

Therefore,

 $\displaystyle L_{J_{\theta_{1},\theta_{2},d}}(X)$ $\displaystyle=E[T_{J_{\theta_{1},\theta_{2},d}}(X)]+\delta(\mathcal{E}^{T}_{J_{\theta_{1},\theta_{2},d}}-E[T_{J_{\theta_{1},\theta_{2},d}}(X)])$ $\displaystyle\leq E[T_{f}(X)]+\delta(\mathcal{E}^{T}_{f}-E[T_{f}(X)])$ $\displaystyle=L_{f}(X).$

The proof of Theorem 3.3 is completed. ∎

###### Proof of Theorem 3.5.

For any $J_{\theta_{1},\theta_{2},d}\in\mathfrak{C}_{0}$, define

 $g(x;a):=\mathbf{I}_{(x_{J_{\theta_{1},\theta_{2},d}}-a,x_{J_{\theta_{1},\theta_{2},d}}]}(x)-a\quad\text{for all }x\geq 0,\,x_{J_{\theta_{1},\theta_{2},d}}-\frac{d}{1-\theta_{1}}\leq a\leq x_{J_{\theta_{1},\theta_{2},d}}-d,$

where

 $\displaystyle\mathbf{I}_{(a,b]}(x):=\min\{(x-a)_{+},b-a\},\quad x\geq 0,\,0\leq a\leq b.$
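The layer function $\mathbf{I}_{(a,b]}$ is simply the loss clipped to the layer $(a,b]$; a short sketch (the helper name `layer` is our own) makes this concrete:

```python
import numpy as np

def layer(x, a, b):
    # I_{(a,b]}(x) = min{(x-a)_+, b-a}: the part of the loss x falling in the layer (a, b]
    return np.minimum(np.maximum(x - a, 0.0), b - a)

x = np.linspace(0.0, 10.0, 1001)
a, b = 2.0, 5.0
assert np.allclose(layer(x, a, b), np.clip(x, a, b) - a)   # equivalent clipping form
```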

We first show that there exists a constant

 $\tilde{a}\in[x_{J_{\theta_{1},\theta_{2},d}}-d/(1-\theta_{1}),x_{J_{\theta_{1},\theta_{2},d}}-d]$

such that

 $\displaystyle E[J_{\theta_{1},\theta_{2},d}(X)-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}})]=E[f_{1}(X)-f_{1}(x_{J_{\theta_{1},\theta_{2},d}})],$ (5.16)

where $f_{1}\in\mathfrak{C}$ is given by

 $\displaystyle f_{1}(x)-f_{1}(x_{J_{\theta_{1},\theta_{2},d}}):=\begin{cases}\mathbf{I}_{(x_{J_{\theta_{1},\theta_{2},d}}-\tilde{a},x_{J_{\theta_{1},\theta_{2},d}}]}(x)-\tilde{a},&0\leq x<x_{J_{\theta_{1},\theta_{2},d}},\\ J_{\theta_{1},\theta_{2},d}(x)-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}}),&x\geq x_{J_{\theta_{1},\theta_{2},d}},\end{cases}$

and

 $\displaystyle f_{1}(x_{J_{\theta_{1},\theta_{2},d}}):=\tilde{a}.$

In fact, when $a=x_{J_{\theta_{1},\theta_{2},d}}-d$, we have

 $\displaystyle g(x;x_{J_{\theta_{1},\theta_{2},d}}-d)$ $\displaystyle=\mathbf{I}_{(d,x_{J_{\theta_{1},\theta_{2},d}}]}(x)-x_{J_{\theta_{1},\theta_{2},d}}+d$ $\displaystyle=\begin{cases}d-x_{J_{\theta_{1},\theta_{2},d}},&0\leq x<d,\\ x-x_{J_{\theta_{1},\theta_{2},d}},&d\leq x<x_{J_{\theta_{1},\theta_{2},d}},\\ 0,&x\geq x_{J_{\theta_{1},\theta_{2},d}},\end{cases}$

and

 $\displaystyle J_{\theta_{1},\theta_{2},d}(x)$ $\displaystyle=\begin{cases}\theta_{1}x,&0\leq x<\dfrac{d}{1-\theta_{1}},\\ x-d,&\dfrac{d}{1-\theta_{1}}\leq x\leq x_{J_{\theta_{1},\theta_{2},d}},\end{cases}$ $\displaystyle J_{\theta_{1},\theta_{2},d}(x)-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}})$ $\displaystyle=\begin{cases}\theta_{1}x-x_{J_{\theta_{1},\theta_{2},d}}+d,&0\leq x<\dfrac{d}{1-\theta_{1}},\\ x-x_{J_{\theta_{1},\theta_{2},d}},&\dfrac{d}{1-\theta_{1}}\leq x\leq x_{J_{\theta_{1},\theta_{2},d}}.\end{cases}$

Hence,

 $\displaystyle g(x;x_{J_{\theta_{1},\theta_{2},d}}-d)-(J_{\theta_{1},\theta_{2},d}(x)-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}}))$ $\displaystyle\qquad\qquad=\begin{cases}-\theta_{1}x\leq 0,&0\leq x<d,\\ (1-\theta_{1})x-d\leq 0,&d\leq x<\dfrac{d}{1-\theta_{1}},\\ 0,&\dfrac{d}{1-\theta_{1}}\leq x\leq x_{J_{\theta_{1},\theta_{2},d}},\end{cases}$

which implies that

 $g(x;x_{J_{\theta_{1},\theta_{2},d}}-d)\leq J_{\theta_{1},\theta_{2},d}(x)-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}})\quad\text{for all }0\leq x\leq x_{J_{\theta_{1},\theta_{2},d}}.$

When $a=x_{J_{\theta_{1},\theta_{2},d}}-d/(1-\theta_{1})$, on the other hand, we have

 $\displaystyle g\bigg(x;x_{J_{\theta_{1},\theta_{2},d}}-\frac{d}{1-\theta_{1}}\bigg)$ $\displaystyle=\mathbf{I}_{(d/(1-\theta_{1}),x_{J_{\theta_{1},\theta_{2},d}}]}(x)-x_{J_{\theta_{1},\theta_{2},d}}+\frac{d}{1-\theta_{1}}$ $\displaystyle=\begin{cases}\dfrac{d}{1-\theta_{1}}-x_{J_{\theta_{1},\theta_{2},d}},&0\leq x<\dfrac{d}{1-\theta_{1}},\\ x-x_{J_{\theta_{1},\theta_{2},d}},&\dfrac{d}{1-\theta_{1}}\leq x<x_{J_{\theta_{1},\theta_{2},d}},\\ 0,&x\geq x_{J_{\theta_{1},\theta_{2},d}}.\end{cases}$

Thus,

 $\displaystyle g\bigg(x;x_{J_{\theta_{1},\theta_{2},d}}-\frac{d}{1-\theta_{1}}\bigg)-(J_{\theta_{1},\theta_{2},d}(x)-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}}))$ $\displaystyle\qquad\qquad=\begin{cases}\theta_{1}\bigg(\dfrac{d}{1-\theta_{1}}-x\bigg)\geq 0,&0\leq x<\dfrac{d}{1-\theta_{1}},\\ 0,&\dfrac{d}{1-\theta_{1}}\leq x\leq x_{J_{\theta_{1},\theta_{2},d}},\end{cases}$

ie,

 $g\bigg(x;x_{J_{\theta_{1},\theta_{2},d}}-\frac{d}{1-\theta_{1}}\bigg)\geq J_{\theta_{1},\theta_{2},d}(x)-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}})\quad\text{for all }0\leq x\leq x_{J_{\theta_{1},\theta_{2},d}}.$

Since $E[g(X;a)]$ is continuous and decreasing in $a$, there exists a constant $\tilde{a}\in[x_{J_{\theta_{1},\theta_{2},d}}-d/(1-\theta_{1}),x_{J_{\theta_{1},\theta_{2},d}}-d]$ such that

 $\displaystyle E[g(X;\tilde{a})\mathbf{1}_{\{0\leq X\leq x_{J_{\theta_{1},\theta_{2},d}}\}}]=E[(J_{\theta_{1},\theta_{2},d}(X)-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}}))\mathbf{1}_{\{0\leq X\leq x_{J_{\theta_{1},\theta_{2},d}}\}}],$

which further implies (5.16).

Further, it is easy to verify that $J_{\theta_{1},\theta_{2},d}(x)-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}})$ up-crosses $f_{1}(x)-f_{1}(x_{J_{\theta_{1},\theta_{2},d}})$; hence, using Ohlin's lemma, we obtain

 $\displaystyle f_{1}(X)-f_{1}(x_{J_{\theta_{1},\theta_{2},d}})\leq_{\mathrm{cx}}J_{\theta_{1},\theta_{2},d}(X)-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}}),$

which, together with the convex order preservation property of $\varPi$, yields that

 $\displaystyle\varPi(f_{1}(X)-f_{1}(x_{J_{\theta_{1},\theta_{2},d}}))\leq\varPi(J_{\theta_{1},\theta_{2},d}(X)-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}})).$ (5.17)

Next, we will show

 $\displaystyle\mathcal{E}^{\bar{T}}_{f_{1}}=\mathcal{E}^{\bar{T}}_{J_{\theta_{1},\theta_{2},d}},$ (5.18)

where $\bar{T}_{l}(X)=T_{l}(X)-E[T_{l}(X)]$ for $l=f_{1}$ or $l=J_{\theta_{1},\theta_{2},d}$.

In order to show (5.18), we need only prove that

 $\displaystyle E[(\bar{T}_{f_{1}}(X)-\mathcal{E}^{\bar{T}}_{J_{\theta_{1},\theta_{2},d}})_{+}]=E[(\bar{T}_{J_{\theta_{1},\theta_{2},d}}(X)-\mathcal{E}^{\bar{T}}_{J_{\theta_{1},\theta_{2},d}})_{+}].$

Note that

 $\displaystyle\bar{T}_{f_{1}}(X)-\mathcal{E}^{\bar{T}}_{J_{\theta_{1},\theta_{2},d}}$ $\displaystyle=(R_{f_{1}}(X)-E[R_{f_{1}}(X)])-(\mathcal{E}^{R}_{J_{\theta_{1},\theta_{2},d}}-E[R_{J_{\theta_{1},\theta_{2},d}}(X)])$ $\displaystyle=R_{f_{1}}(X)-(\mathcal{E}^{R}_{J_{\theta_{1},\theta_{2},d}}+E[R_{f_{1}}(X)]-E[R_{J_{\theta_{1},\theta_{2},d}}(X)])$ $\displaystyle=R_{f_{1}}(X)+f_{1}(x_{J_{\theta_{1},\theta_{2},d}})-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}})-\mathcal{E}^{R}_{J_{\theta_{1},\theta_{2},d}},$ (5.19)

where the last equality follows from (5.16).

On the other hand,

 $\displaystyle(R_{f_{1}}(X)+f_{1}(x_{J_{\theta_{1},\theta_{2},d}})-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}})-\mathcal{E}^{R}_{J_{\theta_{1},\theta_{2},d}})_{+}$ $\displaystyle\qquad\qquad=\begin{cases}0,&0\leq X<\dfrac{\mathcal{E}^{R}_{J_{\theta_{1},\theta_{2},d}}}{1-\theta_{2}},\\ (1-\theta_{2})X-\mathcal{E}^{R}_{J_{\theta_{1},\theta_{2},d}},&X\geq\dfrac{\mathcal{E}^{R}_{J_{\theta_{1},\theta_{2},d}}}{1-\theta_{2}}.\end{cases}$ (5.20)

Thus, we have

 $\displaystyle E[\bar{T}_{f_{1}}(X)]+\beta E[(\bar{T}_{f_{1}}(X)-\mathcal{E}^{\bar{T}}_{J_{\theta_{1},\theta_{2},d}})_{+}]$ $\displaystyle\qquad=\beta E[(\bar{T}_{f_{1}}(X)-\mathcal{E}^{\bar{T}}_{J_{\theta_{1},\theta_{2},d}})_{+}]$ $\displaystyle\qquad=\beta E[(R_{f_{1}}(X)+f_{1}(x_{J_{\theta_{1},\theta_{2},d}})-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}})-\mathcal{E}^{R}_{J_{\theta_{1},\theta_{2},d}})_{+}]$ $\displaystyle\qquad=\beta E[((1-\theta_{2})X-\mathcal{E}^{R}_{J_{\theta_{1},\theta_{2},d}})_{+}]$ $\displaystyle\qquad=\beta E[(R_{J_{\theta_{1},\theta_{2},d}}(X)-\mathcal{E}^{R}_{J_{\theta_{1},\theta_{2},d}})_{+}]$ $\displaystyle\qquad=E[\bar{T}_{J_{\theta_{1},\theta_{2},d}}(X)]+\beta E[(\bar{T}_{J_{\theta_{1},\theta_{2},d}}(X)-\mathcal{E}^{\bar{T}}_{J_{\theta_{1},\theta_{2},d}})_{+}],$ (5.21)

where the second equality follows from (5.19), the third equality follows from (5.20) and the penultimate equality follows from the fact that $\theta_{2}$ satisfies

 $\displaystyle E[((1-\theta_{2})X-\mathcal{E}^{R}_{J_{\theta_{1},\theta_{2},d}})_{+}]=E[(R_{J_{\theta_{1},\theta_{2},d}}(X)-\mathcal{E}^{R}_{J_{\theta_{1},\theta_{2},d}})_{+}].$

Hence, $\mathcal{E}^{\bar{T}}_{f_{1}}=\mathcal{E}^{\bar{T}}_{J_{\theta_{1},\theta_{2},d}}$, ie,

 $\displaystyle\mathcal{E}^{T}_{f_{1}}-E[T_{f_{1}}(X)]=\mathcal{E}^{T}_{J_{\theta_{1},\theta_{2},d}}-E[T_{J_{\theta_{1},\theta_{2},d}}(X)].$ (5.22)

We first assume that $\varPi$ is translation invariant. Note that (5.16) is equivalent to

 $\displaystyle E[R_{f_{1}}(X)]=E[R_{J_{\theta_{1},\theta_{2},d}}(X)]+J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}})-f_{1}(x_{J_{\theta_{1},\theta_{2},d}}).$ (5.23)

Hence,

 $\displaystyle E[T_{f_{1}}(X)]$ $\displaystyle=E[R_{f_{1}}(X)]+\varPi(f_{1}(X))$ $\displaystyle=E[R_{J_{\theta_{1},\theta_{2},d}}(X)]+J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}})+\varPi(f_{1}(X)-f_{1}(x_{J_{\theta_{1},\theta_{2},d}}))$ $\displaystyle\leq E[R_{J_{\theta_{1},\theta_{2},d}}(X)]+J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}})+\varPi(J_{\theta_{1},\theta_{2},d}(X)-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}}))$ $\displaystyle=E[R_{J_{\theta_{1},\theta_{2},d}}(X)]+\varPi(J_{\theta_{1},\theta_{2},d}(X))$ $\displaystyle=E[T_{J_{\theta_{1},\theta_{2},d}}(X)],$ (5.24)

where the second equality follows from (5.23) and the translation invariance of $\varPi$, the inequality follows from (5.17), and the penultimate equality again uses the translation invariance of $\varPi$. Therefore,

 $\displaystyle L_{f_{1}}(X)$ $\displaystyle=E[T_{f_{1}}(X)]+\delta(\mathcal{E}^{T}_{f_{1}}-E[T_{f_{1}}(X)])$ $\displaystyle\leq E[T_{J_{\theta_{1},\theta_{2},d}}(X)]+\delta(\mathcal{E}^{T}_{J_{\theta_{1},\theta_{2},d}}-E[T_{J_{\theta_{1},\theta_{2},d}}(X)])=L_{J_{\theta_{1},\theta_{2},d}}(X).$

Next, suppose that $\varPi$ is the expected value principle: ie, for any loss random variable $Z$ with $E[Z]<\infty$, $\varPi(Z)=(1+\eta)E[Z]$ with a loading factor $\eta>0$. Then,

 $\displaystyle\varPi(J_{\theta_{1},\theta_{2},d}(X)-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}}))$ $\displaystyle\qquad\qquad=(1+\eta)E[J_{\theta_{1},\theta_{2},d}(X)-J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}})]$ $\displaystyle\qquad\qquad=\varPi(J_{\theta_{1},\theta_{2},d}(X))-(1+\eta)J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}}),$

and

 $\displaystyle\varPi(f_{1}(X)-f_{1}(x_{J_{\theta_{1},\theta_{2},d}}))$ $\displaystyle=\varPi(f_{1}(X))-(1+\eta)f_{1}(x_{J_{\theta_{1},\theta_{2},d}}).$

Hence, using (5.17), we have

 $\displaystyle\varPi(J_{\theta_{1},\theta_{2},d}(X))\geq\varPi(f_{1}(X))+(1+\eta)(J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}})-f_{1}(x_{J_{\theta_{1},\theta_{2},d}})).$ (5.25)

Further, using (5.23) and (5.25),

 $\displaystyle E[T_{f_{1}}(X)]$ $\displaystyle=E[R_{f_{1}}(X)]+\varPi(f_{1}(X))$ $\displaystyle=E[R_{J_{\theta_{1},\theta_{2},d}}(X)]+J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}})-f_{1}(x_{J_{\theta_{1},\theta_{2},d}})+\varPi(f_{1}(X))$ $\displaystyle\leq E[R_{J_{\theta_{1},\theta_{2},d}}(X)]+\varPi(J_{\theta_{1},\theta_{2},d}(X))-\eta(J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}})-f_{1}(x_{J_{\theta_{1},\theta_{2},d}}))$ $\displaystyle\leq E[R_{J_{\theta_{1},\theta_{2},d}}(X)]+\varPi(J_{\theta_{1},\theta_{2},d}(X))$ $\displaystyle=E[T_{J_{\theta_{1},\theta_{2},d}}(X)],$

where the last inequality follows from the fact that

 $x_{J_{\theta_{1},\theta_{2},d}}-d/(1-\theta_{1})\leq\tilde{a}=f_{1}(x_{J_{\theta_{1},\theta_{2},d}})\leq x_{J_{\theta_{1},\theta_{2},d}}-d=J_{\theta_{1},\theta_{2},d}(x_{J_{\theta_{1},\theta_{2},d}}).$

Hence,

 $\displaystyle L_{f_{1}}(X)$ $\displaystyle=E[T_{f_{1}}(X)]+\delta(\mathcal{E}^{T}_{f_{1}}-E[T_{f_{1}}(X)])$ $\displaystyle\leq E[T_{J_{\theta_{1},\theta_{2},d}}(X)]+\delta(\mathcal{E}^{T}_{J_{\theta_{1},\theta_{2},d}}-E[T_{J_{\theta_{1},\theta_{2},d}}(X)])=L_{J_{\theta_{1},\theta_{2},d}}(X).$

Finally, building upon $f_{1}(x)$, we follow (3.4) to construct a new Vajda function $f_{2}(x):=h_{f_{1}}(x)$. According to the proof of Theorem 3.3, there exists a constant $\theta\in[f_{1}(x_{f_{1}})/x_{f_{1}},1]$ such that the function

 $\displaystyle f_{2}(x):=h_{f_{1}}(x)=\begin{cases}f_{1}(x),&0\leq x<x_{f_{1}},\\ x-\mathcal{E}^{R}_{f_{1}},&x_{f_{1}}\leq x\leq\dfrac{\mathcal{E}^{R}_{f_{1}}}{1-\theta},\\ \theta x,&x>\dfrac{\mathcal{E}^{R}_{f_{1}}}{1-\theta},\end{cases}$

satisfies $L_{f_{2}}(X)\leq L_{f_{1}}(X)$. Moreover, it is easy to check that $f_{2}\in\mathfrak{C}_{1}$. The proof of Theorem 3.5 is completed. ∎
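The optimal shape obtained above, three interconnected line segments, is easy to write down and sanity-check. The sketch below uses illustrative parameter values of our own choosing and verifies that the ceded loss $J_{\theta_{1},\theta_{2},d}$ and the retained loss are both increasing and that the ceded proportion $J_{\theta_{1},\theta_{2},d}(x)/x$ is increasing (the Vajda condition):

```python
import numpy as np

def J(x, theta1, theta2, d):
    # three interconnected line segments: slope theta1, then slope 1 (deductible d), then slope theta2
    x = np.asarray(x, dtype=float)
    b1, b2 = d / (1 - theta1), d / (1 - theta2)   # breakpoints, d/(1-theta1) <= d/(1-theta2)
    return np.where(x < b1, theta1 * x, np.where(x <= b2, x - d, theta2 * x))

theta1, theta2, d = 0.2, 0.8, 1.0                 # hypothetical values with theta1 <= theta2
xs = np.linspace(0.0, 10.0, 2001)
y = J(xs, theta1, theta2, d)
assert np.all(np.diff(y) >= -1e-12)               # ceded loss increasing
assert np.all(np.diff(xs - y) >= -1e-12)          # retained loss increasing
assert np.all(np.diff(y[1:] / xs[1:]) >= -1e-9)   # ceded proportion increasing (Vajda)
```

The first segment acts as quota-share coverage, the middle segment as stop-loss coverage above the deductible $d$, and the final segment caps the retained proportion, which is exactly the combination described in the conclusion.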

###### Proof of Lemma 4.1.
1. (1)

Similar to Cai and Weng (2016, Lemma 4.1), we can show that $B$ is a closed set. At the same time, $B$ is bounded, since

 $\displaystyle d$ $\displaystyle=K(d,\theta(d))\leq E[X]+(1+\beta)(1-\theta)\int_{0}^{\infty}\bar{F}(t)\mathrm{d}t$ $\displaystyle=(1+(1+\beta)(1-\theta))E[X]$

for any $d\geq 0$ and $0\leq\theta\leq 1$. Thus, the set $B$ is bounded from above by $(2+\beta)E[X]$.

Next, we show that $\min B=0$ and $\max B=\tilde{d}$. In fact,

 $\displaystyle K(0,\theta)$ $\displaystyle=E[X]-\int_{0}^{\infty}\bar{F}(t)\mathrm{d}t+(1+\beta)(1-\theta)\int_{0}^{\infty}\bar{F}(t)\mathrm{d}t$ $\displaystyle=(1+\beta)(1-\theta)E[X].$

Hence, $K(0,\theta)=0$ if and only if $\theta=1$, ie, $\theta(0)=1$. Thus, $0\in B$ and $\min B=0$.

When $d=\tilde{d}$, from the definition of $\tilde{d}$, we know that

 $\displaystyle\tilde{d}=E[X]+\beta\int_{\tilde{d}}^{\infty}\bar{F}(t)\mathrm{d}t.$ (5.26)

Hence,

 $\displaystyle K(\tilde{d},0)$ $\displaystyle=E[X]-\int_{\tilde{d}}^{\infty}\bar{F}(t)\mathrm{d}t+(1+\beta)\int_{\tilde{d}}^{\infty}\bar{F}(t)\mathrm{d}t$ $\displaystyle=E[X]+\beta\int_{\tilde{d}}^{\infty}\bar{F}(t)\mathrm{d}t$ $\displaystyle=\tilde{d}.$

Hence, $\tilde{d}\in B$ and $\theta(\tilde{d})=0.$ Further, taking the derivative with respect to $d$ on both sides of the equation $K(d,\theta)=d$, ie,

 $E[X]-\int_{d}^{\infty}\bar{F}(t)\mathrm{d}t+(1+\beta)(1-\theta)\int_{d/(1-\theta)}^{\infty}\bar{F}(t)\mathrm{d}t=d,$

we obtain

 $\displaystyle(1+\beta)\bigg(\frac{d}{1-\theta}\bar{F}\bigg(\frac{d}{1-\theta}\bigg)+\int_{d/(1-\theta)}^{\infty}\bar{F}(t)\mathrm{d}t\bigg)\frac{\mathrm{d}\theta}{\mathrm{d}d}$ $\displaystyle\qquad\qquad=-\bigg[F(d)+(1+\beta)\bar{F}\bigg(\frac{d}{1-\theta}\bigg)\bigg],$

ie,

 $\displaystyle\frac{\mathrm{d}\theta}{\mathrm{d}d}=-\frac{F(d)+(1+\beta)\bar{F}\bigg(\dfrac{d}{1-\theta}\bigg)}{(1+\beta)\bigg(\dfrac{d}{1-\theta}\bar{F}\bigg(\dfrac{d}{1-\theta}\bigg)+\displaystyle\int_{d/(1-\theta)}^{\infty}\bar{F}(t)\mathrm{d}t\bigg)}<0\quad\text{for all }(d,\theta)\in A\text{ with }0<\theta<1.$ (5.27)

Hence, $\theta(\cdot)$ is strictly decreasing. Further, if there exists $d>\tilde{d}$ such that $d\in B$, then $\theta(d)<\theta(\tilde{d})=0$, which contradicts the condition $0\leq\theta\leq 1$. Thus, $\max B=\tilde{d}$.

Finally, we show that $B=[0,\tilde{d}]$, ie, for any $d\in[0,\tilde{d}]$, there exists $\theta\in[0,1]$ such that $K(d,\theta)=d$.

Given $d\in[0,\tilde{d}]$, define the function

 $\displaystyle\hat{D}(\theta):=K(d,\theta)-d=E[X]-\int_{d}^{\infty}\bar{F}(t)\mathrm{d}t+(1+\beta)(1-\theta)\int_{d/(1-\theta)}^{\infty}\bar{F}(t)\mathrm{d}t-d,\quad\theta\in[0,1].$

Then

 $\displaystyle\hat{D}(0)$ $\displaystyle=E[X]+\beta\int_{d}^{\infty}\bar{F}(t)\mathrm{d}t-d$ $\displaystyle\geq E[X]+\beta\int_{\tilde{d}}^{\infty}\bar{F}(t)\mathrm{d}t-\tilde{d}$ $\displaystyle=0,$

where the inequality follows from the fact that $E[X]+\beta\int_{d}^{\infty}\bar{F}(t)\mathrm{d}t-d$ is decreasing in $d$ and $d\leq\tilde{d}$, and the last equality follows from (5.26). Similarly,

 $\displaystyle\hat{D}(1)$ $\displaystyle=E[X]-\int_{d}^{\infty}\bar{F}(t)\mathrm{d}t-d$ $\displaystyle=\int_{0}^{d}\bar{F}(t)\mathrm{d}t-d$ $\displaystyle=E[\min(X,d)]-d\leq 0.$

Hence, since $\hat{D}(\theta)$ is continuous in $\theta$, there exists a constant $\theta(d)\in[0,1]$ such that $\hat{D}(\theta(d))=0$. This yields that the domain of the function $\theta(\cdot)$ is $B=[0,\tilde{d}]$.
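For a tail function in closed form, the constant $\theta(d)$ can be computed exactly along the lines of the proof. The sketch below is our own toy instance, assuming $X\sim\operatorname{Exp}(1)$ (so $E[X]=1$ and $\bar{F}(t)=\mathrm{e}^{-t}$) and an arbitrary $\beta=8$; it solves $K(d,\theta)=d$ by bisection and checks $\theta(0)=1$, $\theta(\tilde{d})=0$ and the monotonicity of $\theta(\cdot)$:

```python
import math

beta = 8.0

def K(d, theta):
    # K(d, theta) for X ~ Exp(1): E[X] = 1 and int_d^inf exp(-t) dt = exp(-d)
    tail = 0.0 if theta == 1.0 else (1 - theta) * math.exp(-d / (1 - theta))
    return 1 - math.exp(-d) + (1 + beta) * tail

# d_tilde solves d = 1 + beta*exp(-d), ie K(d, 0) = d, cf. (5.26)
lo, hi = 0.0, 1 + beta
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if 1 + beta * math.exp(-mid) > mid else (lo, mid)
d_tilde = 0.5 * (lo + hi)

def theta_of(d):
    # bisection for a root of D^(theta) = K(d, theta) - d, using D^(0) >= 0 >= D^(1)
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if K(d, mid) - d >= 0 else (lo, mid)
    return 0.5 * (lo + hi)

thetas = [theta_of(i * d_tilde / 10) for i in range(11)]
assert abs(thetas[0] - 1.0) < 1e-9 and thetas[-1] < 1e-6   # theta(0) = 1, theta(d_tilde) = 0
assert all(a >= b for a, b in zip(thetas, thetas[1:]))     # theta(.) is decreasing
```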

2. (2)

This follows from the proof of (1). The proof of Lemma 4.1 is completed. ∎

## 6 Conclusions

In this paper, we have studied optimal reinsurance with expectile under the Vajda condition. For a general class of premium principles, we prove that the optimal ceded loss functions take the form of three interconnected line segments, which can be viewed as a combination of quota-share reinsurance and stop-loss reinsurance. Simplified forms of the optimal reinsurance treaties are obtained if the reinsurance premium is translation invariant or follows the expected value principle. Finally, the explicit expression for the optimal reinsurance treaty is given when the reinsurance premium follows the expected value principle or Wang's premium principle. Further research could explore the application of multivariate expectiles to optimal reinsurance contracts.

## Declaration of interest

The author reports no conflicts of interest. The author alone is responsible for the content and writing of this paper.

## Acknowledgements

The author is very grateful to the editor-in-chief, the editors and the anonymous referees for their constructive comments and suggestions, which led to the present, greatly improved version of the manuscript. The author was supported by the National Natural Science Foundation of China (No. 11901184), the Natural Science Foundation of Hunan Province (No. 2020JJ5025) and the Fundamental Research Funds for the Central Universities (No. 531107051210).
