
Calculating CVaR and bPOE for common probability distributions with application to portfolio optimization and density estimation

  • S.I.: Recent Developments in Financial Modeling and Risk Management
  • Annals of Operations Research

Abstract

Conditional value-at-risk (CVaR) and value-at-risk, also called the superquantile and quantile, respectively, are frequently used to characterize the tails of probability distributions and are popular measures of risk in applications where the distribution represents the magnitude of a potential loss. Buffered probability of exceedance (bPOE) is a recently introduced characterization of the tail which is the inverse of CVaR, much like the CDF is the inverse of the quantile. These quantities can prove very useful as the basis for a variety of risk-averse parametric engineering approaches. Their use, however, is often made difficult by the lack of well-known closed-form equations for calculating these quantities for commonly used probability distributions. In this paper, we derive formulas for the superquantile and bPOE for a variety of common univariate probability distributions. Besides providing a useful collection within a single reference, we use these formulas to incorporate the superquantile and bPOE into parametric procedures. In particular, we consider two: portfolio optimization and density estimation. First, when portfolio returns are assumed to follow particular distribution families, we show that finding the optimal portfolio via minimization of bPOE has advantages over superquantile minimization. We show that, given a fixed threshold, a single portfolio is the minimal bPOE portfolio for an entire class of distributions simultaneously. Second, we apply our formulas to parametric density estimation and propose the method of superquantiles (MOS), a simple variation of the method of moments where moments are replaced by superquantiles at different confidence levels. With the freedom to select various combinations of confidence levels, MOS allows the user to focus the fitting procedure on different portions of the distribution, such as the tail when fitting heavy-tailed asymmetric data.


Notes

  1. See Sect. 4 for specifics.

  2. When closed-form expressions are not available, we look to provide simple calculation methods that might still be utilized within parametric methods.

  3. Also called the product logarithm or omega function.

  4. Or minimized if we consider the negative.

  5. If this was not true, it would imply that there exists a portfolio with smaller bPOE and hence the current portfolio is not bPOE optimal. A formal proof can be found in Mafusalov and Uryasev (2018).

  6. Note that bPOE thresholds refer to loss thresholds instead of return thresholds. See Sect. 4.1.

  7. Right superquantile of loss distribution.

  8. Also sometimes called L-moments.

  9. A simple empirical estimate from a sample of N observations \(S=\{X_1,\dots ,X_N\}\) can be obtained by sorting S and calculating the average of the largest \((1-\alpha )N\) observations. More precise estimates can be obtained by weighted averages; see Proposition 8 of Rockafellar and Uryasev (2002) for details. A small code sketch of this estimate appears after these notes.

  10. www.scipy.org.

  11. Specifically, we used the leastsq function, which implements MINPACK’s lmdif routine. This routine requires function values and calculates the Jacobian by a forward-difference approximation. An illustrative use of this routine for MOS-style fitting appears after these notes.

  12. The fit from LS2 curves toward zero because it is the only fit with \(k > 1\).
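
The two sketches below are illustrative additions for notes 9 and 11; the function names, sample sizes, and confidence levels are our own choices, not taken from the paper. First, the simple empirical superquantile estimate of note 9 (rounding \((1-\alpha )N\) up to an integer is one arbitrary convention; the weighted estimator of Rockafellar and Uryasev (2002) is more precise):

    import numpy as np

    def empirical_superquantile(sample, alpha):
        # Average of the largest (1 - alpha) * N observations (note 9).
        x = np.sort(np.asarray(sample))
        k = int(np.ceil((1 - alpha) * len(x)))   # tail size; rounding is a choice
        return x[-k:].mean()

    # Exponential(rate 1) sample; Proposition 1 gives 1 - ln(1 - alpha) ~ 3.303.
    rng = np.random.default_rng(0)
    print(empirical_superquantile(rng.exponential(1.0, 100_000), 0.9))

Second, a hypothetical MOS-style fit in the spirit of note 11: leastsq matches the closed-form exponential superquantile \((1 - \ln (1-\alpha ))/\lambda \) from Proposition 1 to empirical estimates at a few confidence levels. The residual function and levels are illustrative only.

    import numpy as np
    from scipy.optimize import leastsq

    alphas = np.array([0.5, 0.75, 0.9, 0.95])   # arbitrary confidence levels

    def empirical_sq(x, a):                     # as in the sketch above
        x = np.sort(x)
        return x[-int(np.ceil((1 - a) * len(x))):].mean()

    rng = np.random.default_rng(1)
    data = rng.exponential(scale=2.0, size=50_000)   # true rate lambda = 0.5
    targets = np.array([empirical_sq(data, a) for a in alphas])

    def residuals(theta):
        lam = theta[0]
        return (1 - np.log(1 - alphas)) / lam - targets  # model minus estimate

    (lam_hat,), ier = leastsq(residuals, x0=np.array([1.0]))
    print(lam_hat)   # ~ 0.5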

References

  • Andreev, A., Kanto, A., & Malo, P. (2005). On closed-form calculation of CVaR. Helsinki School of Economics working paper W-389.

  • Artzner, P., Delbaen, F., Eber, J. M., & Heath, D. (1999). Coherent measures of risk. Mathematical Finance, 9, 203–228.

  • Davis, J. R., & Uryasev, S. (2016). Analysis of tropical storm damage using buffered probability of exceedance. Natural Hazards, 83(1), 465–483.

  • Everitt, B. S. (2006). The Cambridge dictionary of statistics. Cambridge: Cambridge University Press.

  • Karian, Z. A., & Dudewicz, E. J. (1999). Fitting the generalized lambda distribution to data: A method based on percentiles. Communications in Statistics: Simulation and Computation, 28(3), 793–819.

  • Landsman, Z. M., & Valdez, E. A. (2003). Tail conditional expectation for elliptical distributions. North American Actuarial Journal, 7(4), 55–71.

  • Mafusalov, A., Shapiro, A., & Uryasev, S. (2018). Estimation and asymptotics for buffered probability of exceedance. European Journal of Operational Research, 270(3), 826–836.

  • Mafusalov, A., & Uryasev, S. (2018). Buffered probability of exceedance: Mathematical properties and optimization. SIAM Journal on Optimization, 28(2), 1077–1103.

  • Norton, M., Mafusalov, A., & Uryasev, S. (2017). Soft margin support vector classification as buffered probability minimization. The Journal of Machine Learning Research, 18(1), 2285–2327.

  • Norton, M., & Uryasev, S. (2016). Maximization of AUC and buffered AUC in binary classification. Mathematical Programming, 174(1–2), 575–612.

  • Rockafellar, R., & Royset, J. (2010). On buffered failure probability in design and optimization of structures. Reliability Engineering & System Safety, 95, 499–510.

  • Rockafellar, R., & Uryasev, S. (2000). Optimization of conditional value-at-risk. The Journal of Risk, 2(3), 21–41.

  • Rockafellar, R. T., & Royset, J. O. (2014). Random variables, monotone relations, and convex analysis. Mathematical Programming, 148(1–2), 297–331.

  • Rockafellar, R. T., & Uryasev, S. (2002). Conditional value-at-risk for general loss distributions. Journal of Banking & Finance, 26(7), 1443–1471.

  • Sgouropoulos, N., Yao, Q., & Yastremiz, C. (2015). Matching a distribution by matching quantiles estimation. Journal of the American Statistical Association, 110(510), 742–759.

  • Shang, D., Kuzmenko, V., & Uryasev, S. (2018). Cash flow matching with risks controlled by buffered probability of exceedance and conditional value-at-risk. Annals of Operations Research, 260(1–2), 501–514.

  • Uryasev, S. (2014). Buffered probability of exceedance and buffered service level: Definitions and properties. Department of Industrial and Systems Engineering, University of Florida, research report 3.


Funding

This research was supported by the Naval Postgraduate School’s Research Initiation Program.

Author information

Correspondence to Matthew Norton.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Proofs

1.1 Proposition 1

Proof

First, note that \(q_\alpha (X) = \frac{-\ln (1 - \alpha ) }{\lambda }\) for exponential RVs with rate parameter \(\lambda \). We then have,

$$\begin{aligned} {\bar{q}}_\alpha (X)&= \frac{1}{1-\alpha } \int _{\alpha }^{1} q_p (X) dp \\&= \frac{-1}{\lambda (1-\alpha )} \int _{\alpha }^{1} \ln (1 - p ) dp \\&= \frac{-1}{\lambda (1-\alpha )} \int _{1-\alpha }^{0} -\ln (y ) dy = \frac{-1}{\lambda (1-\alpha )} \int _{0}^{1 - \alpha } \ln (y ) dy. \end{aligned}$$

Since \(\int \ln (y)dy = y \ln (y) - y + C\), we have

$$\begin{aligned} {\bar{q}}_\alpha (X)&= \frac{-1}{\lambda (1-\alpha )} \int _{0}^{1 - \alpha } \ln (y ) dy \\&= \frac{-1}{\lambda (1-\alpha )} \left[ ( 1-\alpha ) \ln (1-\alpha ) - (1-\alpha ) \right] = \frac{ - \ln (1-\alpha ) + 1 }{ \lambda } . \end{aligned}$$

We can then see that

$$\begin{aligned} {\bar{p}}_x (X)&= \{ 1 - \alpha | {\bar{q}}_\alpha (X) = x \} \\&= \{ 1 - \alpha |\frac{ - \ln (1-\alpha ) + 1 }{ \lambda } = x \} \\&= \{ 1 - \alpha | \ln (1-\alpha ) = 1-\lambda x \} \\&= \{ 1 - \alpha | e^{\ln (1-\alpha )} = e^{1-\lambda x} \} = \{ 1 - \alpha | 1-\alpha = e^{1-\lambda x} \} = e^{1-\lambda x} . \end{aligned}$$

\(\square \)
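
As an illustrative numerical check (not part of the original proof), the two closed forms invert one another, and the superquantile matches a Monte Carlo tail average; the parameters and sample size below are arbitrary.

    import numpy as np

    lam, alpha = 0.5, 0.9
    x = (1 - np.log(1 - alpha)) / lam        # superquantile from Proposition 1
    print(np.exp(1 - lam * x), 1 - alpha)    # bPOE at x recovers 1 - alpha

    rng = np.random.default_rng(0)
    s = rng.exponential(1 / lam, 1_000_000)
    q = np.quantile(s, alpha)
    print(x, s[s > q].mean())                # both ~ 6.605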

1.2 Corollary 1

Proof

We know that X, being exponential, has CDF given by \(P(X \le x) = 1 - e^{-\lambda x}\). From Proposition 1, we know that

$$\begin{aligned} {\bar{p}}_x (X) = e^{(1 - \lambda x)} = e^{-\lambda (\frac{-1}{\lambda } + x)}. \end{aligned}$$

Then, since \(\mu =\frac{1}{\lambda }\), it follows that \({\bar{p}}_x (X) = e^{-\lambda ( x - \mu )} = 1 - P(X \le x - \mu ) = P(X > x - \mu )\). The equality for CVaR follows easily from Proposition 1 since \(q_\alpha (X) = \frac{-\ln (1 - \alpha ) }{\lambda }\).

\(\square \)

1.3 Proposition 2

Proof

First, note that the conditional distribution of a Pareto, conditioned on the event that the random value is larger than some \(\gamma \ge x_m\), is simply another Pareto with parameters \(a,\gamma \). This implies that \(E[ X | X > \gamma ] = \frac{ a \gamma }{ a - 1}\) if \(a > 1\); otherwise the expectation is \(\infty \). Also, \(1- F(\gamma ) = \left( \frac{x_m}{\gamma } \right)^a\). Since,

$$\begin{aligned} E[ X - \gamma ]^+ = (E[ X | X > \gamma ] - \gamma )(1- F(\gamma )) , \end{aligned}$$

we will have that,

$$\begin{aligned} E[ X - \gamma ]^+ = \left( \frac{ a \gamma }{ a - 1}- \gamma \right) \left( \frac{x_m}{\gamma } \right)^a . \end{aligned}$$

This gives us bPOE formula,

$$\begin{aligned} {\bar{p}}_x(X)&= \min _{x_m\le \gamma<x} \frac{ \left( \frac{ a \gamma }{ a - 1}- \gamma \right) x_m^a }{ \gamma ^a (x-\gamma ) } \\&= \min _{x_m\le \gamma<x} \frac{ \left( \frac{ a }{ a - 1}- 1 \right) x_m^a }{ \gamma ^{a-1} (x-\gamma ) } = \left( \max _{x_m\le \gamma <x} \frac{\gamma ^{a-1} (x-\gamma )(a-1) }{x_m^a} \right)^{-1} \end{aligned}$$

Since \(a>1\), the maximization objective is concave over the range \(\gamma \in (0,\infty )\) which contains the range \([x_m,x)\), so we just need to take the gradient of function \(g(\gamma )=\frac{\gamma ^{a-1} (x-\gamma )(a-1) }{x_m^a}\) and set it to zero to find the optimal \(\gamma \) as follows:

$$\begin{aligned} \frac{\partial g}{\partial \gamma } = \frac{ x(a-1)^2 \gamma ^{a-2} - (a-1)a\gamma ^{a-1} }{x_m^a} =0&\implies x(a-1)^2 \gamma ^{a-2} = (a-1)a\gamma ^{a-1} \\&\implies \frac{x(a-1)}{a} = \gamma \\ \end{aligned}$$

Plugging this value of \(\gamma \) into the objective of our bPOE formula yields,

$$\begin{aligned} {\bar{p}}_x(X)&=\frac{ \left( \frac{ \frac{ax(a-1)}{a} }{a-1} - \frac{x(a-1)}{a} \right) x_m^a }{ \left(\frac{x(a-1)}{a}\right)^a \left( x - \frac{x(a-1)}{a} \right) } \\&= \left( \frac{x_m a}{x(a-1) } \right)^a \end{aligned}$$

CVaR is then equal to the value of x which solves the equation \(1-\alpha = {\bar{p}}_x(X)\) or,

$$\begin{aligned} 1-\alpha = \left( \frac{x_m a}{x(a-1) } \right)^a , \end{aligned}$$

which has solution,

$$\begin{aligned} {\bar{q}}_\alpha (X) = \frac{ x_m a }{ (1-\alpha )^{\frac{1}{a}} (a-1)} . \end{aligned}$$

\(\square \)
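
A brief sketch, with arbitrary parameters, confirming the inverse relationship between the two formulas and checking the superquantile against inverse-CDF sampling (valid here since \(a > 1\)):

    import numpy as np

    a, x_m, alpha = 2.5, 1.0, 0.95
    q_bar = x_m * a / ((1 - alpha) ** (1 / a) * (a - 1))  # CVaR, Proposition 2
    print((x_m * a / (q_bar * (a - 1))) ** a)             # bPOE at q_bar: 0.05

    rng = np.random.default_rng(2)
    s = x_m * (1 - rng.random(1_000_000)) ** (-1 / a)     # Pareto via inverse CDF
    q = np.quantile(s, alpha)
    print(s[s > q].mean(), q_bar)                         # should roughly agree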

1.4 Corollary 2

Proof

It follows by simply comparing the formulas from Proposition 2 with the CDF and quantile formulas for a Pareto RV. \(\square \)

1.5 Proposition 3

Proof

For these results, we rely on the fact that if \(X \sim GPD(\mu ,s,\xi )\), then \(X-\gamma | X> \gamma \sim GPD(0,s+\xi (\gamma -\mu ),\xi )\), meaning that the excess distribution of a GPD random variable is also GPD. Now, note also that if \(\xi <1\), then \(E[X]=\mu + \frac{s}{1-\xi }\). This gives us,

$$\begin{aligned} E[X-\gamma | X>\gamma ] = E[ GPD(0,s+\xi (\gamma -\mu ),\xi )] = \frac{s+\xi (\gamma - \mu )}{1-\xi } \end{aligned}$$

which further implies that,

$$\begin{aligned} {\bar{q}}_\alpha (X)&= E[X-q_\alpha (X) | X>q_\alpha (X)] + q_\alpha (X) \\&= \frac{s + \xi ( q_\alpha (X) - \mu )}{1-\xi } +q_\alpha (X) . \end{aligned}$$

Plugging in the values of the quantile functions yields the final formulas. Using the formulas we just found for \({\bar{q}}_\alpha (X)\), it is straightforward to solve for \({\bar{p}}_x(X)\) which equals \(1-\alpha \) such that \(\alpha \) solves the equation \(x={\bar{q}}_\alpha (X)\). \(\square \)
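
The displayed relation can be checked directly with SciPy's genpareto; this is an illustrative sketch with arbitrary parameters (any \(\xi < 1\), so the mean is finite), not the paper's final closed form.

    import numpy as np
    from scipy.stats import genpareto

    mu, s, xi, alpha = 0.0, 1.0, 0.3, 0.9
    X = genpareto(c=xi, loc=mu, scale=s)
    q = X.ppf(alpha)
    cvar = (s + xi * (q - mu)) / (1 - xi) + q   # relation from the proof
    smp = X.rvs(size=1_000_000, random_state=np.random.default_rng(3))
    print(cvar, smp[smp > q].mean())            # should roughly agree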

1.6 Proposition 4

Proof

To get the superquantile, we begin with the integral representation:

$$\begin{aligned} {\bar{q}}_\alpha (X)&= \frac{1}{1-\alpha } \int _{\alpha }^{1} q_p (X) dp \\&= \frac{1}{1-\alpha } \int _{\alpha }^{1} \mu - b\,sign(p-.5)\,\ln (1 - 2|p-.5|) dp \\&= \mu - \frac{b}{1-\alpha } \int _{\alpha }^{1} sign(p-.5)\,\ln (1 - 2|p-.5|) dp \\&= \mu - \frac{b}{1-\alpha } \left( \int _{\min \{\alpha ,.5\}}^{.5} -\ln (2p) dp + \int _{\max \{\alpha ,.5\}}^{1} \ln (2(1-p)) dp \right) . \end{aligned}$$

To evaluate the integral, we use simple substitution as well as the identity \(\int \ln (y)dy = y \ln (y) - y + C\). After simplifying, we see that with \(\alpha < .5\) the integral evaluates to,

$$\begin{aligned} {\bar{q}}_\alpha (X)= \mu +b \left(\frac{\alpha }{1-\alpha } \right)(1 - \ln (2 \alpha )) . \end{aligned}$$

Similarly, we find that with \(\alpha \ge .5\) the integral evaluates to,

$$\begin{aligned} {\bar{q}}_\alpha (X) =\mu + b\left(1 - \ln \left(2(1- \alpha )\right) \right). \end{aligned}$$

For bPOE, first assume that threshold \(x \ge \mu +b\). Using our formula for CVaR, we see that \({\bar{q}}_{.5} (X) = \mu + b\). Thus, \(x \ge \mu +b\) implies that \(1-{\bar{p}}_x(X) \ge .5\) implying that

$$\begin{aligned} {\bar{p}}_x (X)&= \{ 1 - \alpha | {\bar{q}}_\alpha (X) = x , \alpha \ge .5\} \\&= \{ 1 - \alpha |\mu + b\left(1 - \ln \left(2(1- \alpha )\right) \right) = x \} \\&= \frac{1}{2}e^{1-\left(\frac{x-\mu }{b} \right)} . \end{aligned}$$

Assume contrarily that \(x< \mu + b\). Since \({\bar{q}}_{.5} (X) = \mu + b\), we have that \(1-{\bar{p}}_x(X) < .5\) which implies that

$$\begin{aligned} {\bar{p}}_x (X)&= \{ 1 - \alpha | {\bar{q}}_\alpha (X) = x , \alpha < .5\} \\&= \left\{ 1 - \alpha |\mu +b \left(\frac{\alpha }{1-\alpha } \right)(1 - \ln (2 \alpha ))= x \right\} . \end{aligned}$$

Letting \(z=\frac{x-\mu }{b}\), we must now find \(\alpha \) which solves the equation \(\left(\frac{\alpha }{1-\alpha } \right)(1 - \ln (2 \alpha ))= z\). We do so as follows:

$$\begin{aligned} \left(\frac{\alpha }{1-\alpha } \right)(1 - \ln (2 \alpha ))= z&\implies \frac{-z}{\alpha } = \frac{( \ln (2 \alpha ) - 1)}{1-\alpha } \\&\implies e^{\frac{-z}{\alpha }} = e^{\frac{(\ln (2 \alpha ) - 1 )}{1-\alpha }} = \left( \frac{2\alpha }{e} \right)^\frac{1}{1-\alpha } \\&\implies e^{\frac{-z(1-\alpha )}{\alpha }} = \left( \frac{2\alpha }{e} \right)\\&\implies \frac{-z}{\alpha } e^{-z(\frac{1}{\alpha } -1)} = -2ze^{-1} \\&\implies \frac{-z}{\alpha } e^{\frac{-z}{\alpha } } = -2ze^{-z-1} \\&\implies \frac{-z}{\alpha } = {\mathcal {W}}(-2ze^{-z-1}) . \end{aligned}$$

where the final step follows from the definition of the Lambert-\({\mathcal {W}}\) function which is given by the relation \(xe^x=y \iff {\mathcal {W}}(y)=x\). Thus, \(\frac{-z}{\alpha } = {\mathcal {W}}(-2ze^{-z-1}) \implies {\bar{p}}_x(X) = 1-\alpha =1 + \frac{z}{{\mathcal {W}}( -2e^{-z-1}z)} \). \(\square \)
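
A sketch of the resulting computation using scipy.special.lambertw. The case split and the choice of the \(k=-1\) branch for \(\mu< x < \mu + b\) reflect our reading of the proof (there \(\alpha < 0.5\), which forces \(-z/\alpha < -1\)); parameters are arbitrary.

    import numpy as np
    from scipy.special import lambertw

    def laplace_bpoe(x, mu=0.0, b=1.0):
        z = (x - mu) / b
        if z <= 0:                        # threshold at or below the mean
            return 1.0
        if z >= 1:                        # x >= mu + b, i.e. alpha >= 0.5
            return 0.5 * np.exp(1 - z)
        w = lambertw(-2 * z * np.exp(-z - 1), k=-1)   # branch with W <= -1
        return float(np.real(1 + z / w))

    # Inverse check: CVaR at alpha = 1 - bPOE(x) should recover x = 0.5.
    alpha = 1 - laplace_bpoe(0.5)
    print(alpha / (1 - alpha) * (1 - np.log(2 * alpha)))   # ~ 0.5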

1.7 Proposition 5

Proof

It is well known that if \(X \sim {\mathcal {N}}(0,1)\), then the conditional expectation is given by the inverse Mills Ratio, \(E[ X | X > \gamma ] = \frac{f(\gamma )}{1 - F(\gamma ) }\). It follows then that \({\bar{q}}_\alpha (X) = E[ X | X > q_\alpha (X) ] = \frac{f(q_\alpha (X))}{1 - F(q_\alpha (X)) }= \frac{f(q_\alpha (X))}{1 - \alpha }\). \(\square \)

1.8 Proposition 6

Proof

Note that for a standard normal random variable, the tail expectation beyond any threshold \(\gamma \) is given by the inverse Mills Ratio,

$$\begin{aligned} E[ X | X > \gamma ] = \frac{f(\gamma )}{1 - F(\gamma ) } . \end{aligned}$$

Note also that for any threshold \(\gamma \) and any random variable we have,

$$\begin{aligned} E[ X - \gamma ]^+ = (E[ X | X > \gamma ] - \gamma )(1- F(\gamma )) . \end{aligned}$$

Using the Mills ratio gives us,

$$\begin{aligned} E[ X - \gamma ]^+ = \left( \frac{f(\gamma )}{1 - F(\gamma ) } - \gamma \right) (1- F(\gamma )) = f(\gamma ) - \gamma (1 - F(\gamma )) . \end{aligned}$$

Plugging this result into the minimization formula for bPOE yields the final formula. \(\square \)
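
A minimal sketch of this minimization for the standard normal using SciPy; the search bracket is an arbitrary choice, and the final line checks consistency with Proposition 5.

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize_scalar

    def normal_bpoe(x):
        # min over gamma < x of (f(gamma) - gamma*(1 - F(gamma))) / (x - gamma)
        obj = lambda g: (norm.pdf(g) - g * norm.sf(g)) / (x - g)
        return minimize_scalar(obj, bounds=(-10.0, x - 1e-9), method="bounded").fun

    p = normal_bpoe(2.0)
    alpha = 1 - p
    print(p, norm.pdf(norm.ppf(alpha)) / (1 - alpha))   # second value ~ 2.0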

1.9 Proposition 7

Proof

This follows from the fact that \({\bar{q}}_\alpha (X) = E[ X | X > q_\alpha (X) ] = \frac{f(q_\alpha (X))}{1 - F(q_\alpha (X)) }\) and the optimization formula of bPOE given in the previous proposition for normally distributed variables. \(\square \)

1.10 Proposition 8

Proof

To derive the integral representation, simply plug in the formula for \(E[ X - \gamma ]^+\), then utilize the definition of the PDF and CDF. The gradient calculation is a standard calculus exercise. \(\square \)

1.11 Proposition 9

Proof

We simply evaluate the integral of the quantile function as follows, noting that \(\text {erf}^{-1}(2p-1) \rightarrow \infty \) as \(p \rightarrow 1\), so that \(\text {erf}\left( \frac{s}{\sqrt{2}} - \text {erf}^{-1}(2p -1) \right) \rightarrow -1\) and the antiderivative vanishes at the upper limit of integration.

$$\begin{aligned} {\bar{q}}_\alpha (X)&= \frac{1}{1-\alpha } \int _{\alpha }^{1} q_p (X) dp \\&= \frac{1}{1-\alpha }\int _{\alpha }^{1} e^{\mu + s\sqrt{2} \text {erf}^{-1}(2p -1)} dp \\&= \frac{e^{\mu }}{1-\alpha }\int _{\alpha }^{1} e^{ s\sqrt{2} \text {erf}^{-1}(2p -1)} dp \\&= \frac{e^{\mu }}{1-\alpha } \left[ -\frac{1}{2} e^{ \frac{s^2}{2}} \left( 1 + \text {erf}\left( \frac{s}{\sqrt{2}} - \text {erf}^{-1}(2p -1) \right) \right) \right]_{p=\alpha }^1 \\&= \frac{e^{\mu }}{1-\alpha } \cdot \frac{1}{2} e^{ \frac{s^2}{2}} \left( 1 + \text {erf}\left( \frac{s}{\sqrt{2}} - \text {erf}^{-1}(2\alpha -1) \right) \right) \\&= \frac{1}{2} e^{\mu + \frac{s^2}{2}} \frac{ \left[ 1 + \text {erf}\left( \frac{s}{\sqrt{2}} - \text {erf}^{-1}(2\alpha -1) \right) \right] }{ 1-\alpha }. \end{aligned}$$

\(\square \)
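
An illustrative check of the final formula against a direct average of quantiles over \((\alpha ,1)\) (midpoint rule); parameters are arbitrary.

    import numpy as np
    from scipy.special import erf, erfinv
    from scipy.stats import lognorm

    mu, s, alpha = 0.1, 0.8, 0.9
    cvar = (0.5 * np.exp(mu + s**2 / 2)
            * (1 + erf(s / np.sqrt(2) - erfinv(2 * alpha - 1))) / (1 - alpha))

    X = lognorm(s=s, scale=np.exp(mu))
    n = 100_000
    p = alpha + (1 - alpha) * (np.arange(n) + 0.5) / n   # midpoints of (alpha, 1)
    print(cvar, X.ppf(p).mean())                         # both ~ 4.80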

1.12 Proposition 10

Proof

To obtain the superquantile, we have

$$\begin{aligned} {\bar{q}}_\alpha (X)&= \frac{1}{1-\alpha } \int _{\alpha }^{1} q_p (X) dp \\&= \frac{1}{1-\alpha } \int _{\alpha }^{1} \mu + s \ln \left( \frac{p}{1-p} \right) dp \\&= \mu + \frac{s}{1-\alpha } \int _{\alpha }^{1} \ln (p) - \ln (1-p) dp \\&= \mu + \frac{s}{1-\alpha }\left( \int _{\alpha }^{1} \ln (p) dp + \int _{\alpha }^{1} - \ln (1-p) dp \right) \end{aligned}$$

Utilizing simple substitution as well as the identity \(\int \ln (y)dy = y \ln (y) - y + C\), we get

$$\begin{aligned} {\bar{q}}_\alpha (X)&= \mu + \frac{s}{1-\alpha } \left( -1 - \alpha \ln \alpha + \alpha - (1-\alpha ) \ln (1-\alpha ) + (1-\alpha ) \right) \\&= \mu + \frac{s}{1-\alpha } \left( - \alpha \ln \alpha - (1-\alpha ) \ln (1-\alpha ) \right) \\&= \mu + \frac{s}{1-\alpha } H( \alpha ) . \end{aligned}$$

To get bPOE, we simply follow the bPOE definition, needing to find \(\alpha \) which solves \( \mu + \frac{s}{1-\alpha } H( \alpha )=x\). The transformed system arises from combining logarithms within the superquantile formula and applying exponential transformations. \(\square \)
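
As a quick illustrative check (arbitrary parameters), the entropy form agrees with a direct average of logistic quantiles over \((\alpha ,1)\):

    import numpy as np

    mu, s, alpha = 0.0, 1.0, 0.9
    H = -alpha * np.log(alpha) - (1 - alpha) * np.log(1 - alpha)
    cvar = mu + s * H / (1 - alpha)

    n = 1_000_000
    p = alpha + (1 - alpha) * (np.arange(n) + 0.5) / n   # midpoint rule
    print(cvar, (mu + s * np.log(p / (1 - p))).mean())   # both ~ 3.25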

1.13 Proposition 11

Proof

This follows from the fact that \(E[X-\gamma ]^+ = \int _{\gamma }^{\infty } (1-F(t))dt\). Evaluating this integral for \(X \sim Logistic(\mu ,s)\) yields, \(E[X-\gamma ]^+= s\ln ( 1+e^{-(\frac{\gamma -\mu }{s}) } )\) which can then be plugged into the minimization formula for bPOE. The second part of the proposition follows from the fact that the gradient of the objective function w.r.t. \(\gamma \) is given by,

$$\begin{aligned} \frac{s\ln ( 1+e^{-(\frac{\gamma -\mu }{s}) } )}{ (x-\gamma )^2 } - \frac{ e^{-(\frac{\gamma -\mu }{s} )} }{ (x-\gamma )\left( 1+e^{-(\frac{\gamma -\mu }{s} )} \right) } . \end{aligned}$$

Setting this gradient to zero and simplifying yields the stated optimality condition. \(\square \)
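
A sketch that solves this first-order condition by bracketed root-finding and then evaluates the bPOE objective; the bracket endpoints and the requirement \(x > \mu \) (so that bPOE is below one) are our assumptions.

    import numpy as np
    from scipy.optimize import brentq

    def logistic_bpoe(x, mu=0.0, s=1.0):   # assumes x > mu
        def grad(g):
            u = (g - mu) / s
            return (s * np.log1p(np.exp(-u)) / (x - g) ** 2
                    - np.exp(-u) / ((x - g) * (1 + np.exp(-u))))
        g_opt = brentq(grad, mu - 50 * s, x - 1e-9)   # bracket is an assumption
        return s * np.log1p(np.exp(-(g_opt - mu) / s)) / (x - g_opt)

    print(logistic_bpoe(3.0))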

1.14 Proposition 12

Proof

Since there is no closed-form expression for the quantile, we utilize the representation of the superquantile given by \(\frac{1}{1-\alpha } \int _{q_\alpha (X)}^\infty t f(t) dt\). To evaluate this integral, we first take the derivative of the PDF, giving

$$\begin{aligned} \frac{df(x)}{dx} = \frac{-f(x)(x-\mu )(\nu +1)}{\nu s^2 + (x-\mu )^2}. \end{aligned}$$

Rearranging yields,

$$\begin{aligned} xf(x)dx = \frac{-\nu s^2 df(x) }{(\nu +1)} - \frac{(x-\mu )^2df(x)}{(\nu +1)} + \mu f(x) dx . \end{aligned}$$

We can then integrate both sides,

$$\begin{aligned} \int xf(x)dx = \frac{-\nu s^2 f(x) }{(\nu +1)} - \frac{1}{(\nu +1)} \int (x-\mu )^2df(x)+ \mu F(x). \end{aligned}$$

Integrating by parts gives us the following form of the middle term;

$$\begin{aligned} \int (x-\mu )^2df(x) = (x-\mu )^2 f(x) - 2 \int x f(x) dx + 2 \mu F(x) . \end{aligned}$$

Then, finally, after substituting this new expression for the middle term and simplifying, we get

$$\begin{aligned} \int xf(x)dx = - \frac{(\nu s^2 + (x- \mu )^2 )}{(\nu -1)}f(x) + \mu F(x). \end{aligned}$$

Taking the definite integral yields,

$$\begin{aligned} \int _{q_\alpha (X)}^\infty xf(x)dx&= \left( - \lim _{x \rightarrow \infty }\frac{(\nu s^2 + (x- \mu )^2) }{(\nu -1)}f(x) + \lim _{x \rightarrow \infty } \mu F(x) \right) \\&\qquad \qquad \quad - \left(- \frac{(\nu s^2 + (q_\alpha (X)- \mu )^2) }{(\nu -1)}f(q_\alpha (X)) + \mu F(q_\alpha (X)) \right). \end{aligned}$$

It is easy to see that the second limit goes to one and, after applying l’Hôpital’s rule where necessary, that the first limit goes to zero. This leaves

$$\begin{aligned} \int _{q_\alpha (X)}^\infty xf(x)dx&= \mu - \left(- \frac{(\nu s^2 + (q_\alpha (X)- \mu )^2) }{(\nu -1)}f(q_\alpha (X)) + \mu F(q_\alpha (X)) \right)\\&= \mu ( 1 - \alpha ) + \left( \frac{\nu s^2 + (q_\alpha (X)- \mu )^2 }{(\nu -1)} \right) f(q_\alpha (X)) \\&= \mu ( 1 - \alpha ) + s \left( \frac{\nu + T^{-1}(\alpha )^2 }{(\nu -1)} \right) \tau (T^{-1}(\alpha ) ), \end{aligned}$$

where the final step comes from writing the non-standardized quantile \(q_\alpha (X)\) and PDF f(x) in their standardized form. Then, finally, dividing by \(1-\alpha \) yields the formula

$$\begin{aligned} {\bar{q}}_\alpha (X) =\frac{1}{1-\alpha } \int _{q_\alpha (X)}^\infty xf(x)dx = \mu + s \left( \frac{\nu + T^{-1}(\alpha )^2 }{(\nu -1)(1-\alpha )} \right) \tau (T^{-1}(\alpha ) ) . \end{aligned}$$

\(\square \)
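
A sketch of the resulting formula using scipy.stats.t, with a Monte Carlo tail average as an arbitrary-parameter sanity check (the formula requires \(\nu > 1\)):

    import numpy as np
    from scipy.stats import t as student_t

    def t_superquantile(alpha, nu, mu=0.0, s=1.0):
        q = student_t.ppf(alpha, nu)     # standardized quantile T^{-1}(alpha)
        tau = student_t.pdf(q, nu)       # standardized density
        return mu + s * (nu + q**2) / ((nu - 1) * (1 - alpha)) * tau

    smp = student_t.rvs(5, size=2_000_000, random_state=np.random.default_rng(4))
    q = np.quantile(smp, 0.95)
    print(t_superquantile(0.95, 5), smp[smp > q].mean())   # both ~ 2.9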

1.15 Proposition 13

Proof

To calculate the superquantile, we utilize the integral representation (1), which is

$$\begin{aligned} {\bar{q}}_\alpha (X)&= \frac{1}{1-\alpha } \int _{\alpha }^{1} q_p (X) dp \\&= \frac{1}{1-\alpha } \int _{\alpha }^{1} \lambda {(-\ln (1-p))}^{1/k} dp . \end{aligned}$$

To put this integral into the form of the upper incomplete gamma function, make the change of variable \(y= -\ln (1-p)\). This gives \(e^y = \frac{1}{1-p}\) and \(dp = (1-p) dy = e^{-y} dy\), with new lower limit of integration \( -\ln (1-\alpha )\) and upper limit of integration \( \infty \). Applying this to the integral yields

$$\begin{aligned} {\bar{q}}_\alpha (X)&= \frac{\lambda }{1-\alpha } \int _{-\ln (1-\alpha )}^{\infty } {y}^{1/k} e^{-y} dy \\&= \frac{\lambda }{1-\alpha } \varGamma _U \left( 1 + \frac{1}{k} , -\ln (1-\alpha ) \right) . \end{aligned}$$

\(\square \)
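
In code, note that SciPy's gammaincc is the regularized upper incomplete gamma, so it must be multiplied by \(\varGamma (1+1/k)\) to recover \(\varGamma _U\); the parameters below are arbitrary.

    import numpy as np
    from scipy.special import gamma, gammaincc
    from scipy.stats import weibull_min

    def weibull_superquantile(alpha, k, lam):
        a = 1 + 1 / k
        return lam / (1 - alpha) * gamma(a) * gammaincc(a, -np.log(1 - alpha))

    alpha, k, lam = 0.9, 1.5, 2.0
    smp = weibull_min.rvs(k, scale=lam, size=1_000_000,
                          random_state=np.random.default_rng(5))
    q = np.quantile(smp, alpha)
    print(weibull_superquantile(alpha, k, lam), smp[smp > q].mean())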

1.16 Proposition 14

Proof

To calculate the superquantile, we utilize the integral representation as follows:

$$\begin{aligned} {\bar{q}}_\alpha (X)&= \frac{1}{1-\alpha } \int _{\alpha }^{1} q_p (X) dp \\&= \frac{1}{1-\alpha } \left( \int _{0}^{1} q_p (X) dp - \int _{0}^{\alpha } q_p (X) dp \right)\\&= \frac{1}{1-\alpha } \left( E[X] - \int _{0}^{\alpha } q_p (X) dp \right)\\&= \frac{1}{1-\alpha } \left( E[X] - a \int _{0}^{\alpha } \left( \frac{p}{1-p} \right)^{\frac{1}{b}} dp \right) . \end{aligned}$$

Now, note first that for \(X \sim LogLogistic(a,b)\) with \(b > 1\), we have \(E[X] =a\frac{\pi }{b} \csc \left( \frac{\pi }{b} \right)\). Next, by the definition of the incomplete beta function, we can see that

$$\begin{aligned} B_\alpha \left( \frac{1}{b}+1,1 - \frac{1}{b}\right) =\int _{0}^{\alpha }p^{\frac{1}{b}}\,(1-p)^{-\frac{1}{b}}\,dp . \end{aligned}$$

Using these two facts, we have,

$$\begin{aligned} {\bar{q}}_\alpha (X)&= \frac{1}{1-\alpha } \left( E[X] - a \int _{0}^{\alpha } \left( \frac{p}{1-p} \right)^{\frac{1}{b}} dp \right) \\&= \frac{1}{1-\alpha } \left( a\frac{\pi }{b} \csc \left( \frac{\pi }{b} \right) - a B_\alpha \left( \frac{1}{b}+1,1 - \frac{1}{b}\right) \right) \\&= \frac{a}{1-\alpha } \left( \frac{\pi }{b} \csc \left( \frac{\pi }{b} \right) - B_\alpha \left( \frac{1}{b}+1,1 - \frac{1}{b}\right) \right) . \end{aligned}$$

\(\square \)
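
A sketch using SciPy, where betainc is the regularized incomplete beta and is rescaled by beta(., .) to obtain the unnormalized \(B_\alpha \); fisk is SciPy's name for the log-logistic. Parameters are arbitrary, with \(b > 1\) so the mean exists.

    import numpy as np
    from scipy.special import beta, betainc
    from scipy.stats import fisk

    def loglogistic_superquantile(alpha, a, b):
        inc = betainc(1/b + 1, 1 - 1/b, alpha) * beta(1/b + 1, 1 - 1/b)
        return a / (1 - alpha) * (np.pi / b / np.sin(np.pi / b) - inc)

    alpha, a, b = 0.9, 1.0, 3.0
    smp = fisk.rvs(b, scale=a, size=1_000_000,
                   random_state=np.random.default_rng(6))
    q = np.quantile(smp, alpha)
    print(loglogistic_superquantile(alpha, a, b), smp[smp > q].mean())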

1.17 Proposition 15

Proof

Assume we have \(\xi =0\). Using \(\int _{0}^{1} \ln (-\ln (p))dp = -\gamma \), where \(\gamma \approx 0.5772\) denotes the Euler–Mascheroni constant, and \(\int _{0}^{\alpha } \ln (-\ln (p))dp = \alpha \ln (-\ln (\alpha )) - {\mathrm {li}}(\alpha )\), we have

$$\begin{aligned} {\bar{q}}_\alpha (X)&= \frac{1}{1-\alpha } \int _{\alpha }^{1} \mu - s \ln (-\ln (p))dp \\&= \mu - \frac{s}{1-\alpha } \left( \int _{0}^{1} \ln (-\ln (p))dp - \int _{0}^{\alpha } \ln (-\ln (p))dp \right)\\&= \mu - \frac{s}{1-\alpha } \left(-\gamma - \alpha \ln (-\ln (\alpha )) + {\mathrm {li}}(\alpha ) \right)\\&= \mu + \frac{s}{1-\alpha } \left(\gamma + \alpha \ln (-\ln (\alpha )) - {\mathrm {li}}(\alpha ) \right) .\\ \end{aligned}$$

Assume now that \(\xi \ne 0\). Then, we have that,

$$\begin{aligned} {\bar{q}}_\alpha (X)&= \frac{1}{1-\alpha } \int _{\alpha }^{1} \mu + \frac{s}{\xi } \left( \left( \ln \frac{1}{p}\right) ^{-\xi } -1 \right) dp \\&= \mu + \frac{s}{\xi (1-\alpha )} \int _{\alpha }^{1} \left( \left( \ln \frac{1}{p}\right) ^{-\xi } -1 \right) dp\\&= \mu + \frac{s}{\xi (1-\alpha )} \left[ \varGamma _L\left( 1-\xi ,\ln \frac{1}{\alpha }\right) - (1-\alpha ) \right] . \end{aligned}$$

\(\square \)
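
For the \(\xi = 0\) (Gumbel) case, the formula as corrected above can be evaluated with \({\mathrm {li}}(\alpha ) = \mathrm {Ei}(\ln \alpha )\) via scipy.special.expi; this sketch and its Monte Carlo check use arbitrary parameters.

    import numpy as np
    from scipy.special import expi
    from scipy.stats import gumbel_r

    def gumbel_superquantile(alpha, mu=0.0, s=1.0):
        li = expi(np.log(alpha))                 # li(alpha) = Ei(ln(alpha))
        return mu + s / (1 - alpha) * (np.euler_gamma
                                       + alpha * np.log(-np.log(alpha)) - li)

    smp = gumbel_r.rvs(size=1_000_000, random_state=np.random.default_rng(7))
    q = np.quantile(smp, 0.9)
    print(gumbel_superquantile(0.9), smp[smp > q].mean())   # both ~ 3.28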


Cite this article

Norton, M., Khokhlov, V. & Uryasev, S. Calculating CVaR and bPOE for common probability distributions with application to portfolio optimization and density estimation. Ann Oper Res 299, 1281–1315 (2021). https://doi.org/10.1007/s10479-019-03373-1

