Value-at-Risk (VaR) is a powerful tool for assessing market risk, but it also poses a challenge. Its power is its generality. Unlike market risk metrics such as the Greeks, duration and convexity, or beta, which are applicable to only certain asset categories or certain sources of market risk, VaR is general. It is based on the probability distribution for a portfolio's market value. All liquid assets have uncertain market values, which can be characterized with probability distributions. All sources of market risk contribute to those probability distributions. Because it applies to all liquid assets and encompasses, at least in theory, all sources of market risk, VaR is a comprehensive measure of market risk.
As with its power, the challenge of VaR also stems from its generality. In order to measure market risk in a portfolio using VaR, some means must be found for determining the probability distribution of that portfolio's market value. Obviously, the more complex a portfolio is—the more asset categories and sources of market risk it is exposed to—the more challenging that task becomes.
It is worth distinguishing between three concepts:
A VaR measure is an algorithm with which we calculate a portfolio's VaR.
A VaR model is the financial theory, mathematics, and logic that motivate a VaR measure. It is the intellectual justification for the computations that are the VaR measure.
A VaR metric is our interpretation for the output of the VaR measure.
Examples of VaR metrics are one-day 95% USD VaR or one-week standard deviation of return EUR VaR. A VaR measure is just a bunch of computations. What justifies our interpreting the output of those computations as, say, two-week 99% EUR VaR? The answer is the VaR model. The VaR model is the intellectual link between the computations of a VaR measure and the interpretation of the output of those computations, which is the VaR metric.
This article focuses on VaR measures and VaR models. Conveniently, these can be discussed without regard for specific VaR metrics. The reason is that valuation of a VaR metric is the final step of any VaR measure. The real work for a VaR measure is to somehow characterize a probability distribution for a portfolio's market value. Valuing a specific VaR metric based on that characterization is a final step—it is almost an afterthought. By changing that final step of a VaR measure, we can alter the VaR measure to support a different VaR metric. Accordingly, to a large extent, any VaR measure can support any VaR metric, and we can discuss VaR measures without considering the specific VaR metrics they are to support.
Measure time in trading days. Let 0 be the current time. We know a portfolio's current market value 0p. Its market value 1P in one trading day is unknown (see the notation conventions documentation). It is a random variable. Our notation uses preceding superscripts to denote time. We find it convenient to indicate random quantities with capital letters and known constants with lower-case letters.
Our task is to ascribe a probability distribution to 1P. One way that we might simplify this task is to assume some standard distribution. Doing so reduces the problem from one of estimating an entire distribution to that of estimating the handful of parameters necessary to specify that standard distribution. Depending upon the standard distribution that is assumed, this simple approach may yield a closed-form solution for the portfolio's VaR.
For example, a normal distribution is fully described with two parameters, its mean and standard deviation. If we assume 1P is normally distributed, then all we need do in order to measure VaR is estimate the mean 1|0μ and standard deviation 1|0σ of that distribution. (The preceding superscripts in our notation indicate that these parameters are for the portfolio's time 1 value conditional on information available at time 0.) Together with the normality assumption, these two parameters provide all the information necessary to value any other parameter, including any VaR metric, related to the distribution of 1P. For example, if our VaR metric is one-day 95% USD VaR, we can calculate VaR as

95% VaR = 0p − (1|0μ − 1.645 · 1|0σ)
This formula is based on the fact that the 5% quantile of a normal distribution always occurs 1.645 standard deviations below its mean. See Exhibit 1 for an illustration, or see the article linear value-at-risk for a more detailed discussion.
In practice, a portfolio's expected value 1|0μ will often be close to its current value 0p. This is especially true over short VaR horizons, such as the one-trading-day horizon of our example. In this circumstance, it may be reasonable to set 1|0μ = 0p. With this simplification, our formula for 95% VaR becomes

95% VaR = 1.645 · 1|0σ
Based upon similar assumptions, formulas for 90%, 97.5% and 99% VaR are

90% VaR = 1.282 · 1|0σ
97.5% VaR = 1.960 · 1|0σ
99% VaR = 2.326 · 1|0σ
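These quantile multipliers are easy to verify numerically. Below is a minimal sketch in Python; the function name and its arguments are our own, and the portfolio standard deviation 1|0σ is assumed to have been estimated already:

```python
from statistics import NormalDist

def normal_var(sigma, confidence=0.95):
    """Parametric VaR under the assumptions above: the portfolio's
    time-1 value is normal and its mean 1|0mu equals the current
    value 0p, so VaR is a quantile multiplier times 1|0sigma."""
    z = NormalDist().inv_cdf(1.0 - confidence)  # e.g. -1.645 at 95%
    return -z * sigma

# The multipliers quoted above fall out of the normal quantiles:
for c in (0.90, 0.95, 0.975, 0.99):
    print(f"{c:.1%} VaR = {normal_var(1.0, c):.3f} x sigma")
```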
Estimating the standard deviation of the portfolio's market value is analogous to the task of estimating the standard deviation of portfolio return, a task you may be familiar with from portfolio theory. Except for the fact that VaR deals with market values instead of returns, we may adapt the familiar mathematics of portfolio theory for estimating VaR.
We use a general result from probability. Suppose X1, X2, …, Xm are random variables having standard deviations σ1, σ2, …, σm and correlations ρij. Suppose another random variable Y is defined as a linear polynomial of the Xi:

Y = b0 + b1·X1 + b2·X2 + … + bm·Xm

Then the standard deviation of Y is

σY = √( Σi Σj bi·bj·ρij·σi·σj )

where the sums run over i, j = 1, …, m.
We can apply this result to estimate the standard deviation of our portfolio's market value. Suppose the portfolio has holdings ω1, ω2, …, ωm in m assets. The assets' accumulated market values at time 1 are random variables, which we denote 1S1, 1S2, …, 1Sm. Then

1P = ω1·1S1 + ω2·1S2 + … + ωm·1Sm
Since 1P is a linear polynomial of the 1Si, we can apply our result for linear polynomials to obtain 1|0σ. All we need as inputs are standard deviations and correlations for the 1Si. These might be inferred by applying methods of time-series analysis to historical price data for the assets. In some cases, this is feasible. In others, it is not. Collecting historical price data for every asset held by a portfolio may be a daunting task.
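The linear-polynomial result is straightforward to compute directly. Here is a minimal Python sketch; the function name and the example holdings and parameters are our own, purely for illustration:

```python
import math

def linear_std(b, sigmas, corr):
    """Standard deviation of Y = b0 + b1*X1 + ... + bm*Xm, given the
    Xi's standard deviations and correlation matrix. The constant b0
    has no effect on the standard deviation, so only b1..bm are passed."""
    m = len(b)
    variance = sum(
        b[i] * b[j] * corr[i][j] * sigmas[i] * sigmas[j]
        for i in range(m)
        for j in range(m)
    )
    return math.sqrt(variance)

# Hypothetical example: holdings of 100 and 50 units in two assets,
# price standard deviations 2 and 3, correlation 0.5
corr = [[1.0, 0.5], [0.5, 1.0]]
sigma_p = linear_std([100, 50], [2.0, 3.0], corr)  # about 304.1
```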
A more manageable approach may be to model the portfolio's behavior, not in terms of individual assets, but in terms of specific risk factors. Depending upon the composition of the portfolio, risk factors might include exchange rates, interest rates, commodity prices, spreads, implied volatilities, etc. We call the n modeled risk factors key factors. We denote their values at time 1 as 1R1, 1R2, …, 1Rn. The key factors comprise an ordered set (or "vector"), which we call the key vector. We denote it 1R:

1R = (1R1, 1R2, …, 1Rn)
In all likelihood, the number n of key factors we need to model will be substantially less than the number m of assets held by the portfolio.
Selecting which key factors to model is as simple—or complex!—as choosing a set of market variables such that a pricing formula for each asset held by the portfolio can be expressed in terms of those variables. That is, for each asset i, there must exist a valuation function φi such that

1Si = φi(1R)
Because the value 1P of the portfolio is a linear polynomial of the asset values 1Si, we can now express 1P in terms of the key factors:

1P = ω1·φ1(1R) + ω2·φ2(1R) + … + ωm·φm(1R)

where ωi is the portfolio's holding in asset i and φi is that asset's valuation function.
This is a functional relationship that specifies the portfolio's market value 1P in terms of the key factors 1R. Shorthand notation for the relationship is

1P = θ(1R)

We call θ the portfolio mapping function.
Consider, for example, a portfolio holding a single option on a futures contract. The sole key factor is the underlying futures price, and the portfolio mapping function is simply Black's (1976) pricing formula for options on futures. Obviously, if a portfolio holds many complicated instruments, the portfolio mapping function will be equally complicated.
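As an illustration, here is a sketch of such a mapping function for a one-option portfolio, using Black's (1976) formula. Treating the volatility, interest rate, strike, and expiry as fixed inputs leaves the futures price as the single key factor; the function and parameter names are our own:

```python
import math
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def black_call(f, k, t, r, vol):
    """Black (1976) price of a call on a future with price f,
    strike k, time to expiry t (years), rate r, and volatility vol."""
    d1 = (math.log(f / k) + 0.5 * vol * vol * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return math.exp(-r * t) * (f * N(d1) - k * N(d2))

# Portfolio mapping 1P = theta(1R1), with the futures price as 1R1
# (strike, expiry, rate, and volatility chosen purely for illustration)
theta = lambda futures_price: black_call(futures_price, k=100, t=0.5, r=0.05, vol=0.25)
```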
The portfolio mapping function θ maps the n-dimensional space of the key factors to the one-dimensional space of the portfolio's market value. Given a realization of 1R, θ gives us the corresponding value of 1P. That doesn't solve our problem. We're not interested in one possible realization of 1R. We need to characterize the entire distribution of 1P. Somehow, we need to apply the portfolio mapping function to the entire joint distribution of 1R to obtain the entire distribution of 1P. The question is: how? After all, beyond purporting its existence, we know very little about the portfolio mapping function θ. It could be some complicated function with discontinuities and other inconvenient properties.
A simple solution exists if θ is a linear polynomial, as would be the case for a portfolio with holdings in, say, Dell, IBM, and Microsoft stock, where the key factors are the three stock prices. If we assume that 1P is normally distributed and that 1|0μ = 0p, then all we need to calculate is 1|0σ. Given standard deviations and correlations for the 1Ri, we can apply our result for linear polynomials to obtain it.
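Under these assumptions (linear mapping, normality, and 1|0μ = 0p), an entire linear VaR measure fits in a few lines. The following Python sketch uses names and hypothetical inputs of our own choosing:

```python
import math
from statistics import NormalDist

def linear_var(holdings, sigmas, corr, confidence=0.95):
    """Linear VaR measure: the mapping is the linear polynomial
    1P = w1*1R1 + ... + wn*1Rn, 1P is assumed normal, and the mean
    1|0mu is assumed equal to the current value 0p, so VaR is the
    quantile multiplier times the portfolio standard deviation."""
    n = len(holdings)
    variance = sum(
        holdings[i] * holdings[j] * corr[i][j] * sigmas[i] * sigmas[j]
        for i in range(n)
        for j in range(n)
    )
    z = -NormalDist().inv_cdf(1.0 - confidence)  # 1.645 at 95%
    return z * math.sqrt(variance)

# Hypothetical three-stock portfolio: share counts, one-day price
# standard deviations, and a correlation matrix
corr = [
    [1.0, 0.6, 0.5],
    [0.6, 1.0, 0.7],
    [0.5, 0.7, 1.0],
]
var_95 = linear_var([100, 200, 150], [1.2, 0.9, 1.5], corr)
```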
But what if θ is not a linear polynomial? In our example of an option portfolio, θ is given by Black's (1976) option pricing formula. That is decidedly non-linear, so we cannot use the result for linear polynomials to obtain 1|0σ. Furthermore, we cannot reasonably assume that 1P is normally distributed. Because options limit downside risk, they skew the probability distribution of 1P. Normal distributions aren't skewed.
These issues can be understood graphically.
Consider Exhibit 2. It illustrates with two graphs the situation if a portfolio mapping function θ is a linear polynomial. The graph on the left is of θ. It shows how the value of the portfolio responds linearly to changes in a single key factor 1R1. In that graph, evenly spaced values for 1R1 have been mapped into corresponding values for 1P. The resulting values of 1P are also evenly spaced, indicating that the mapping causes no distortions. If 1R1 is normally distributed, so is 1P. That normal distribution for 1P is depicted in the graph on the right.
If the portfolio mapping function θ is non-linear, 1P may not be normally distributed. This is illustrated in Exhibit 3 with a portfolio consisting of a single call option on an underlier 1R1.
The left graph of Exhibit 3 depicts the familiar "hockey stick" price function for a call option. Evenly spaced values for 1R1 do not map into evenly spaced values for 1P. If 1R1 is normally distributed, the resulting distribution of 1P will not be normal. As shown on the right, it will be skewed. That skewness reflects the call option's limited downside risk.
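This skew can be demonstrated numerically. The toy Python simulation below (our own construction, using the option's intrinsic value at expiration rather than a full pricing formula) maps a normally distributed underlier through a call payoff and measures the skewness of the result:

```python
import random

def sample_skewness(xs):
    """Third standardized sample moment."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

rng = random.Random(42)
underlier = [rng.gauss(100, 10) for _ in range(50_000)]
# Intrinsic value of a call struck at 100: downside is floored at zero
option = [max(u - 100.0, 0.0) for u in underlier]

print(sample_skewness(underlier))  # near zero: the underlier is symmetric
print(sample_skewness(option))     # strongly positive: limited downside skews 1P
```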
Portfolios can have more complex price distributions. For example, a range forward is a long-short options position which, when applied to a short position in an underlier 1R1, behaves as illustrated in Exhibit 4.
In the left graph of Exhibit 4, we see that values of 1P cluster in two regions, resulting in the dramatically non-normal price distribution shown on the right.
These three examples illustrate how linearity of θ can simplify the task of calculating a portfolio's value-at-risk. Non-linear portfolios often exhibit unusual price distributions, which can differ markedly and in unpredictable ways from normal distributions. Such portfolios require more sophisticated modeling techniques.
Here is the general problem we face in calculating value-at-risk. To calculate VaR, we need to characterize the distribution of 1P conditional on information available at time 0. Our puzzle has two pieces:

a portfolio mapping 1P = θ(1R), which contains the portfolio information, and
a characterization of the joint distribution of 1R conditional on information available at time 0, which contains the market information.
We need to combine these two pieces of the puzzle in order to estimate the distribution of 1P. Somehow we must filter the market information contained in the characterization of the distribution of 1R through the portfolio information contained in the portfolio mapping θ.
Every VaR measure must address this problem. Accordingly, all VaR measures share certain common components related to solving this problem. All must specify a portfolio mapping. All must somehow characterize the joint distribution of 1R. All must somehow combine these two pieces to characterize the distribution of 1P. Exhibit 5 is a schematic summarizing these three processes that are common to all practical VaR measures.
Any practical VaR measure must include three procedures:

1. a mapping procedure,
2. an inference procedure, and
3. a transformation procedure.
Recall that risk has two components: exposure and uncertainty.
By specifying a portfolio mapping, a mapping procedure describes exposure. By characterizing the joint distribution of 1R, an inference procedure describes uncertainty. A transformation procedure combines exposure and uncertainty to describe the distribution of 1P, which it then summarizes with the value of some VaR metric. In so doing, the transformation procedure describes risk.
A mapping procedure accepts a portfolio's composition as an input. Its output is a portfolio mapping function θ that defines 1P as a function of 1R. Specifying θ is largely an exercise in financial engineering. Since θ must value an entire portfolio, it can be complicated. For example, if a portfolio holds 1000 exotic derivatives, θ will be extremely complicated—and may take hours to value, even on a computer. For this reason, a mapping procedure may employ certain approximations, called remappings, to simplify θ.
The purpose of an inference procedure is to characterize the joint probability distribution of the key vector 1R conditional on information available at time 0. It generally accepts historical market data as an input and applies techniques of time-series analysis to characterize that conditional distribution. Techniques currently employed tend to be crude. The most common are uniformly-weighted moving averages (UWMA) and exponentially-weighted moving averages (EWMA). What is needed are time-series methods that can address conditional heteroskedasticity in high dimensions. While research is ongoing, such methods are not yet perfected.
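For a flavor of the inference step, here is a minimal EWMA volatility estimate in Python. It is a sketch of the standard exponentially-weighted recursion; the decay factor 0.94 is a conventional choice for daily data, and the seeding with the first squared return is our own simplification:

```python
def ewma_volatility(returns, decay=0.94):
    """Exponentially-weighted moving-average volatility estimate:
    var_t = decay * var_{t-1} + (1 - decay) * r_{t-1}^2,
    so recent observations carry more weight than older ones."""
    var = returns[0] ** 2  # seed the recursion with the first squared return
    for r in returns[1:]:
        var = decay * var + (1 - decay) * r * r
    return var ** 0.5

# Hypothetical daily returns for one key factor
daily_returns = [0.012, -0.008, 0.021, -0.015, 0.004, 0.009, -0.011]
vol = ewma_volatility(daily_returns)
```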
A transformation procedure combines the outputs from the mapping and inference procedures and uses them to characterize the distribution of 1P, conditional on information available at time 0. Based on that characterization, and perhaps the portfolio's current value 0p, the transformation procedure (or "transformation") determines the value of the desired VaR metric. The result is the VaR measurement.
Much research has focused on transformation procedures. Four basic forms of transformation are used:

linear transformations,
quadratic transformations,
Monte Carlo transformations, and
historical transformations.
Linear transformations are simple and run in real time. They apply only if a portfolio mapping function is a linear polynomial. Quadratic transformations are slightly more complicated, but also run in real time (or near-real time). They apply only if a portfolio mapping function is a quadratic polynomial and 1R is joint-normal. Monte Carlo and historical transformations are widely applicable, but tend to run slowly (run times of an hour or more are not uncommon). Both employ the Monte Carlo method. They both generate a large number of realizations 1r[k] for 1R and value 1P for each. The histogram of realizations 1p[k] for 1P provides a discrete approximation for the conditional distribution of 1P. From this, any VaR metric can be valued. Monte Carlo and historical transformations differ only in how they generate the realizations 1r[k]. Monte Carlo transformations generate them with pseudorandom number generators. Historical transformations draw them from historical market data.
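A Monte Carlo transformation can be sketched in a few lines. This toy Python version (the names and the independent-normal model for the key factors are our own simplifications) generates realizations 1r[k], values 1P for each, and reads the VaR metric off the sorted losses:

```python
import random

def monte_carlo_var(theta, p0, means, sigmas, confidence=0.95,
                    trials=50_000, seed=0):
    """Approximate VaR by sampling the key vector 1R, mapping each
    realization 1r[k] through theta to get 1p[k], and taking the
    appropriate quantile of the losses p0 - 1p[k]. For simplicity,
    the key factors are drawn as independent normals."""
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        r = [rng.gauss(mu, s) for mu, s in zip(means, sigmas)]
        losses.append(p0 - theta(r))
    losses.sort()
    return losses[int(confidence * trials)]

# One linear key factor: this should approximately reproduce
# the 1.645-sigma result of the linear transformation
var = monte_carlo_var(lambda r: r[0], p0=100.0, means=[100.0], sigmas=[1.0])
```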
Traditionally, VaR measures have been categorized according to the transformation procedures they employ. There are:
linear VaR measures (other names include: parametric, variance-covariance, closed form, or delta normal VaR measures)
quadratic VaR measures (also called delta-gamma VaR measures)
Monte Carlo VaR measures
historical VaR measures (also called historical simulation VaR measures)
This naming convention may have had unfortunate consequences. By focusing attention on the role of transformation procedures, the convention tends to downplay the important roles of mapping and inference procedures. Over the past 10 years, most VaR-related research has focused on transformations. Important research on mapping and inference procedures has lagged.
To apply a VaR measure, it must be implemented in some manner. For a very simple portfolio—perhaps one comprising a single asset—a VaR measure might be implemented with pencil and paper. In actual trading environments, VaR measures are coded as software and run on computers. An implemented VaR measure is called a VaR implementation.