Intermediate
Videos are available in one organized Quant 101 Playlist on YouTube.
How to measure expected stock risk and portfolio risk contribution (19:35)
Welcome. Today's goal is to find an individual stock's expected risk and then see how the risk of a stock contributes to a portfolio's risk.
I'm Paul, and if you're like me then you understand how difficult it is to keep all of the risk calculations for stocks and portfolios straight, especially when looking at them from different timeframes.
So here we will start with how expected risk is separated into systematic risk and specific risk, then shift our attention to risk on portfolios of stocks.
All of the text and formulas for this tutorial can be found at a web page. The first link in the video's Description will take you to it.
Here is the plan for the day.
First, we will visualize a regression and the separation of stock returns with the help of the CAPM formula.
Second, we grab another formula from CAPM related to a Single-Index model and use it to decompose risk.
Third, we create a covariance matrix for a portfolio.
Fourth, we use portfolio weights and see how each stock contributes to expected risk.
And in our next episode we will focus on the decomposition of return and risk for portfolios.
In Step 1, as a refresher we will revisit how return was separated into systematic and specific components.
Before we get started, let's visit our Financial Modeling Timeframes to clarify what we're looking at here.
Historical | Expected | Forecast
We use the Historical Timeframe when looking back for two purposes. First, we use it to reconcile and present past performance.
Second, we use historical data to find what is baked-in to market expectations at the present. We call this the Expected Timeframe.
As we saw in the tutorial on the CAPM Model we used beta from a regression on historical returns, then independently set expectations for the Risk-Free Rate and Equity Risk Premium.
We plugged these figures into the CAPM formula and arrived at the expected return for a stock, finding one for Merck at 9.64%. This we assume is what every investor expects from Merck, in the Expected Timeframe.
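If it helps to see the arithmetic, here is a minimal Python sketch of that CAPM calculation. The 5% Risk-Free Rate and 4% Equity Risk Premium are assumed stand-ins for the expectations set in that earlier tutorial, chosen so the example lands on the same 9.64%; only the 1.16 beta comes from this page.

```python
# A minimal sketch of the CAPM arithmetic, not the exact spreadsheet
risk_free_rate = 0.05        # assumed expected Risk-Free Rate (stand-in)
equity_risk_premium = 0.04   # assumed expected Equity Risk Premium (stand-in)
beta_mrk = 1.16              # regression beta for Merck from this tutorial

expected_return = risk_free_rate + beta_mrk * equity_risk_premium
print(f"Expected return for MRK: {expected_return:.2%}")  # 9.64%
```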
From there, independently an active portfolio manager sets a forecast for Merck that differs from these expectations. We call this the Forecast Timeframe.
Subtracting the expected return from the forecast return creates what is called a forecast alpha. The active portfolio manager feeds this into a portfolio optimization program and trades off risk while maximizing forecast alpha. The end result is a portfolio that overweights stocks with positive forecast alphas and underweights those with negative forecast alphas.
Not only can we find a stock's total expected return as we did there, but we can also break that return into two pieces, as we saw in the tutorial on Expected Return. Doing so helps a portfolio manager analyze risk.
The first piece is correlated with the movements in the Market. This part we call systematic, so we have a systematic return and a systematic risk.
The second piece is related to company-specific events, like earnings reports, acquisitions, financial statements and press releases. This captures information related to the stock specifically. We refer to these as specific return and specific risk.
In Finance, the process of separating total returns like this goes by the rather odd term decomposing, so we speak of decomposing return and decomposing risk.
To decompose returns there, we ran a regression over the 48-month period ending 3/30/07 for Merck and generated two outputs, alpha and beta. With beta we decomposed returns for the Historical Timeframe.
There we said we could calculate a series of historical regressions and use the stream of systematic and specific returns to calculate risk measures for the Historical Timeframe.
That's fine for a backward-looking calculation, like in a performance attribution application, but here we want risk in the Expected Timeframe, so the calculation is a bit different.
So let's move on to Step 2 and decompose risk. The first part is similar to what we did for return, but we will need one more measure from the regression.
We have a table with two sections. The first is for the regression and the second is for the decomposition calculations.
First, we need to collect three regression measures for the 48-month period ending 3/30/07 for Merck against the Market using excess returns.
 | E | F | G | H
---|---|---|---|---
8 | Date | Alpha | Beta | Standard Error
9 | 3/30/07 | -0.71% | 1.16 | 0.0596
10 | | | 1.16 | 0.0583
These returns sit on the Returns data tab, which can be downloaded sign-up free, as instructed in the System Setup tutorial.
Starting with the Monthly Raw Returns we created another table of Excess Returns by subtracting the monthly return on a risk-free investment from the returns on all four stocks and the Market portfolio.
So in this regression we want the 48 monthly excess returns ending 3/30/07 for Merck that sit in column N as our dependent y-variable and those for the Market that sit in column O as our independent x-variable.
While we are here, it will save a ton of time if we highlight the 48-month range in column N and give it the name MRK_excess. Then select the range in column O and name it Market_excess.
In cell E9 we bring forward the date of our regression with =Returns!J54.
Next, in cell F9 we use =INTERCEPT(MRK_excess,Market_excess) to get the intercept, or alpha. What is more important for us is the beta in G9 with =SLOPE(MRK_excess,Market_excess).
We have been able to ignore the standard error from the regression up to this point, but for risk calculations we need the error term, as indicated in the complete equation for a line: y = alpha + beta * x + e.
The e is the residual, and its standard deviation is the standard error from the regression.
To find it, in cell H9 we use =STEYX(MRK_excess,Market_excess) and arrive at the figure of 0.0596.
We will return for the beta in G10 and standard error in H10 in a moment.
Now we're ready to decompose risk in the Expected Timeframe using the following formula:
Variance(Stock) = Beta^2 * Variance(Market) + Standard Error^2
The first term on the right is systematic risk and the second term is specific risk.
We mentioned earlier that these are given many different names. Systematic risk is often called market risk, common-factor risk or non-diversifiable risk. Specific risk is often called idiosyncratic risk, diversifiable risk, residual risk and stock-specific risk.
To compute systematic risk using variance we need to square the stock's beta and multiply it by the expected variance on the Market.
For the specific risk using variance we square the standard error.
The total of the two is the stock's variance.
 | I | J | K | L | M
---|---|---|---|---|---
8 | Market Variance | Systematic Variance | Specific Variance | MRK Variance (addition) | MRK Variance (functions)
9 | 0.00192 | 0.00257 | 0.00355 | 0.00612 | 0.00604
10 | 0.00188 | 0.00252 | 0.00340 | 0.00592 | 0.00592
Let's start by gathering the variance on the Market, and as we transition to the Expected Timeframe, we should actually arrive at this as if it were a macroeconomic forecast, much as we did with the expected Risk-Free Return and the expected Equity Risk Premium.
Here though we will take a shortcut and use the historical variance for the Expected Timeframe, using =VAR.S(Market_excess) in cell I9.
Later I'll touch on how practitioners set the expected risk on the Market.
If you noticed, we used the sample variance function because, as we look forward, we want an unbiased estimate, which divides by the number of observations less one. This pushes the variance figure above the population variance.
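As a quick illustration of that divisor difference, here is a minimal Python sketch using made-up returns; numpy's ddof argument plays the role of choosing between VAR.S and VAR.P.

```python
import numpy as np

# Made-up returns, only to show the n - 1 versus n divisor
r = np.array([0.021, -0.034, 0.012, 0.050, -0.008, 0.017])

var_sample = np.var(r, ddof=1)      # like Excel's VAR.S: divides by n - 1
var_population = np.var(r, ddof=0)  # like Excel's VAR.P: divides by n

print(var_sample > var_population)  # True: the sample figure is always the larger one
```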
Next, in cell J9 let's calculate the systematic risk with the first part of the equation, squaring beta and multiplying it by the expected Market variance, with =G9^2*I9. The systematic risk component is 0.00257.
For the specific risk piece we simply square the standard error. In cell K9 we have =H9^2 for 0.00355.
So the total expected variance for Merck is the addition of the two uncorrelated components of risk, so in L9 we have =J9+K9, for a total of 0.00612.
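For anyone who wants to check the arithmetic outside of Excel, here is a minimal Python sketch of the same decomposition using the rounded figures from row 9; small gaps versus the spreadsheet come from display rounding.

```python
# Rounded regression outputs from row 9 of the table above
beta = 1.16                 # cell G9
std_error = 0.0596          # cell H9, the STEYX() result
market_variance = 0.00192   # cell I9, VAR.S on the Market excess returns

systematic_variance = beta ** 2 * market_variance         # cell J9, ~0.00258 (0.00257 in the sheet)
specific_variance = std_error ** 2                         # cell K9, ~0.00355
total_variance = systematic_variance + specific_variance   # cell L9, ~0.00614 (0.00612 in the sheet)

print(systematic_variance, specific_variance, total_variance)
```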
If we run a sample variance on Merck using the VAR.S() function we should arrive at the same place, right? So in cell M9 we have =VAR.S(MRK_excess) for a sample variance of 0.00604.
Okay, what's going on here? Why doesn't it match? Did we do it wrong? If you are like me then you can't ignore this discrepancy, even if it is a small difference.
As financial modelers, we like when numbers tie, so in row 10 let's run through all of these same calculations and track down the difference.
We're all set with our beta calculation in cell G10 because we're running the same regression and the formula =SLOPE(MRK_excess,Market_excess) matches at 1.16. So no differences there.
Now for the error term: by default Excel calculates the standard error by taking the sum of the squared residuals from the regression, dividing that by the degrees of freedom, and taking the square root.
If you aren't clear on that point, we will see this later when we reconcile the whole regression using Excel's Data Analysis method.
Let's calculate our own standard error and instead divide by the number of observations ourselves, using population calculations instead of samples. For that we will use the LINEST() function.
Earlier, we covered the procedures for Excel's LINEST function, seeing how it offers us the ability to customize a regression. It produces output that we can either print as an array, or wrap in an INDEX() function to pull out just the component we need.
Here we are only interested in the sum of squared residuals which sits in row 5, column 2, so using row-by-column convention, 5 comma 2.
The formula in cell H10, with a resulting standard error of 0.0583, looks like: =SQRT(INDEX(LINEST(MRK_excess,Market_excess,TRUE,TRUE),5,2)/COUNT(MRK_excess))
Okay, let's walk through it. With long formulas like this it helps to review it from the inside out.
Let's spell out the regression piece. Inside LINEST() we have Merck first, followed by the Market. Then we specify that we want the regression run the normal way with the intercept, and that we want the full stats. The closing parenthesis finishes the LINEST() function.
We wrapped it in an INDEX() function to pull from the output array the item that sits in row 5, column 2, the sum of the squared residuals.
Since it was a sum, we need to divide it by 48, using the COUNT() function. All of that is wrapped in a SQRT() function because we want the standard deviation form, not the variance.
Okay, so as a result, we have a standard error of 0.0583, which is a bit lower than the original, as we expected, because we're dividing by a higher number.
Now let's proceed through the rest of the table to see if we can get these numbers to tie, but this time using population calculations.
In cell I10 we have =VAR.P(Market_excess), for a result of 0.00188, again lower than when we used the sample version of the variance calculation.
Systematic risk using variance is beta squared times this population variance, or =G10^2*I10, or 0.00252.
The specific risk using variance is the standard error squared, or =H10^2, for 0.00340.
When we add these we arrive at 0.00592 for the total population variance. Let's verify that by taking the population variance of Merck returns using =VAR.P(MRK_excess).
We get a match of 0.00592.
So what was the point here? Well, financial analysts get worried when numbers don't tie, so now we know they do when using population figures. So I'm comfortable with that now, as I hope you are.
In most modeling situations in practice, especially when making estimates about the future, it is most common to use sample statistics, so this exercise was for the person who asks why the sum of the two component pieces doesn't equal the total variance.
So where is the difference? The discrepancy lies in the fact that when we ran the original regression to arrive at the standard error, Excel by default divided by the degrees of freedom, which was 46. As a quick check, we can verify this in cell H11 with: =SQRT(INDEX(LINEST(MRK_excess,Market_excess,TRUE,TRUE),5,2)/(COUNT(MRK_excess)-2))
The answer of 0.0596 matches the result from the STEYX() function.
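If you would like to see why the population figures tie exactly, here is a minimal Python sketch with made-up data, not the Quant 101 returns. With an intercept in the regression, the residuals are uncorrelated with the x-variable, so dividing everything by n makes the decomposition exact.

```python
import numpy as np

# Made-up data, only to show why the pieces tie exactly when everything
# is divided by n rather than by n - 2 or n - 1
rng = np.random.default_rng(7)
n = 48
x = rng.normal(0.005, 0.04, n)                  # stand-in "market" excess returns
y = 0.002 + 1.2 * x + rng.normal(0.0, 0.05, n)  # stand-in "stock" excess returns

beta, alpha = np.polyfit(x, y, 1)               # like SLOPE() and INTERCEPT()
residuals = y - (alpha + beta * x)
ss_resid = np.sum(residuals ** 2)               # like LINEST row 5, column 2

se_default = np.sqrt(ss_resid / (n - 2))        # like STEYX: divide by degrees of freedom
se_pop = np.sqrt(ss_resid / n)                  # divide by n, as in cell H10

lhs = np.var(y)                                 # population variance, like VAR.P
rhs = beta ** 2 * np.var(x) + se_pop ** 2       # systematic + specific, population style
print(np.isclose(lhs, rhs))                     # True: the population figures tie exactly
print(se_default > se_pop)                      # True: the default standard error is larger
```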
If this doesn't make sense now, don't worry, it will after we walk through all calculations in the Data Analysis tutorial.
Now for Step 3, let's change our focus from individual stock risk to portfolio risk.
For this we need a covariance matrix and earlier we created one using arrays and computed portfolio variance with matrix multiplication. That is what I would suggest for calculations involving hundreds or thousands of stocks, but for a 4-stock portfolio we will create one using Excel functions instead.
Here we put the portfolio weights around the covariance matrix and in this form it is called a bordered covariance matrix.
The easiest way to do this is by naming the three ranges we haven't named already, for Microsoft, eBay and Abbott Labs. So in the end, we need five named ranges.
=Returns!K7:K54 (Microsoft)
=Returns!L7:L54 (eBay)
=Returns!M7:M54 (Abbott Labs)
=Returns!N7:N54 (Merck, already named MRK_excess)
=Returns!O7:O54 (the Market, already named Market_excess)
Now after hard-coding in 25% weights for each stock around the borders, we can populate the covariance matrix using the =COVARIANCE.S() function for each pair of stocks, using the named ranges. Down the diagonal will be variances, because the covariance of a stock's returns with itself is its variance.
 | E | F | G | H | I | J
---|---|---|---|---|---|---
14 | | | MSFT | EBAY | ABT | MRK
15 | | Weights | 25% | 25% | 25% | 25%
16 | MSFT | 25% | 0.0023 | 0.0017 | -0.0001 | 0.0009
17 | EBAY | 25% | 0.0017 | 0.0115 | -0.0005 | 0.0007
18 | ABT | 25% | -0.0001 | -0.0005 | 0.0027 | 0.0013
19 | MRK | 25% | 0.0009 | 0.0007 | 0.0013 | 0.0060
The first cell, G16, is the sample covariance between Microsoft and itself, or =COVARIANCE.S(MSFT_excess,MSFT_excess). The cell below it, G17, is the covariance between eBay and Microsoft.
The rest of the table is self-explanatory. Head back to the tutorial on the Covariance Matrix if you need a refresher.
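For readers more comfortable outside of Excel, here is a minimal Python sketch of the same idea using made-up excess returns, not the Quant 101 data; np.cov with ddof=1 plays the role of filling the grid with COVARIANCE.S().

```python
import numpy as np

# Made-up monthly excess returns (rows = months, columns = MSFT, EBAY, ABT, MRK);
# the real series sit on the Returns tab of the workbook
excess_returns = np.array([
    [ 0.021, -0.034,  0.012,  0.009],
    [-0.015,  0.080, -0.004,  0.017],
    [ 0.034, -0.021,  0.008, -0.026],
    [ 0.002,  0.045,  0.019,  0.031],
    [-0.027,  0.012, -0.011, -0.008],
])

# Sample covariance matrix, like filling the grid with COVARIANCE.S();
# the diagonal holds each stock's variance
cov_matrix = np.cov(excess_returns, rowvar=False, ddof=1)
print(cov_matrix.round(4))
```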
Now for Step 4, let's create another similar table, except this time we will calculate the products of the bordered weights times the covariances from the matrix we built in Step 3.
When we calculated portfolio risk earlier, we multiplied the weights by the covariance matrix and then by the weights again. There we used array math because that's how we'd prefer to scale this, but here instead we will use the visual approach and multiply each weight by the corresponding cell of the covariance matrix.
 | E | F | G | H | I | J
---|---|---|---|---|---|---
25 | | | MSFT | EBAY | ABT | MRK
26 | | Weights | 25% | 25% | 25% | 25%
27 | MSFT | 25% | 0.0001 | 0.0001 | 0.0000 | 0.0001
28 | EBAY | 25% | 0.0001 | 0.0007 | 0.0000 | 0.0000
29 | ABT | 25% | 0.0000 | 0.0000 | 0.0002 | 0.0001
30 | MRK | 25% | 0.0001 | 0.0000 | 0.0001 | 0.0004
So cell G27 is =G15*F16*G16, the MSFT weight on top of the matrix times the MSFT weight on the left times the MSFT-MSFT covariance. Again, from there the rest of the calculations are self-explanatory.
Now with it complete, the sum of all of the products is the total portfolio variance.
Don't worry if this doesn't look exciting, with most of the numbers not showing up even at four decimal places; it will make sense in a moment.
Now, let's sum the rows in column M, one each for Microsoft, eBay, Abbott Labs and Merck. So in cell M27 we have =SUM(G27:J27).
 | L | M | N
---|---|---|---
26 | | Contribution | Relative
27 | MSFT | 0.0003 | 16%
28 | EBAY | 0.0008 | 44%
29 | ABT | 0.0002 | 11%
30 | MRK | 0.0006 | 29%
31 | | 0.00192 | 100%
Then, when we take the sum of column M, we get 0.00192, which matches what we calculated earlier when we computed the sample variance for the Market. That's good.
So Microsoft contributes 0.0003 of the total 0.00192, or 16% of the variance for the portfolio. This is found in cell N27 by dividing the stock's contribution by the total variance, so =M27/M31.
For eBay, the contribution is 0.0008, which means it is responsible for 44% of the risk of the portfolio at these 25% weights.
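Here is a minimal Python sketch of the same contribution math, using the rounded covariance matrix from the table above and equal 25% weights; it reproduces the Contribution and Relative columns to within rounding.

```python
import numpy as np

# Covariance matrix as displayed above (rounded to four decimals),
# ordered MSFT, EBAY, ABT, MRK, with equal 25% weights
cov = np.array([
    [ 0.0023,  0.0017, -0.0001,  0.0009],
    [ 0.0017,  0.0115, -0.0005,  0.0007],
    [-0.0001, -0.0005,  0.0027,  0.0013],
    [ 0.0009,  0.0007,  0.0013,  0.0060],
])
w = np.array([0.25, 0.25, 0.25, 0.25])

portfolio_variance = w @ cov @ w              # same as summing the whole product table
contribution = w * (cov @ w)                  # row sums of the product table (column M)
relative = contribution / portfolio_variance  # column N

print(round(float(portfolio_variance), 5))    # ~0.00191 (0.00192 in the sheet, rounding)
print(contribution.round(4))                  # ~[0.0003 0.0008 0.0002 0.0006]
print((relative * 100).round(0))              # ~[16. 44. 11. 29.] percent
```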
These measures are often presented in portfolio risk analysis software programs so the portfolio manager can see which stocks contribute the most to risk.
By way of summary, we tackled two different risk calculations here. First, we decomposed a stock's risk, using variance, into two uncorrelated pieces. The resulting measures of systematic and specific risk are used in a variety of risk management applications.
To satisfy our curiosity we went through sample and population calculations to see why the summation of the systematic and specific components didn't match when using Excel's default calculations. In the end this is okay because when setting expectations we prefer higher variance figures from sample measures anyway.
Second, we shifted to portfolios and with a bordered covariance matrix we saw how an individual stock's risk contributes to total portfolio variance.
As mentioned from the beginning, we took a shortcut and used the variance on the Market from the Historical Timeframe and applied it to the Expected Timeframe. It is most common for firms to lean on third-party risk model providers for the estimates of risk that sit in portfolio risk analysis and portfolio optimization programs. Some firms, however, formulate their own estimates for Market risk when looking forward, but that is beyond the scope here in Quant 101.
In the next episode we will stick with portfolios, first by walking through the same decomposition as we have done for an individual stock but this time for both return and risk on a portfolio.
As we near the end of Chapter 6 we have seen how practitioners and scholars transition to the Expected Timeframe. After the portfolio risk work in the next two tutorials we proceed to Chapter 7 and optimize a portfolio using the Single-Index Model with Excel's Solver Add-In.
Our goal at FactorPad is to disseminate stock and portfolio risk knowledge to more individuals so they can see how institutional investors analyze risk. So if that is where you would like to go, we're happy to have you be a part of our growing group that isn't satisfied with information found elsewhere.
Please feel free to join us at any time and have a nice day.
The visuals in Excel for this tutorial in particular are important so make sure to watch the video if some of these concepts are fuzzy.
If you learned something today, please subscribe to our YouTube Channel to keep the momentum going.