24 August 2013

Default Risk Modeling: Little Experiment (Part 4: Modeling – Conclusion)

This is the fourth post in the series of articles on modeling U.S. corporate default rates. So far we have:
  • Introduced modeling dataset and a number of potential default risk indicators (Part 1: Data),
  • Discussed modeling method and looked into the correlations of the defined set of initial variables (Part 2: Modeling – Start), and
  • Developed three alternative models to choose from (Part 3: Modeling).
Now we are going to test the alternative models, make our choice and then use the final model for forecasting.

(Reminder: this series of articles is deliberately sort of “geeky stuff”, even if simplified and without the language of PhDs which I – truth be told – can barely read myself.)
 
Methodological note

A model may perform excellently in the modeling sample, but that is by no means enough to assume it does a reasonably good job when it comes to forecasting the future. Before choosing what we think is the best model out of the three alternatives we have by now, we'd want to evaluate each model’s out-of-sample performance. That is: as we have built the models on the data of 1991-2012, we ought to compare the forecasted and realized default rates for 2013 onwards. The problem is that we can only do that sometime in the future, when we actually know the default rate for 2013.

This is often the case with small modeling samples: to have a sufficient number of observations (not that we really do, but anyway), you may need to use all of the data for model development.

Therefore I’m conducting a simple test. I’m dividing the modeling dataset into two periods of equal length, each covering 11 years: 1991-2001 and 2002-2012. Then I’m calibrating each of the alternative models based only on the data of one period (i.e. assigning new model betas using just those 11 years – with the exception of Model 2, for which I use 13 years of data as 11 observations do not suffice for calibrating a model with this many variables), and checking:
(a) What is the difference between the model’s beta coefficients when estimated on all data (the betas in the proposed model) and when estimated on a sub-set of the data – the smaller that difference, the more stable the model parameters;
(b) What is the (average) difference between the model’s predictions and the out-of-sample predictions of the re-calibrated model – the smaller that difference, the more consistent the results our model can be assumed to provide for the coming few years as well. (Alternatively, you may compare the predictions of the re-calibrated model with the realized default rates, but if the model is stable, that difference is more or less the same as the one we already observed in the in-sample analysis.)
So I can see which one of the three alternative models is the most stable over time.
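To make the procedure concrete, here is a minimal sketch of this stability check in Python. The DataFrame layout, column names such as default_rate, and the use of ordinary least squares via statsmodels are my assumptions, not the exact setup behind the original calculations:

```python
import numpy as np
import statsmodels.api as sm

def stability_check(df, predictors, target="default_rate", split_year=2001):
    """Re-calibrate the model on one sub-period and compare it to the full-sample fit.

    df is assumed to be indexed by year (1991..2012); 'predictors' lists the
    model's input variable columns. Returns (a) the relative differences in the
    betas and (b) the mean absolute difference between the full model's and the
    re-calibrated model's predictions on the other sub-period.
    """
    y = df[target]
    X = sm.add_constant(df[predictors])

    in_sample = df.index <= split_year      # e.g. 1991-2001
    out_sample = ~in_sample                 # e.g. 2002-2012

    full_fit = sm.OLS(y, X).fit()
    sub_fit = sm.OLS(y[in_sample], X[in_sample]).fit()

    # (a) parameter stability: how far the sub-period betas drift from the full-sample betas
    beta_rel_diff = (sub_fit.params - full_fit.params) / full_fit.params

    # (b) prediction consistency on the years not used for re-calibration
    pred_full = full_fit.predict(X[out_sample])
    pred_sub = sub_fit.predict(X[out_sample])
    mean_abs_diff = np.mean(np.abs(pred_full - pred_sub))

    return beta_rel_diff, mean_abs_diff
```

The same function would be called once per alternative model (with its own predictor list) and once per sub-period, flipping which half is used for re-calibration – which is exactly the comparison summarized in the tables and charts below.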
  
One might improve this test by re-sampling the data in various ways – re-sampling and simulation techniques do provide some additional information, even if they can only go as far as the information contained in the input data and assumptions. For now, as an experiment, I’ll limit myself to this one single cut-off point: 1991-2001 / 2002-2012. Both datasets (1991-2001 and 2002-2012) capture a peak in the default rates (2001 and 2009, respectively); each should be more or less representative of one cycle, while the models with re-calibrated beta values can be tested out-of-sample in predicting the other.

Further, a sensitivity analysis ought to be conducted to get a better “feel” for how the model predictions depend on the specific values of the input variables. Then it is more or less a matter of one’s qualitative judgment whether the results are logical or not. For now, I’ll do that part only for the finally selected model.

An additional way to validate modeling results is to do some benchmarking, i.e. compare our forecasts with those of rating agencies and other financial market participants. We might try, but who says that the others are more precise? It may well be that they are biased as a group. Subjectively, I would not base my model selection on the “mainstream opinion” unless my own research provides confirmation.

Test results for the alternative models

I expected to see rather sizable differences in model parameters and predictions when I re-calculated them based on just 11 years of data; I suspected that since the models’ precision in the in-sample analysis was this high – particularly for the peak years 2001 and 2009 – the models were calibrated to those peaks and most probably highly unstable. (Remember the graphs entitled “Comparison: Actual versus Predicted Values” in Part 3?) To my surprise, I found that the models remained quite stable; furthermore, I found that a model developed on the data of 1991-2001 would have displayed notable precision in forecasting the peak in the U.S. corporate default rates in 2009. But let’s take it step by step.

The following group of tables summarizes for each of our three alternative models:
  • Model betas (parameters) together with the 95% confidence intervals for the linear regression coefficients – the first group of columns;
  • Model betas based on the defined subsets of data (1991-2001 and 2002-2012, respectively; as indicated, there is a slight difference for Model 2 for mathematical reasons) – the second group of columns;
  • Difference between the minimum and maximum beta, as an absolute value and relative to the beta of the original model – the third group of columns.
This analysis implies that in addition to its superior in-sample performance, Model 3 also has the most stable parameters. Indeed, despite the very few data points, for this model the betas based on the sub-sets of data almost all remain within the statistical 95% confidence intervals – and would be entirely within the limits if we used slightly broader intervals, for example 99% instead of 95%.
 
 
[Tables: full-sample betas with 95% confidence intervals, betas re-estimated on the 1991-2001 and 2002-2012 sub-sets, and min-max beta differences for each model]
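The interval check behind these tables can be sketched as follows, building on the fitted results objects from the sketch above; the exact procedure used for the tables is not spelled out here, so treat this as an approximation:

```python
import numpy as np

def betas_within_ci(full_fit, sub_fit, alpha=0.05):
    """Flag, per coefficient, whether the sub-period beta falls inside the
    full-sample confidence interval (alpha=0.05 -> 95%, alpha=0.01 -> 99%)."""
    ci = np.asarray(full_fit.conf_int(alpha=alpha))   # shape (k, 2): lower / upper bounds
    betas = np.asarray(sub_fit.params)
    return (betas >= ci[:, 0]) & (betas <= ci[:, 1])
```

Calling it with alpha=0.01 widens the intervals, which is the “99% instead of 95%” comparison mentioned above.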

Just to point it out: before this test I was subjectively inclined towards Model 2 because its design looks... hmm, somewhat more in line with the usual way of thinking about corporate default risk. (That’s an illustration of how the human mind works: we tend to cling to what we have been taught, what we are used to thinking and what the people around us are suggesting.) But it seems that I have to change my mind...

The following figures, depicting the agreement (or disagreement) between each model’s predictions and the out-of-sample predictions of the same model when the betas are assigned based on a sub-set of data, clearly confirm the superiority of Model 3 over Model 2. Specifically: in the left-side charts you can compare these two sets of predictions when the model parameters are re-calculated based on the period 1991-2001 (1991-2003 for Model 2) and the out-of-sample test is performed on the years 2002-2012; the right-side charts show the same for the modeling sub-set of 2002-2012 (2000-2012 for Model 2) and the validation data of 1991-2001. To aggregate the information about each test into one number, I have calculated the average of the absolute differences between the two lines on each graph. In this analysis, Model 1 actually appears even slightly better than Model 3.


[Charts: each model’s predictions versus the out-of-sample predictions of the re-calibrated models, for the 1991-2001 and 2002-2012 sub-sets]

Model selection

We have gathered the following information for choosing between the three alternative models:
  • Rank-ordering starting from the best based on in-sample performance: (a) Model 3 (Adjusted R-Squared: 0.949; Average forecast error: 16.6%), (b) Model 2 (Adjusted R-Squared: 0.940; Average forecast error: 19.4%), and (c) Model 1 (Adjusted R-Squared: 0.924; Average forecast error: 21.8%).
  • The in-sample chi-square test (I did not discuss it, but in principle it provides a statistical basis for deciding whether the predicted and the actual/realized default rates are close enough) would reject Model 1, but accept Model 2 and Model 3.
  • Model 3 has the most stable parameters, followed by Model 1 and Model 2 respectively.
  • Model 2 underperforms the others in the simulated out-of-sample test.
This pretty much leaves us with Model 3. Yeah, even if I may not exactly like its design (recall the comments in Part 3), it seems to do a damn good job at forecasting.
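For completeness, the summary statistics behind this comparison could be computed along the following lines. The adjusted R-squared comes straight from the regression output; taking the average forecast error as the mean absolute percentage error and running the chi-square test on model-implied default counts are my assumptions about the exact definitions:

```python
import numpy as np
from scipy import stats

def selection_stats(fit, y_actual, n_issuers):
    """Model selection statistics (a sketch, not the original calculations).

    fit       -- a fitted statsmodels OLS results object
    y_actual  -- realized default rates per year
    n_issuers -- assumed (roughly constant) number of rated issuers per year
    """
    y_pred = fit.fittedvalues
    adj_r2 = fit.rsquared_adj

    # average forecast error, defined here as the mean absolute percentage error
    avg_error = np.mean(np.abs(y_pred - y_actual) / y_actual)

    # chi-square goodness-of-fit on the default counts implied by the rates
    observed = np.asarray(y_actual) * n_issuers
    expected = np.asarray(y_pred) * n_issuers
    chi2_stat = np.sum((observed - expected) ** 2 / expected)
    p_value = stats.chi2.sf(chi2_stat, df=len(observed) - 1)

    return adj_r2, avg_error, p_value
```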

So that’s the model:
 
 
As a reminder, the meaning of the variables:
  • (Intercept) – not an input variable by definition; it always has the same fixed value (which may effectively be zero as well);
  • DF_t.1_FF – a qualitative variable describing the change in realized default rates during the last year (the 12 months preceding the forecast period) while taking into account the Federal funds rate as an indicator of reversion in the credit cycle; in practice (as the Fed funds rate has been above its trend since 2011 – remember the discussion in Part 2), it takes the value “1” if the default rate has increased during the last 12 months compared to the default rate in the year before, and “0” otherwise (e.g. if we want to predict the default rate for 2013, we need to look at how the default rate in 2012 changed compared to 2011);
  • VIX_t.1_Dec – the fear index VIX, representing the market's expectation of stock market volatility, as of the last month before the start of the forecast period (e.g. if we wanted to predict the default rate for 2013, we would use the VIX as available in December 2012);
  • FedFundsRate – a qualitative variable describing the Federal funds rate; (in theory) it can take the values “2”, “1” and “0” depending on where the rate has stood relative to the trend line during the past few years (read the detailed descriptions of the variable categories in Part 2); in practice, as long as the Fed keeps its long-term inflation target at around 2% (or defines anything below 3-4% as a target), it is stuck at the level “2” for predictions from 2014 onwards.
Note that the time lags are such that we can make model-based predictions for the next 12-month horizon; anything beyond that relies on assumptions about the model input variables themselves.
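Mechanically, the selected model is just a linear combination of these inputs. A minimal sketch of the prediction step (the beta values below are placeholders to be replaced with the estimated coefficients, which appear only in the beta tables above):

```python
def predict_default_rate(df_t1_ff, vix_t1_dec, fed_funds_rate,
                         betas=(0.0, 0.0, 0.0, 0.0)):
    """Model 3 forecast for the next 12 months (a sketch).

    df_t1_ff       -- 1 if the default rate rose over the last 12 months
                      (with the Fed funds rate above trend), else 0
    vix_t1_dec     -- VIX level in the last month before the forecast period
    fed_funds_rate -- categorical indicator 0 / 1 / 2 ("2" = riskiest level)
    betas          -- (intercept, b_df, b_vix, b_ffr); placeholder zeros here,
                      to be replaced with the estimated coefficients
    """
    b0, b_df, b_vix, b_ffr = betas
    return b0 + b_df * df_t1_ff + b_vix * vix_t1_dec + b_ffr * fed_funds_rate
```

For the 2013 forecast, for example, the first two inputs would be the 2012-versus-2011 change in default rates and the December 2012 VIX reading, as described above.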

Sensitivity analysis


Ironically, considering the amount of work done, our selected model has only a few moving parts as far as the future is concerned:
  • The general mood of the financial markets as reflected by the VIX (as well as by changes in credit standards – it is just that the variable Credit_standards_t.1_4Q did not end up in the model due to its remarkably high correlation with the VIX);
  • Increase or decrease in realized default rates during the past year.
No matter the Fed’s interest rate decisions (at least as long as the Fed funds rate does not go negative, which is hard to imagine, and/or inflation does not pick up considerably, which would be a complete failure of the Fed in its duty to ensure stable prices), the Fed funds rate indicators in the model are effectively fixed at the riskiest levels starting from the predictions for 2014.

Accordingly, the following graph illustrates the model-predicted default rates given different values of the input variables, assuming that the Fed funds rate indicators remain at the riskiest levels (i.e. that the Fed funds rate is above its trend line for the variable DF_t.1_FF, and that FedFundsRate = “2”). As can be seen, as long as the VIX remains within its historical boundaries of between 10 and 60 (the VIX was introduced at the end of the 1980s / beginning of the 1990s), the forecast ranges from 2.5% to 7.1%. For comparison: the peak default rate in 1933 was nearly 8.5%, while it remained below 6% in 2009 (data: Moody’s, “Annual Default Study: Corporate Default and Recovery Rates, 1920-2012”). That sounds quite reasonable to me.
 

 
The picture looks a bit lame, though. Instead of all the modeling stuff, couldn’t we just have drawn a simple regression line describing the historical correlation between the VIX and the default rates? Well, it’s not the same: such a simple regression line would have had quite a different slope; it would not have taken into account the fact that the Fed has effectively exhausted the Fed funds rate as a monetary policy tool, which has impacted the default rates over the observation period.
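For what it’s worth, the sweep behind that chart can be reproduced roughly as follows; the slope and offset below are back-solved from the 2.5%-7.1% range quoted above (with the Fed funds indicators held at their riskiest levels), not taken from the actual model estimates:

```python
import numpy as np

# Linear response of the predicted default rate (in %) to the VIX, with
# DF_t.1_FF = 1 and FedFundsRate = 2 held fixed at their riskiest levels.
# The coefficients are back-solved from the quoted 2.5%-7.1% range over
# VIX 10-60; they are placeholders, not the estimated model betas.
slope = (7.1 - 2.5) / (60 - 10)   # ~0.09 percentage points per VIX point
offset = 2.5 - slope * 10         # ~1.6%

for vix in np.linspace(10, 60, 11):
    print(f"VIX {vix:4.0f} -> predicted default rate {offset + slope * vix:.1f}%")
```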

Model predictions (2013 and the rolling 12 months up to Q2 2014)

Without any further comments or judgments at this point, the table below summarizes what the model says about U.S. corporate default rates for 2013 and for the rolling 12 months from 1 July 2013 to 30 June 2014. I have included the values of the model input variables and, for comparison purposes, also detailed information about the past few years starting from 2007 (the low point of U.S. corporate default rates over 30 years).

As you can see, based on the data as available at the end of 2012 / beginning of 2013, the model suggests that the U.S. corporate default rate will be 2.7% in 2013; based on the data as available at the end of June / beginning of July 2013, the rolling 12-month forecast from Q2 2013 to Q2 2014 is 3.9%. These numbers compare to realized default rates of 1.3% in 2012, 1.1% in 2011, 1.6% in 2010, and so on going backwards in time.

(For further reference: an annual default rate of 2.7% would translate into approximately 80 defaults of S&P rated companies in a year; accordingly, a default rate of 3.9% would mean ca. 115 S&P rated companies defaulting. Both figures assume that the total number of rated companies does not change significantly compared to 2012.)
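Spelled out, the count arithmetic behind the parenthesis above works as follows (the issuer count is implied by the 2.7%-to-80-defaults relation, not an exact S&P figure):

```python
# Implied number of S&P rated issuers, backed out from "2.7% ~ 80 defaults"
n_issuers = 80 / 0.027                # roughly 2,960 rated companies
defaults_2013 = 0.027 * n_issuers     # ~80 defaults (by construction)
defaults_q2_2014 = 0.039 * n_issuers  # ca. 115 defaults for the rolling 12 months
print(f"{n_issuers:.0f} issuers, {defaults_2013:.0f} and {defaults_q2_2014:.0f} defaults")
```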

Benchmarking and rationale


Whether we believe the others or not, it is definitely interesting to compare our model’s forecasts to those provided by rating agencies, major banks, analyst firms etc. Therefore I have collected a bunch of predictions and opinions.

(For reference – as we are well into 2013 – by the end of Q2, S&P had downgraded 28 U.S. corporate issuers to default. Source: S&P, “U.S. Corporate Outlook And Quarterly Rating Actions: …” 17-Jul-2013. The simple annualized figure is 28*2 = 56 defaults. So where might the extra 24 defaults come from? Let’s see…)

In May 2013, Standard & Poor's Global Fixed Income Research group estimated the U.S. trailing-12-month speculative-grade corporate default rate by the end of March 2014 as follows:
  • In the base scenario – 3.3% (up from the 2.5% trailing-12-month level ending in June 2013), which translates into ca. 50-55 defaults for all corporate issuers;
  • In the optimistic scenario – 2.5%, which means ca. 38-40 defaults for all corporate issuers;
  • In the pessimistic scenario – 4.9%, which corresponds to 75-80 defaults for all corporate issuers (this one is quite in line with our forecast for 2013).
The base scenario assumes an improving real economy (real GDP growth of 2.7% in 2013 and 3.1% in 2014). It also assumes unchanged U.S. monetary policy, including continuation of the Fed’s large-scale asset purchases (which a number of market participants now believe will not be the case).

In its “Annual Default Study: Corporate Default and Recovery Rates, 1920-2012” (issued on 28 February 2013), Moody’s anticipated the speculative-grade default rate for 2013 to:
  • ease from 3.3% to 2.7% in the baseline forecast (which would correspond to ca. 1.1-1.2% for the portfolio of all corporate issuers), but
  • reach as high as 8% (approximately 3.3-3.5% for the portfolio of all corporate issuers) or even higher in the pessimistic forecast.
Compared to its baseline forecast at the beginning of the year, the rating agency seems to have become more pessimistic about 2013 (even if it is still optimistic about 2014). Here is a quote from a 23 July 2013 MarketWatch blog post:
“Given the fact that rising yields pulled some junk-bond issuers out of the markets, Moody’s Investors Service expects its U.S. speculative-grade default rate to rise to 3.2% by November, from 2.9% at the end of the second quarter, according to Lenny Ajzenman, lead analyst of Moody’s U.S. Corporate Default Monitor, published Tuesday.”

On 22 July 2013 UBS suggested buying U.S. junk bonds as an investment idea: “The strength of the US corporate sector, our outlook for very low default rates over the next six and twelve months and ongoing central bank support make US high-yield bonds an attractive investment for the coming months. […] We expect the default rate to remain far below two percent through the year. […] The corporate re-leveraging cycle is at an early stage.”

[A quick comment on UBS’s “investment idea”:
I would not be all that confident about the corporate re-leveraging cycle being at an early stage. In absolute terms, U.S. non-financial corporations have more debt than ever, and in relative terms, the debt-to-assets ratio is well above the 2007 level – even if below the 2009 peak. Furthermore, according to the Flow of Funds balance sheet statistics (release: June 6, 2013), the improvement in the debt-to-assets ratio derives only from financial assets, the prices of which have been artificially inflated by the Fed’s ultra-easy monetary policies.
It may be that UBS is “playing the hot potato game” and trying to get rid of some of the junk in its own portfolio. Just a thought…]

On 18 July 2013, Bloomberg reported: “Corporate defaults will rise as the Federal Reserve considers curtailing stimulus measures, according to a survey by the International Association of Credit Portfolio Managers. […] The Fed’s bond purchasing program “has lifted assets above what they normally would be and that has done a lot to dampen defaults,” Adrian Miller, the director of fixed-income strategy at GMP Securities LLC, said in a July 16 telephone interview from New York. “Companies that would have been washed out were given a lifeline.””

In short: the consensus expectation is that U.S. corporate default rates will remain pretty much unchanged over the coming 6-12 months compared to 2012; this expectation is based on baseline macroeconomic forecasts and the assumption that the Fed continues its current accommodative monetary policies. There are, however, warnings about things turning out quite differently.

The predictions of the model developed during this little experiment fall at roughly the 75th percentile on the scale of pessimism (a subjective assessment). There are clear reasons for corporate defaults to pick up.

Here is a moderate warning from the WSJ: “With global economies sluggish and sales growth at a crawl, big U.S. companies have had one route to push profits higher: cut costs and squeeze suppliers. That strategy may be running out of steam.” (“Corporate Profits Lose Steam,” 28 Jul 2013) Further, some have calculated that S&P 500 June-quarter earnings growth was negative when excluding the financials. (See for example page 4 of the FactSet Earnings Insight from 9 August 2013: “Blended Earnings Growth is 2.1%, but Falls to -3.1% excluding the Financials Sector”.) Guidance for Q3 2013 is that negative EPS preannouncements clearly exceed positive ones (from the same FactSet Earnings Insight: “For Q3 2013, 68 companies have issued negative EPS guidance and 17 companies have issued positive EPS guidance.”) – even if the analyst consensus seems rather optimistic about 2014.

The reality is that the so-called “zombie companies” – companies that need constant bailouts in order to operate, as well as indebted companies that are able to pay the interest on their debts but not reduce the debts themselves – are not only a Japanese disease or a European problem, but now also a very real cause of concern for the U.S. I’d say that effectively, virtually all CCC and lower rated companies (or equivalents) depend on the mercy of creditors, central banks and governments, as they “will default without an unforeseen positive development” and/or depend “upon favorable business, financial, or economic conditions” because of their “highly leveraged” financial profile (see e.g. S&P, “General Criteria: Criteria For Assigning 'CCC+', 'CCC', 'CCC-', And 'CC' Ratings”). In other words: the earnings of such companies are not much higher than their interest expenses, if at all. In Q2 2013 there were around 20 downgrades of S&P rated companies from “B” ratings to “C” ratings, compared to ca. 10 upgrades from “C” ratings to “B” ratings. (Source: S&P, “U.S. Corporate Outlook And Quarterly Rating Actions: …” 17-Jul-2013; Table 7.) Thus, the second quarter alone has given us 10 new potential defaulters.

Any squeeze in such companies’ earnings (including from declining asset prices) and/or a rise in interest rates would inevitably cause extra defaults, at least without extra forbearance from the creditors’ side. However, given the current (still) loose credit conditions, it may indeed take a bit longer than our model foresees. Then again: bond spreads and the fear index are reacting with upward jumps to the Fed’s cautious signals about the possibility of tapering the bond-buying program in the coming months. Note that given the Fed’s nearly 60-times-leveraged balance sheet, the central banking system of the U.S. does not have many options but to taper in the near term or lose its credibility altogether.



In any case, if the present state of affairs is not highly speculative, then I don’t know what is. Yet IF (and only if) the Fed manages to convince financial markets that its extensive purchases of treasury bonds and mortgage-backed securities have not been all that important (in other words, that such purchases can be tapered without significant negative consequences), and remains fairly moderate with interest rates, an outright credit crunch might be avoided and U.S. corporate default rates could stay (slightly) below 4% at their highest levels in 2014-2016. However, in order to avoid the “Japanese disease”, “voluntary” restructurings of debt and controlled defaults are necessary. The latter vastly complicates any statistical modeling, as companies that effectively default may not be recorded as defaulted in the databases.

This hmm… somewhat gloomy perspective concludes this little modeling exercise. Thank you for caring.


______

Sharing financial knowledge with those who bother to pay attention – as best I can…
