Background: The most widely followed type of financial risk is the risk of bankruptcy. Spectacular failures like those of Enron, Parmalat, and Barings, and more recently Lehman Brothers, make eye-catching headlines. But the many other, lesser failures are frequent enough to yield a body of statistical data that is of great interest to investment professionals. These data have been studied by the rating agencies Standard & Poor's and Moody's, who use them to evaluate their systems of associating credit ratings (from triple-A to single-C) with bonds.
Unlike many areas of finance, studies of bankruptcy statistics and the associated credit rating migrations often use discrete mathematical tools such as Markov chains and elementary probabilistic concepts like the hazard and survival functions. Yet combining economic ideas with these concepts can produce quite sophisticated statistical models.
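To make the connection concrete, the sketch below shows how a Markov chain yields cumulative default probabilities and the corresponding survival and hazard functions. The one-year transition matrix over a coarse rating scale is purely hypothetical; real estimates would come from the agencies' migration data.

```python
import numpy as np

# Hypothetical one-year transition matrix over coarse ratings
# (A, B, C, Default); the numbers are illustrative, not estimates.
states = ["A", "B", "C", "D"]
P = np.array([
    [0.90, 0.08, 0.015, 0.005],
    [0.05, 0.85, 0.08,  0.02 ],
    [0.01, 0.10, 0.79,  0.10 ],
    [0.00, 0.00, 0.00,  1.00 ],   # default (D) is absorbing
])

def default_curve(P, start, horizon):
    """Cumulative default probability by the end of each year up to `horizon`."""
    dist = np.zeros(len(states))
    dist[states.index(start)] = 1.0
    cum = []
    for _ in range(horizon):
        dist = dist @ P                     # propagate the rating distribution one year
        cum.append(dist[states.index("D")])
    return np.array(cum)

cum_def = default_curve(P, "B", 10)
survival = 1.0 - cum_def                    # survival function S(t)
# hazard(t) = P(default in year t | survived to the start of year t)
hazard = np.diff(np.concatenate([[0.0], cum_def])) / np.concatenate([[1.0], survival[:-1]])
print("cumulative default:", np.round(cum_def, 4))
print("annual hazard:     ", np.round(hazard, 4))
```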
Task: Explore the impact of the macroeconomic environment on the frequency of upgrades and downgrades of an institution's credit rating. The insights gained in this exploration will be used to construct a statistical model for rating transition probabilities. Computational tools will include statistical modeling and data-mining software.
Data Sources: Some aggregate data are publicly available from the rating agencies and from the Federal Reserve Board's database. Larger databases may be obtainable from appropriate institutions, subject to confidentiality issues.
Bookmarks: the unemployment rate and the high-yield spread over Treasuries as macroeconomic drivers for transition probabilities.
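One way the task might be approached is sketched below under strong assumptions: a panel of issuer-year observations with a downgrade indicator, merged with the unemployment rate and the high-yield spread at the start of each year. The file and column names are placeholders, not part of any actual data set.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per issuer-year, with an indicator for a
# downgrade during the year and the macro environment at the start of it.
df = pd.read_csv("transitions.csv")   # columns: issuer, year, downgraded, unemp, hy_spread

# Logistic regression of downgrade frequency on the two macro drivers.
X = sm.add_constant(df[["unemp", "hy_spread"]])
model = sm.Logit(df["downgraded"], X).fit()
print(model.summary())

# Implied downgrade probability in a benign vs. stressed macro environment.
scenarios = pd.DataFrame({"const": 1.0,
                          "unemp": [4.5, 9.0],
                          "hy_spread": [3.0, 15.0]})
print(model.predict(scenarios))
```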
Failures of Risk Management
Background: Much has been written about how common risk-management techniques such as Value at Risk (VaR) failed to guide financial institutions around the potholes that have been revealed by the current global financial crisis. A common theme in this criticism is that the techniques are based on statistical models fitted to historical data; thus, they can be said to be always managing the last crisis instead of the next one. The suggestion is that each new crisis has its own dimensions, and history can provide no guide to managing the accompanying risk.
However, the failure of current methods to gauge the magnitude of the risks to which banks have been exposed in recent years may be caused not by their dependence on statistical models fitted to historical data, but by the way those models were specified. The assumption of normally distributed changes is clearly inappropriate, given the well-known non-normality of most financial data, yet it still appears to be widely used. Most methods can be modified, without great complication, to replace the normal distribution with a member of Student's t family of distributions, sometimes with dramatic effects on the resulting estimates of risk.
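As an illustration of the point, the sketch below fits both a normal and a Student's t distribution to the same return history and compares the implied VaR. The returns here are simulated stand-ins; in practice they would come from the historical series described under Data Sources below.

```python
import numpy as np
from scipy import stats

# Simulated daily returns stand in for a real historical series.
rng = np.random.default_rng(0)
returns = stats.t.rvs(df=4, scale=0.01, size=2500, random_state=rng)

# Fit both a normal and a Student's t distribution to the same history.
mu, sigma = stats.norm.fit(returns)
nu, loc, scale = stats.t.fit(returns)

# One-day VaR at the 99% level under each fitted distribution; the
# difference between the two tends to be largest at extreme confidence levels.
alpha = 0.01
var_normal = -stats.norm.ppf(alpha, loc=mu, scale=sigma)
var_t = -stats.t.ppf(alpha, nu, loc=loc, scale=scale)
print(f"99% VaR, normal fit:    {var_normal:.4f}")
print(f"99% VaR, Student-t fit: {var_t:.4f}")
```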
VaR is also open to the criticism that it measures the size of a loss that will be exceeded with a given probability (such as 0.05) but ignores the magnitudes of the losses that exceed that amount. This weakness has been known and discussed in the academic community for over a decade, and simple alternatives (Expected Shortfall, for example) that are not open to the same criticism have been developed. Despite these developments, VaR still seems to be the method of choice.
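The distinction is easy to state in code. The following sketch computes historical-simulation VaR and Expected Shortfall from the same sample of returns; ES averages the losses beyond the VaR threshold rather than ignoring them. The simulated returns are again only a stand-in for real data.

```python
import numpy as np

def var_es(returns, alpha=0.05):
    """Historical-simulation VaR and Expected Shortfall at level alpha.

    VaR is the loss exceeded with probability alpha; ES is the average
    loss conditional on exceeding that threshold.
    """
    losses = -np.asarray(returns)
    var = np.quantile(losses, 1 - alpha)
    es = losses[losses >= var].mean()
    return var, es

# Illustrative use on simulated heavy-tailed returns.
rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=2500) * 0.01
var, es = var_es(returns, alpha=0.05)
print(f"95% VaR: {var:.4f}   95% ES: {es:.4f}")
```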
Task: Explore the types of probability distribution that best fit the day-to-day or month-to-month changes in various financial data (stock prices and bond yields, for example). Examine the consequences of the choice of distribution on the value of commonly used measures like VaR and Expected Shortfall. Determine whether more appropriate analysis of historical data could have provided better estimates of risk.
Data Sources: Databases with historical stock and bond data can be accessed through the NCSU Libraries. Ideally, price data for structured products like Mortgage Backed Securities, Collateralized Debt Obligations, and Credit Default Swaps would also be used; access to these is still being explored.
Background: The S&P/Case-Shiller Home Price Indices are a closely watched collection of measures of changes in the value of residential housing. They show a precipitous 25% decline from a peak in mid-2006 to early 2009. The decline in house values that this represents has led to sharp increases in loan delinquencies, beginning in sub-prime mortgages and progressing to Alt-A mortgages and now credit card debt. The resulting stress on lenders will only increase as long as the decline in house prices continues.
The Case-Shiller indices are adjusted neither for inflation nor for changes in wealth, and from a statistical point of view they exhibit non-stationarity. However, an adjusted series, such as the ratio of a home price index to per-capita personal income, could be expected to show stationary behavior, varying around a long-term mean. Statistical analysis of such a series could be used to explore questions such as how long, and to what level, the price decline will continue.
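A minimal sketch of that adjustment, together with a standard stationarity check, is given below. It assumes the two monthly series have already been downloaded and aligned on the same dates; the file and column names are placeholders.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Hypothetical inputs: the monthly Case-Shiller index and a per-capita
# personal income series; names are placeholders for the assembled data.
hpi = pd.read_csv("case_shiller.csv", index_col="date", parse_dates=True)["index"]
income = pd.read_csv("income_per_capita.csv", index_col="date", parse_dates=True)["income"]

# Income-adjusted index: ratio of home prices to per-capita personal income.
ratio = (hpi / income).dropna()

# Augmented Dickey-Fuller test: a small p-value is evidence that the adjusted
# series is stationary, i.e. mean-reverting around a long-run level.
stat, pvalue, *_ = adfuller(ratio)
print(f"ADF statistic: {stat:.3f}, p-value: {pvalue:.3f}")
```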
Task: Identify appropriate modifications of the indices, and build statistical models to predict their future values. Develop answers to the questions of how long, and to what level, the recent price declines will continue. Use conventional techniques as far as possible, and use simulations to answer questions that those techniques cannot.
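One possible route, continuing from the income-adjusted ratio constructed above, is to fit a simple mean-reverting AR(1) model and simulate forward paths to gauge how long and how far the decline might plausibly continue. This is a sketch of the approach under those assumptions, not a calibrated forecast.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg

# Income-adjusted index (monthly), built as in the previous sketch;
# file and column names are placeholders.
hpi = pd.read_csv("case_shiller.csv", index_col="date", parse_dates=True)["index"]
income = pd.read_csv("income_per_capita.csv", index_col="date", parse_dates=True)["income"]
ratio = (hpi / income).dropna()

# Fit a mean-reverting AR(1): x_t = c + phi * x_{t-1} + e_t.
fit = AutoReg(ratio, lags=1, trend="c").fit()
const, phi = fit.params
sigma = fit.resid.std()

# Simulate forward monthly paths and summarize how far the series falls
# before reverting toward its long-run mean c / (1 - phi).
rng = np.random.default_rng(2)
horizon, n_paths = 60, 10_000
paths = np.empty((n_paths, horizon))
for i in range(n_paths):
    x = ratio.iloc[-1]
    for t in range(horizon):
        x = const + phi * x + rng.normal(0.0, sigma)
        paths[i, t] = x

# Median and 5th-percentile trajectories at yearly steps over five years.
print(np.percentile(paths, [5, 50], axis=0)[:, 11::12])
```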
Data Sources: The indices are published monthly by Standard & Poor's. Comparison series such as Personal Income may be found in the Federal Reserve Board's database.