Return to RiskMetrics
The Evolution of a Standard
In October 1994, the risk management group at J.P. Morgan took the bold step of revealing its internal risk management methodology through a fifty-page technical document and a free data set providing volatility and correlation information for roughly twenty markets. At the time, there was little standardization in the marketplace, and the RiskMetrics model took hold as the benchmark for measuring financial risk. In the subsequent years, as the model became a standard for educating the industry as well, the demands for enhancements and advice grew. We continued to develop the model, and by mid-1998, the Technical Document had been updated three times, with the last release (the fourth edition, or RiskMetrics Classic) tipping the scales at almost 300 pages; more timely updates and advances had come in the form of thirteen RiskMetrics Monitors; and the free dataset had expanded to cover foreign exchange, equity, fixed income, and commodities in 33 countries. Demand for a straightforward implementation of the model arose as well, leading to the development of our first software product, FourFifteen.
In 1998, as client demand for the group's risk management expertise far exceeded the firm's internal risk management resources, RiskMetrics was spun off from J.P. Morgan. We have maintained our commitment to transparency and have continued to publish enhancements to the RiskMetrics methodology. In total, we have now distributed approximately 100,000 physical copies of the various versions of the Technical Document, and still consistently provide over 1,000 electronic versions each month through our website. Meanwhile, the RiskMetrics datasets are still downloaded over 6,000 times each month.
Clearly, standards do not remain static as theoretical and technological advances allow for techniques that were previously impractical or unknown, and as new markets and financial products require new data sources and methods. We have faced these issues; the methodology employed in our second- and third-generation market risk applications represents a significant enhancement of the RiskMetrics model as documented in RiskMetrics Classic. Additionally, our experience, and the experience of the industry as a whole, has taught that a single risk statistic derived from a single model is inadequate, and as such, we have emphasized the use of alternative risk measures and stress tests in our software. So, while our model has evolved, and now represents a standard for the year 2001, the basic documentation still represents a standard for the year 1996, and a good deal has changed since then.
Looking back, we can divide the material covered in RiskMetrics Classic into three major pieces. The first of these, covered in Part One, contains the applications of the measures, or the "why" of risk measurement. In this area, regulatory standards have changed, as have disclosure and management practices. To address these changes, and to provide insight into risk management practices without delving into modeling details, we published Risk Management: A Practical Guide in 1999.
A second area, covered in Part Four of RiskMetrics Classic, concerns the market data that serves as the key input to the model. As we have covered more and broader markets, the data aspect of RiskMetrics has perhaps expanded more than any other area. We have formed a separate data service, DataMetrics, which now warehouses close to 500,000 series. Acknowledging the critical nature of this service, and its status as a product in itself, we will soon publish the DataMetrics Technical Document. This document covers the market data sources used by DataMetrics; the methods used to enhance the quality of the data, such as outlier identification, fitting of missing data, and synchronization; and the analytics employed for derived data, such as bootstrapped yield curves.
The third area, covered in Parts Two and Three of RiskMetrics Classic, is the mathematical assumptions used in the model itself. Although we have made significant enhancements to the models as represented in our software, our documentation has lagged this innovation and, unfortunately, RiskMetrics Classic, as a representation of our software, is slightly underwhelming. In other words, a self-contained statement of the standard risk model does not exist today. The first goal of Return to RiskMetrics, then, is to rectify this problem by documenting the updated market-standard risk methodology that we have actually already implemented.
As well as this update, we have seen the need to clarify a number of misconceptions that have arisen as a result of the acceptance of RiskMetrics Classic. Practitioners have come to equate Value-at-Risk (VaR), the variance-covariance method, and RiskMetrics. Thus, it is common that pundits will criticize RiskMetrics by demonstrating that VaR is not an appropriate measure of risk. This is really a criticism of the use of a percentile to measure risk, but not a criticism of the model used to compute the measure. At the same time, we hear critics of VaR who claim the method is deficient because it captures only linear positions. This is not a criticism of the risk measure, but rather of the classic RiskMetrics variance-covariance method used to compute the measure. To be clear, we state that VaR is not RiskMetrics, and, in fact, is a risk measure that could even be an output of a model at odds with our assumptions. By the same token, RiskMetrics is not VaR, but rather a model that can be used to calculate a variety of risk measures. Finally, RiskMetrics is not a single set of computational techniques and approximations, such as the linear portfolio assumption or the Monte Carlo procedure. Rather, RiskMetrics encompasses all of these within a hierarchy of solution techniques for the fundamental model.
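As a rough illustration of this distinction between model and measure (a sketch of ours, not part of the RiskMetrics specification), the snippet below draws one-day P&L scenarios from a single normal return model and then reads off two different risk measures from the same simulated distribution; the position size, volatility, and confidence level are all hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    position_value = 1_000_000.0       # hypothetical exposure to a single risk factor
    daily_vol = 0.01                   # hypothetical 1% daily return volatility
    pnl = position_value * rng.normal(0.0, daily_vol, size=100_000)   # simulated one-day P&L

    # Two different risk measures read off the same modeled distribution
    var_95 = -np.percentile(pnl, 5)          # 95% Value-at-Risk: the 5th percentile loss
    es_95 = -pnl[pnl <= -var_95].mean()      # 95% expected shortfall: mean loss beyond VaR
    print(f"95% VaR: {var_95:,.0f}   95% ES: {es_95:,.0f}")

The point of the sketch is simply that the model (here, normally distributed returns) and the risk measure (a percentile, a tail expectation, or something else) are separate choices.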
A final goal of this exercise is one of introspection. We have spoken of clarifying what RiskMetrics is not; there also lies the more difficult task of illuminating what RiskMetrics is. In a very strict sense, RiskMetrics is two fundamental and battle-tested modeling assumptions: that returns on risk factors are normally distributed, and that volatilities of risk factors are best estimated using an exponentially weighted moving average of past returns. These two assumptions carry over from RiskMetrics Classic. Since the volatility estimation procedure has not changed, and since its explanation in RiskMetrics Classic is clear, we will not repeat the discussion in this document. On the other hand, though the normality assumption has not changed, we have seen the need to present it differently for clarity. In Chapter 2, we state the assumptions more technically and discuss two frameworks to calculate risk measures within the model: the closed-form approach, which is simpler but requires more approximations, and the Monte Carlo approach, which is more exact but also more burdensome. Of course, two assumptions do not make a risk model, and even with these assumptions stated, the model is not complete. For instance, it is still necessary to specify the risk factors, to which we have devoted Chapter 1, and the instrument pricing functions, to which we have devoted Chapter 5.
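For readers who do not have RiskMetrics Classic at hand, the exponentially weighted moving average estimator referenced above updates the variance forecast recursively as sigma_t^2 = lambda * sigma_{t-1}^2 + (1 - lambda) * r_{t-1}^2, where r_{t-1} is the most recent return and lambda is the decay factor (0.94 for daily data in RiskMetrics Classic). The sketch below (ours, using made-up return data) applies this recursion and then uses the normality assumption to obtain a closed-form 95% VaR for a single linear position; multi-factor portfolios and the Monte Carlo framework are the subject of Chapter 2.

    import numpy as np
    from scipy.stats import norm

    returns = np.array([0.004, -0.012, 0.007, -0.003, 0.010])   # hypothetical daily returns
    lam = 0.94                                                   # daily decay factor from RiskMetrics Classic

    # EWMA recursion: sigma_t^2 = lam * sigma_{t-1}^2 + (1 - lam) * r_{t-1}^2
    var_fcst = returns[0] ** 2               # seed the recursion with the first squared return
    for r in returns[1:]:
        var_fcst = lam * var_fcst + (1.0 - lam) * r ** 2
    vol_fcst = np.sqrt(var_fcst)

    # Closed-form 95% one-day VaR for a linear position under the normality assumption
    position_value = 1_000_000.0             # hypothetical exposure
    var_95 = position_value * norm.ppf(0.95) * vol_fcst
    print(f"EWMA volatility forecast: {vol_fcst:.4%}   closed-form 95% VaR: {var_95:,.0f}")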
More generally, a risk model does not make a risk management practice. This brings us to a broader definition of RiskMetrics: a commitment to the education of all those who apply the model through clear assumptions and transparency of methods. Only by understanding the foundation of a model, and by knowing which assumptions are driven by practical needs and which by modeling exactitude, can the user know the realm of situations in which the model can be expected to perform well. This philosophy has motivated our restatement and clarification of the RiskMetrics modeling assumptions. Additionally, it has motivated us to discuss complementary modeling frameworks that may uncover sources of risk not revealed by the standard model. Chapters 3 (historical simulation) and 4 (stress testing) are thus included not as an obligatory nod to alternate approaches, but rather as necessary complements to the standard statistical model. Only through a combination of these is a complete picture of risk possible.
Throughout this document, our goal is the communication of our fundamental risk-modeling framework. However, in the interest of brevity, and to avoid overly taxing the patience of our readers, we have stayed away from delving into details that do not add to the basic understanding of our approach. For instance, in Chapter 5, we have chosen not to catalogue all of the instruments that we cover in our software application, but rather have provided a detailed look at a representative set of instruments that illustrate a broad range of pricing approaches: fixed cash flows, floating rate cash flows, options with closed-form pricing solutions, and options requiring Monte Carlo or tree-based pricing methods.
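To give a flavor of the two simplest of these pricing approaches (a minimal sketch of ours, not the Chapter 5 treatment), the snippet below prices a single fixed cash flow by discounting at a zero rate and a European call option with the Black-Scholes closed-form formula; all rates, maturities, and option parameters are hypothetical.

    import numpy as np
    from scipy.stats import norm

    # Fixed cash flow: discount a known payment at a continuously compounded zero rate
    cash_flow, zero_rate, years = 1_000_000.0, 0.05, 2.0         # hypothetical inputs
    pv_fixed = cash_flow * np.exp(-zero_rate * years)

    # European call option: Black-Scholes closed-form price (hypothetical parameters)
    spot, strike, vol, rate, expiry = 100.0, 105.0, 0.20, 0.05, 1.0
    d1 = (np.log(spot / strike) + (rate + 0.5 * vol ** 2) * expiry) / (vol * np.sqrt(expiry))
    d2 = d1 - vol * np.sqrt(expiry)
    call_price = spot * norm.cdf(d1) - strike * np.exp(-rate * expiry) * norm.cdf(d2)

    print(f"PV of fixed cash flow: {pv_fixed:,.0f}   Black-Scholes call price: {call_price:.2f}")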
We recognize that following on RiskMetrics Classic, even if only in a focused treatment as we have written here, is a humbling task. We hope that this document is as useful in the year 2001 as RiskMetrics Classic was in 1996. To the readers of the old document, welcome back. We appreciate your continued interest, and thank you in particular for the feedback and questions over the last five years that have helped mold this new document. To those of you who are new to RiskMetrics in particular and risk modeling in general, we hope that this document gives you a solid understanding of the field, and happily invite questions, comments, and criticisms.
January 1, 2001