That's an old saying from the early days of computer science: if you process bad data, you get bad results. This was part of the problem at Standard & Poor's when it came to rating mortgage-backed securities and the derivatives based on them. S&P developed, but never implemented, new algorithms and databases, which meant the ratings it gave were based on incomplete, outdated information.
Frank Raiter, a former executive at S&P, gave this testimony today before the House Committee on Oversight and Government Reform:
In 1995, S&P used a rules-based model for determining the loss expected on a given bond. Late that year, it was decided to move to a statistics-based approach, and we began gathering data to come out with the first model, which was based on approximately 500,000 loans with performance data going back five years.
That version of the LEVELs model was implemented in 1996 and made available for purchase by originators, investment banks, investors and mortgage insurance companies.
By making the model commercially available, S&P committed to maintaining parity between the model it ran internally and the answers given to the investors and issuers who purchased the model.
In other words, S&P promised model clients that they would always get the same answers from the LEVELs model that the rating agency got.
Implicit in this promise was S&P's commitment to keep the model current. In fact, the original contract with the model consultant called for annual updates to the model based on a growing database.
An update was accomplished in late 1998 and 1999, and that model was ultimately released. That version was built on 900,000 loans -- I'm going to speed this up a little bit.
We developed two more iterations of the model, one with 2.5 million loans and one with 10 million loans. In a nutshell, those versions of the model were never released.
While we had enjoyed substantial management support up to this time, by 2001 the stress for profits and the desire to keep expenses low prevented us from, in fact, developing and implementing the appropriate methodology to keep track of the new products.
As a result, we didn't have the data going forward in 2004 and '05 to really track what was happening with the subprime products and some of the new alternative payment type products, and we did not, therefore, have the ability to forecast when they started to go awry.
As a result, we did not by that time have the support of management to implement the analytics that, in my opinion, might have forestalled some of the problems that we are experiencing today.
This seems to prove that there were people in the ratings agencies who KNEW that the computer models in use needed updating but weren't given the money to do so.
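To make the "garbage in, garbage out" point concrete, here's a toy sketch in Python. The numbers and field names are made up for illustration; this is not S&P's actual LEVELs methodology. A statistics-based model estimates expected loss from historical loan performance, so a model trained on a stale database that never saw subprime or alternative-payment loans will badly understate losses on the pools that actually contained them:

```python
# Toy illustration of "garbage in, garbage out" in a statistics-based
# loss model. All figures are invented, not S&P's.

def expected_loss(loans):
    """Estimate expected loss as default rate x average loss severity
    over a pool of historical loan records."""
    if not loans:
        return 0.0
    defaults = [loan for loan in loans if loan["defaulted"]]
    default_rate = len(defaults) / len(loans)
    avg_severity = (
        sum(loan["loss_severity"] for loan in defaults) / len(defaults)
        if defaults else 0.0
    )
    return default_rate * avg_severity

# Stale database: only traditional prime loans, with low default experience.
stale_data = (
    [{"defaulted": False, "loss_severity": 0.0}] * 95
    + [{"defaulted": True, "loss_severity": 0.4}] * 5
)

# What 2004-05 pools actually contained: subprime and alternative-payment
# loans defaulting more often and more severely.
current_data = (
    stale_data
    + [{"defaulted": True, "loss_severity": 0.5}] * 20
    + [{"defaulted": False, "loss_severity": 0.0}] * 30
)

print(f"Model trained on stale data:   {expected_loss(stale_data):.1%} expected loss")
print(f"Model trained on current data: {expected_loss(current_data):.1%} expected loss")
# Stale data says 2.0%; current data says 8.0%. Same model, different garbage.
```

The point isn't the arithmetic, it's the dependency: the estimate is only as good as the loan database behind it, which is exactly the database Raiter says S&P stopped maintaining.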
October 22, 2008 Wednesday
REP. HENRY A. WAXMAN HOLDS A HEARING ON CREDIT RATING AGENCIES
COMMITTEE: HOUSE COMMITTEE ON OVERSIGHT AND GOVERNMENT REFORM
SPEAKER:
REP. HENRY A. WAXMAN, CHAIRMAN
LOCATION: WASHINGTON, D.C.
WITNESSES:
JEROME FONS, FORMER EXECUTIVE, MOODY'S CORPORATION
FRANK RAITER, FORMER EXECUTIVE, STANDARD & POOR'S
SEAN EGAN, MANAGING DIRECTOR, EGAN-JONES RATINGS
DEVEN SHARMA, PRESIDENT, STANDARD & POOR'S
RAYMOND MCDANIEL, CHAIRMAN AND CEO, MOODY'S CORPORATION
STEPHEN JOYNT, PRESIDENT AND CEO, FITCH RATINGS