Statistics

Company Stock And Bond Ratings: Skill and Influential Forecasts

If I tell you that it’s not going to rain tomorrow, and it doesn’t, then I have given you a successful forecast. If I repeat this success a few times, then you might come to think that I know what I’m talking about, that I might have a secret method that lets me look into the life of clouds. And, if the presence or absence of rain meant something to you, you might be willing to pay me a tidy fee for my prognostications.

But not if my forecasts were for Tucson, Arizona, a place where it hardly ever rains. Why would you pay for something which is (nearly) obvious?

Part of what makes a good forecast, then, is difficulty. Accurate forecasts of events which are difficult to predict are more valuable than accurate forecasts of events which are easy to predict.
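This "difficulty" point can be made concrete with a standard verification measure such as the Brier score. In the sketch below the numbers are invented for illustration: in a dry climate, a forecaster who says "no rain" every day earns an excellent raw score, yet shows no skill once we compare him against simple climatology (always forecasting the base rate).

```python
# Brier score: mean squared error between probability forecasts and
# binary outcomes (0 = no rain, 1 = rain); lower is better.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

# Hypothetical desert record: rain on 5 of 100 days.
outcomes = [1] * 5 + [0] * 95

# "No rain, ever" forecaster: probability 0 every day.
naive = [0.0] * len(outcomes)

# Climatology forecaster: always the base rate, 0.05.
climo = [0.05] * len(outcomes)

bs_naive = brier(naive, outcomes)   # 0.05
bs_climo = brier(climo, outcomes)   # 0.0475

# Skill score relative to climatology: positive means genuine skill,
# zero or negative means the forecaster adds nothing over the obvious.
skill = 1 - bs_naive / bs_climo
print(bs_naive, bs_climo, round(skill, 3))
```

The naive forecaster's raw Brier score looks impressive (0.05), but his skill score is slightly negative: he is, if anything, a touch worse than just quoting the climate. That is the sense in which easy forecasts are worth little.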

Now let’s suppose that we’re still in Tucson, and my forecast is instead for rain, an event difficult to predict accurately. You ask, “Is this forecast for tomorrow or the next day?” And I answer, “Oh, let’s not bother about details. Just wait and see.”

Obviously, if we wait long enough it will eventually rain in Tucson. My reframing this statement as a “prediction” is just as obviously worthless.

Therefore, another dimension of forecast quality is precision. Exactly when does this forecast hold? You might think this too trivial to mention, and that nobody would ever pay for a prediction with amorphous bounds, but this is not so, as we shall see.

OK, still in Tucson, and still a forecast of rain for the next day. But now you learn that I am the proud owner of Briggs’s Cataract Cloud Seeding (“Success Guaranteed!”). How much would you pay for my forecast of rain?

If I have the power to cause, or to substantially influence, the event which I am predicting, then you should not pay much. You might, of course, pay me to do the influencing or causing, but you wouldn’t pay for my forecast. This notion is made even stronger when you consider that I might have an interest in the thing that I am forecasting (and am able to influence).

To summarize: An unskillful or low-value forecast is one which predicts something easy, is vague in its details, and is for an event which the forecaster might influence. Nobody would pay for such unskillful forecasts, right?

Wrong. They often do, usually in the form of paying a trading fee for buying a stock recommended by a broker.

Consider: a broker tells you that stock in company ABC is doing well, its “financials are sound”, and so forth. He is not quite saying, but hinting strongly, that the stock price will rise. By how much and over what period of time is never specified. The details, that is, are amorphous.

The stock might be part of a sector, which might include the entire market, which is embedded in a general upswing. For example, the technology sector in the late 1990s, or the housing sector in the mid-2000s. Forecasting that the price of a stock will increase in such times requires no skill (similarly, forecasting that it will decrease after the bubble bursts also requires no skill).

Worse, the company for which the broker works might not be disinterested. The broker’s prediction might be influential, but you might not be aware of the influence.

The situation is better for bonds, because of their built-in maturity. But even bonds are not entirely trouble free. Some bonds pay regular interest, others don’t. And bonds are often incorporated into complex financial “instruments” which are much like stocks in their behavior.

Firms like Moody’s issue company bond ratings such as “Aaa” and “Aa”, the latter given to a bond said to be “high quality by all standards.” The ratings are, of course, forecasts and should be treated as such.
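If ratings really are forecasts, they can be verified like forecasts. One minimal check is calibration by rating bucket: observed default frequency should fall as the rating rises. The sketch below uses entirely invented default counts (the bucket names follow Moody's style, but the numbers are hypothetical), and it tests only this ordering; as discussed next, influence can contaminate even a calibration check like this one.

```python
# Hypothetical (invented) five-year default histories by rating bucket,
# stored from best rating to worst as (defaults, bonds rated).
history = {
    "Aaa": (0, 500),
    "Aa":  (2, 400),
    "Baa": (9, 300),
    "B":   (30, 200),
}

# Observed default frequency per bucket.
rates = {r: d / n for r, (d, n) in history.items()}

# Ratings behave like sensible forecasts only if default frequency
# rises monotonically as the rating falls.
ordered = list(rates.values())
assert ordered == sorted(ordered), "ratings not monotone in default rate"

for r, p in rates.items():
    print(f"{r}: {p:.1%}")
```

With these made-up numbers the ordering holds (0% for Aaa up to 15% for B). A real verification would also ask whether the levels themselves, not just their order, match what the agency's rating definitions imply.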

And they are influential forecasts, too. It’s not that Moody’s, or any other bond-rating agency, necessarily has an interest in the company which issued the bond; rather, it’s people’s perception of the bond, given Moody’s rating, that is influential. That is, the company that issued the bond might do better, and thus have a higher chance of paying off the bond, because it garnered a high rating.

Investors look at the bond rating and say to themselves, “Since this company has such a high rating, they must be doing well. I’ll invest.” And those investments, of course, can cause the company to do well, which thus justifies the ratings (forecasts).

As you can imagine, it can be extraordinarily tricky to prove that the rating (forecast) was skillful in the presence of influence.

There are two problems here: how to tell if a bond or stock rating is skillful, and how influence affects skill. We’ll look at these another time.


7 replies

  1. Analyzing data (statistics, etc.) is certainly useful, but all too commonly the source of the data is implicitly considered valid when it isn’t. Also, understanding the physics, or psychology, of a situation described by data is very often sufficient to enable one to assess the merits of the data for further analysis, or dismiss it outright.

    Mandelbrot’s book, “The Misbehavior of Markets…” has a nice description of the patterns of the ups & downs of a given stock, sector, or stock market overall that he demonstrates to be indistinguishable from random variation–at least over the short term. Long term trends are another thing entirely.

    After reading that, and exploring the info further on one’s own, it’s hard to take any stock market prognosticator seriously.

    Amazon link: http://www.amazon.com/Misbehavior-Markets-Fractal-Financial-Turbulence/dp/0465043577/ref=pd_sim_b_1

    Also consider, “Blood on the Street….” for a summary of how conflicts of interest render much of the seemingly objective analytical data highly biased, even fraudulent: http://www.amazon.com/Blood-Street-Sensational-Generation-Investors/dp/0743250230/ref=sr_1_1?s=books&ie=UTF8&qid=1283861829&sr=1-1

  2. Matt,

    Another large concern with the credit rating agencies like Moody’s and S&P is that their clients are the same people they’re supposed to be rating objectively!

  3. Unlike weather forecasts, stock prognostication can be self-fulfilling. Look at the dot-com boom. It came about mostly because of hype and had quite a ride until reality set in. The key to success in the market is the ability to predict the crowd mood as a whole. It’s mostly a bet on human nature. People must not be very predictable if it’s so hard to predict stock prices.

  4. People must not be very predictable if it’s so hard to predict stock prices.

    It’s not so much people as their interactions when there are too many of them.
    It’s like predicting the individual trajectories of molecules in a volume, but much harder, because the laws are vague and not very deterministic.
    Then either you are lucky and there are robust statistical properties, as is the case for molecules, and you get the highly skillful statistical forecasts of statistical thermodynamics.
    Or you are not so lucky, there are no robust statistical properties, as is the case with people, and you are left with deterministic chaos that looks from time to time like it might be ergodic, until you get a surprise and understand that it is not ergodic at all.

  5. I remember when I was reprimanded by my old boss for making a forecast that included both a level and a time frame. Market strategists rarely give both.

    If you are a fundamental analyst, your forecast of stock prices is a function of financial variables such as sales, operating margins, future investment needs, cash flow and dividends. These variables in turn depend on macro variables such as GDP, inflation, interest rates, currency exchange rates, and levels of future taxation. It is difficult to make an accurate projection for any of these 3 months into the future. No one makes accurate forecasts more than 1 year into the future. And some pricing models depend on forecasts of the underlying variables years into the future.

    If you are a technical analyst, you believe that emotion drives the market, and the value of a security is a function of how popular it will be somewhere down the line. This is sometimes referred to as the ‘greater fool theory’: you didn’t overpay if there is a greater fool you can sell to. The problem with this theory is right on its face — you may be the greatest fool.

    JM Keynes likened stock picking to the newspaper beauty pageant. The UK newspapers used to run contests where young ladies would send their photos to the newspaper, and the readers would vote for who was prettiest. The girl that won received a prize, and those that selected her also won a prize. To win the prize, the readers shouldn’t vote for the one they think is prettiest; they should vote for the one they think is going to win. It isn’t too far off from the technician’s point of view.

    No one makes forecasts that reflect the way that markets truly behave, i.e. some number of companies will double in value in a short amount of time, and some will go bankrupt. Analysts write up their forecast for the central tendency, which is not the actual direction the price will take. Future prices do not form a normal distribution around a central tendency. They boom and bust.

    Hari Seldon’s premise of psycho-history is bunk. People do not follow the laws of gases. Atoms do not have emotions, panic, form herds, or spontaneously gain (and lose) momentum.

    If you ran an investment bank in the 1990s with a rational tech analyst, one who said that valuations were ridiculous, you would have fired him and hired one closer to the herd. Two reasons for this: 1) stragglers get killed, and 2) a positive outlook helped to acquire underwriting business. Despite the ‘Chinese Wall’ there is a fundamental conflict of interest. Banks need bullish analysts. One more consequence of this conflict is the screwed-up language: ‘Hold’ means sell. And I have seen many an analyst pat himself on the back for downgrading a name or a sector to ‘Hold’ before a precipitous drop. If their clients had followed their advice, they would still be broke.

    The conflict that Moody’s and S+P face is far less dangerous. Bond ratings do not forecast the success of the issuer; they reflect the relative probabilities of default based on its current balance sheet. The NRSROs have done a fine job of this with sovereign, municipal and corporate default rates. Junk bonds do default more frequently than investment-grade bonds, although corporate bonds default more frequently than munis or sovereigns with the same rating. The NRSROs had a noticeable miss in 2001–2002 with WCOM and Enron, when two large A-rated names went under. Their defense was that they make their determinations on audited financial statements, and it is not their job to detect fraud. They missed spectacularly on the structured-finance side of the business. This wasn’t due to a conflict of interest, but instead due to inadequate models, and to banks knowing how to game the models.

  6. Doug M: Thanks for the very interesting post.

    Do you feel that Moody’s and S&P were criminally guilty in the financial collapse of Lehman Brothers, etc. by not understanding the underlying risks of their securities?

    Also, I rather enjoyed the Seldon psycho-history stuff. It makes a good story.

  7. I don’t think that S+P sank Lehman. Lehman sank itself. Lehman was run by big boys who had a fiduciary responsibility to know what they were buying. They created a lot of the paper that blew themselves up. They knew that they were gaming S+P. Lehman had better analysts and more talent than S+P.
