
Category: Statistics

The general theory, methods, and philosophy of the Science of Guessing What Is.

December 5, 2017 | 9 Comments

Free Data Science Class: Predictive Case Study 1, Part IV

Review!

Code coming next week!

Last time we decided to put ourselves in the mind of a dean and ask for the chance of CGPA falling into one of these buckets: 0, 1, 2, 3, 4. We started with a simple model to characterize our uncertainty in future CGPAs, which was this:

    (5) Pr(CGPA = 4 | grading rules, old observation, fixed math notions).

Now the “fixed math notions” means, in this case, that we use a parameterized multinomial probability distribution (look it up anywhere). This model, via the introduction of non-observable, non-real parameters (little bits of math necessary for the equations to work out), gives the probability of belonging to one of the buckets, of which there are here 5: 0 through 4.

The parameters themselves are the focus of traditional statistical practice, in both its frequentist and Bayesian flavors. This misplaced concentration came about for at least two reasons: (a) the false belief that probabilities are real and thus so are parameters, at least “at” infinity and (b) the mistaking of knowledge of the parameters for knowledge of observables. The math for parameters (at infinity) is also easier than looking at observables. Probability does not exist, and (of course) we now know knowledge of the parameters is not knowledge of observables. We’ll bypass all of this and keep our vision fixed on what is of real interest.

Machine learning (and AI etc.) have parameters for their models, too, for the most part, but these are usually hidden away and observables are primary. This is a good thing, except that the ML community (we’ll lump all non-statistical probability and “fuzzy” and AI modelers into the ML camp) created for themselves new errors in philosophy. We’ll start discussing these this time.

Our “fixed math notions”, assumptions we made with good reason, include selecting a “prior” on the parameters of the model. We chose the Dirichlet; many others are possible. Our notions also selected the model. Thus, as is made clear in the notation, (5) is dependent on the notions. Change them, change the answer to (5). But so what? If we change the grading rules we also change the probability. Changing the old observations also changes the probability.

There is an enormous amount of hand-wringing about the priors portion of the notions. Some of the concern is with getting the math right, which is fine. But much is because it is felt there are “correct” priors somewhere out there, usually living at Infinity, and if there are right ones we can worry we might not have found the right ones. There are also many complaints that (5) is reliant on the prior. But (5) is always reliant on the model, too, though few are concerned with that. (5) is dependent on everything we stick on the right hand side, including non-quantifiable evidence, as we saw last time. That (5) changes when we change our notions is not a bug, it is a feature.
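
To see the feature in action, here is a minimal sketch (my own illustration, not code from the class) computing the predictive probabilities for the five buckets under two different Dirichlet priors, using the approximation derived in Part III below. The sparser prior’s values are arbitrary.

```python
# A minimal sketch, assuming the Dirichlet-multinomial posterior
# predictive from Part III; the sparser prior's values are arbitrary.
import numpy as np

counts = np.array([0, 0, 0, 1, 0])   # one old observation: a CGPA of 3

def predictive(counts, alpha):
    # Posterior predictive Pr(new CGPA = j | old data, Dirichlet(alpha))
    return (counts + alpha) / (counts.sum() + alpha.sum())

flat = np.ones(5)                # the "flat" Dirichlet(1, ..., 1) prior
sparse = np.full(5, 0.1)         # a different, sparser prior
print(predictive(counts, flat))      # [1/6, 1/6, 1/6, 2/6, 1/6]
print(predictive(counts, sparse))    # different numbers entirely
```

Change the prior and the answer to (5) changes, exactly as changing the grading rules or the old observations would.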

The thought among both the statistical and ML communities is that a “correct” model exists, if only we can find it. Yet this is almost never true, except in those rare cases where we deduce the model (as is done in Uncertainty for a simple case). Even deduced models begin with simpler knowns or assumptions. Any time we use a parameterized model (or any ML model) we are making more or less ad hoc assumptions. Parameters always imply lurking infinities, either in measurement clarity or in numbers of observations, infinities which will always be lacking in real life.

Let’s be clear: every model is conditional on the assumptions we make. If we knew the causes of the observable (here CGPA; review Part I) we could deduce the model, which would supply extreme probabilities, i.e. 0s and 1s. But since we cannot know the causes of grade points, we can instead opt for correlation models, as statistical and ML models are (any really complex model may have causal elements, such as in physics etc., but these won’t be completely causal and thus will be correlational in output).

This does not mean that our models are wrong. A wrong model would always misclassify and never correctly classify, and it would do so intentionally, as it were. This wrong model would be a causal model, too, only it would purposely lie about the causes in at least some instances.

No, our models are correlational, almost always, and therefore can’t be wrong in the sense just mentioned; neither can they be right in the causal sense. They can, however, be useful.

The conceit of statistical modelers is that, once we have in hand correlates of our observables, which in this case will be SAT scores and high school GPAs (HGPA), and if we make our sample size large enough, we’ll know exactly how SAT and HGPA “influence” CGPA. This is false. At best, we’ll sharpen our predictive probabilities to some extent, but we’ll hit a wall and go no further. This is because SAT scores do not cause CGPAs. We may come to know all there is to know about some uninteresting parameters inside some ad hoc model, but this certainty will not transfer to the observables, which may remain as murky as ever. If this doesn’t make sense, the examples will clarify it.

The similar conceit of the ML crowd is that if only the proper correlates are measured, and (as with the statisticians) measured in sufficient number, all classification mistakes will disappear. This is false, too, because unless we can measure all the causes of each and every person’s CGPA, the model will err in the sense that it will not produce extreme probabilities. Perfect classification is a chimera—and a sales pitch.

Just think: we already know we cannot know all the precise causes of many contingent events. For example, quantum measurements. No parameterized model nor the most sophisticated ML/AI/deep learning algorithm in the world, taking all known measurements as input, will classify better than the simple physics model. Perfection is not ours to have.

Next time we finally—finally!—get to the data. But remember we were in no hurry, and that we purposely emphasized the meaning and interpretation of models because there is so much misinformation here, and because these are, in the end, the only parts that matter.

December 4, 2017 | 2 Comments

There Is No “Problem” Of Old Evidence In Bayesian Theory

Update: I often do a poor job setting the scene. Today we have the solution to an age-old problem (get it? get it?), a “problem” thought to be a reason not to adopt (certain aspects of) Bayesian theory or logical probability. I sometimes think solutions are easier to accept if they are at least as difficult as the supposed problems.

I was asked to comment by Bill Raynor on Deborah Mayo’s article “The Conversion of Subjective Bayesian, Colin Howson, & the problem of old evidence”.

Howson is Howson of Howson & Urbach, an influential book that showed the errors of frequentism, but then introduced a few new ones due to subjectivity. We’ve talked time and again about the impossibility that probability is subjective (where probability depends on how many scoops of ice cream the scientist had before taking measurements), but we’ve never yet tackled the so-called problem of old evidence. There isn’t one.

Though there is no problem of evidence, old or new, there are plenty of problems with misleading notation. All of this is in Uncertainty.

The biggest error, found everywhere in probability, is to write down only part of the evidence one has for a proposition, and then to let that information “float”, so that one falls prey to equivocation.

Mayo:

Consider Jay Kadane, a well-known subjective Bayesian statistician. According to Kadane, the probability statement: Pr(d(X) >= 1.96) = .025

“is a statement about d(X) before it is observed. After it is observed, the event {d(X) >= 1.96} either happened or did not happen and hence has probability either one or zero” (2011, p. 439).

Knowing d0= 1.96, (the specific value of the test statistic d(X)), Kadane is saying, there’s no more uncertainty about it.* But would he really give it probability 1? If the probability of the data x is 1, Glymour argues, then Pr(x|H) also is 1, but then Pr(H|x) = Pr(H)Pr(x|H)/Pr(x) = Pr(H), so there is no boost in probability for a hypothesis or model arrived at after x. So does that mean known data doesn’t supply evidence for H? (Known data are sometimes said to violate temporal novelty: data are temporally novel only if the hypothesis or claim of interest came first.) If it’s got probability 1, this seems to be blocked. That’s the old evidence problem. Subjective Bayesianism is faced with the old evidence problem if known evidence has probability 1, or so the argument goes.

Regular readers (or those who have understood Uncertainty) will see the problem. For those who have not yet read that fine, award-eligible book, here is the explanation.

To write “Pr(d(X) > 1.96)” is to make a mistake. The proposition “d(X) > 1.96” has no probability. Nothing has a probability. Just as all logical arguments require premises, so do all probabilities. Here the premises are missing; they are later supplied in different ways, and equivocation occurs. In this case, deadly equivocation.

We need a right hand side. We might write

     (1) Pr(d(X) > 1.96 | H),

where H is some compound, complex proposition that supplies information about the observable d(X), and what the (here anyway) ad hoc probability model for d(X) is. If this model allows quantification, we can calculate a value for (1). Unless that model insists “d(X) > 1.96” is impossible or certain, the probability will be non-extreme (i.e. not 0 or 1).

Suppose we actually observe some d(X_o) (o-for-observed). We can calculate

     (2) Pr(d(X) > d(X_o) | H)

and unless d(X_o) is impossible or certain, then again we’ll calculate some non-extreme number. (2) is almost identical with (1) but with a possibly different number than 1.96 for d(X_o). The following equation is not the same:

     (3) Pr( 1.96 >= 1.96 | H),

which indeed has a probability of 1.

Of course! “I observed what I observed” is a tautology where knowledge of H is irrelevant. The problem comes in deciding where to put the actual observation: on the right hand side or the left.

Take the standard evidence of a coin flip, C = “Two-sided object which when flipped will show one of h or t”; then Pr(h | C) = 1/2. One would not say that, because one just observed a tail on an actual flip, suddenly Pr(h | C) = 0. Pr(h | C) = 1/2 because that 1/2 is deduced from C about h. (h is the proposition “An h will be observed”.)

Pr(I saw an h | I saw an h & C) = 1, and Pr(A new h | I saw an h & C) = 1/2. It is not different from 1/2 because C says nothing about how to add evidence of new flips.

Suppose for ease d() is “multiply by 1” and H says X follows a standard normal (ad hoc is ad hoc, so why not?). Then

     (4) Pr(X > 1.96 | H) = 0.025.

If an X of (say) 0.37 is observed, then what does (4) equal? The same. But this is not (4):

     (5) Pr(0.37 > 1.96 | H) = 0,

which equals 0 not because of the model for X, but because H includes, as it always does, tacit and implicit knowledge of math and grammar.

Or we might try this:

     (6) Pr(X > 1.96 | I saw an old X = 0.37 & H) = 0.025.

The answer is also the same, because H, like C, says nothing about how to take the old X and modify the model of X.
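
A quick numeric check of (4) and (6), a sketch of my own using scipy (not code from the post):

```python
# Pr(X > 1.96 | H), with H saying X is standard normal: equation (4).
from scipy import stats

print(round(stats.norm.sf(1.96), 3))   # sf = survival function; 0.025

# Equation (6): adding "I saw an old X = 0.37" to the right hand side
# changes nothing, because H says nothing about how old observations
# modify the model; the calculation, and the answer, are identical.
old_x = 0.37                           # plays no role in the computation
print(round(stats.norm.sf(1.96), 3))   # still 0.025
```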

Now there are problems in this equation, too:

     (7) Pr(H|x) = Pr(H)Pr(x|H)/Pr(x) = Pr(H).

There is no such thing as “Pr(x)”, nor does “Pr(H)” exist, and we have already seen it is false that “Pr(x|H) = 1”.

Remember: Nothing has a probability! Probability does not exist. Probability, like logic, is a measure of a proposition of interest with respect to premises. If there are no premises, there is no logic and no probability.

Better notation is:

     (8) Pr(H|xME) = Pr(x|HME)Pr(H|ME)/Pr(x|ME),

where M is a proposition specifying information about the ad hoc parameterized probability model, H is usually a proposition saying something about one or more of the parameters of M, but it could also be a statement about the observable itself, and x is a proposition about some observable number. And E is a compound proposition that includes assumptions about all the obvious things.

There is no sense in which Pr(x|HME) or Pr(x|ME) equals 1 (unless we can deduce that via H or ME), before or after any observation. To say so is to swap in an incorrect probability formulation, as in (5) above.

There is therefore no old evidence problem. There are many self-created problems, though, due to incorrect bookkeeping and faulty notation.

December 1, 2017 | 4 Comments

Parameters Aren’t What You Think

I was asked to comment on a post by Dan Simpson exploring the Bernstein-von Mises theorem.

This post fits in with the Data Science class, a happy coincidence, and has been so categorized.

A warning. Do not click on the link to a video by Diamanda Galas. I did. It was so hellishly godawful that I have already scheduled a visit with my surgeon to have my ear drums removed so that not even by accident will I have to listen to this woman again.

Now the Bernstein-von Mises theorem says, loosely and with a list of caveats given by Simpson, that for a parameterized probability model and a given prior, the posterior of the parameter converges (in probability) to a multivariate normal, centered at the “true” parameter, with a covariance matrix that is an inverse function of n and of the Fisher Information Matrix.

It doesn’t matter here about the mathematical details. The rough idea is that, regardless of the prior used but supposing the caveats are met, the uncertainty in the parameter becomes like the uncertainty a frequentist would assess of the parameter. Meaning Bayesians shouldn’t feel too apologetic around frequentists frightened that priors convey information. It’s all one big happy family out at The Limit.
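
As a small numerical illustration (my own sketch, with invented numbers, not Simpson’s or anybody else’s code), compare the exact Beta posterior for a Bernoulli parameter against the normal approximation the theorem promises:

```python
# A minimal sketch of Bernstein-von Mises for a Bernoulli model with a
# deliberately skewed Beta prior; the sample numbers are invented.
import numpy as np
from scipy import stats

n, k = 1000, 620                  # hypothetical data: 620 successes in 1000
a, b = 5.0, 1.0                   # a skewed Beta(5, 1) prior

theta = np.linspace(0.55, 0.70, 601)
posterior = stats.beta.pdf(theta, a + k, b + n - k)   # exact posterior

# BvM: posterior ~ Normal at the MLE with variance I(theta)^(-1) / n
mle = k / n
sd = np.sqrt(mle * (1 - mle) / n)                     # Fisher-based sd
approx = stats.norm.pdf(theta, mle, sd)

# The two densities nearly coincide, and more nearly as n grows; the
# skewed prior's influence is already washed out at this sample size.
print(np.max(np.abs(posterior - approx)))
```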

There is no Limit, though. It doesn’t exist for actual measures.

You know what else doesn’t exist? Probability. Things do not “have” probabilities or probability distributions (we should never say “This has a normal distribution”). It is only our uncertainty that can be characterized using probability (we should say “Our uncertainty in this is quantified by a normal”). And also non-existent, as a deduction from that truth, are parameters. Since parameters don’t exist, there can’t be a “true” value of them. Yet parameters are everywhere used and they are (or seem to be) useful. So what’s going on?

Recall our probabilistic golden rule: all probability is conditional on the assumptions made, believed, deduced or measured.

Probability can often be deduced. The ubiquitous urn models are good examples. In an urn are n_0 0s and n_1 1s. Given this information (and knowledge of English grammar and logic), the chance of drawing a 1 is n_1/(n_1+n_0). In notation:

     (1) Pr(1 | D) = n_1/(n_1+n_0),

where D is a joint proposition containing the information just noted (the deduction is sound and is based on the symmetry of logical constants where there is no need to talk of drawing mechanisms, randomness, fairness, or whatever; See Uncertainty for details).

If all (take the word seriously) we know of the urn are its constituents, we have all the probability we need in (1). We are done. Oh, we can also deduce answers to questions like, “What are the chances of seeing 7 1s given we have already taken such-and-such from the urn.” But the key is that all is deduced.

So what if we don’t know how many 0s and 1s there are, but we still want:

     (2) Pr(1 | U) = ?,

where U means we know there are 1s and 0s but we don’t know the proportion (plus, as usual and forever, we also in U know grammar, logic, etc.). Well, it turns out the answer is still deducible as long as we assume a value n = n_1 + n_0 exists. We don’t even need to know it, really; we just need to assume it is less than infinity. Which it will be. No urn contains an infinite number of anything. Intuitively, since we have no information on n_1 or n_0, except that they must be finite, we can solve (2). The answer is 1/2. (Take a googol to the googol-th power a googol times; this number will be finite and bigger than you ever need.)

As above, we can ask any kind of question, e.g. “Given I’ve removed 18 1s and 12 0s, and I next grab out 6 balls, what are the chance at least 3 will be 1s?” The answer is deducible; no parameter is needed.
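
Here is a sketch of that deduction (my own construction; only the numbers come from the question above). Assume a known finite urn size N and a uniform prior over the N+1 possible compositions, as described earlier, then condition on what was removed:

```python
# Deduced, parameter-free answer to: given 18 1s and 12 0s removed from
# a finite urn of size N (composition unknown, all compositions equally
# likely), what is the chance at least 3 of the next 6 balls are 1s?
from fractions import Fraction
from math import comb

N = 1000                       # any finite N that can hold the sample
k1, k0 = 18, 12                # what we removed
n = k1 + k0

num = den = Fraction(0)
for n1 in range(k1, N - k0 + 1):
    n0 = N - n1
    # weight: chance this composition yields the observed sample
    w = Fraction(comb(n1, k1) * comb(n0, k0), comb(N, n))
    rem1, rem0 = n1 - k1, n0 - k0
    # chance at least 3 of the next 6 draws are 1s, given this composition
    p = sum(Fraction(comb(rem1, j) * comb(rem0, 6 - j), comb(rem1 + rem0, 6))
            for j in range(3, 7))
    num += w * p
    den += w

print(float(num / den))        # about 0.79, and stable as N varies
```

The answer comes out the same (to many decimals) whatever finite N you pick, which foreshadows the next paragraph: not knowing n turns out not to matter much.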

So what if we don’t know n? It turns out not to be too important after all. We can still deduce all the probabilities we want, as long as n is finite. What if n is infinite, though? Well, it can’t be. But what if we assume it is?

We have left physical reality and entered the land of math. We could have solved the problem above for any value of n short of infinity, and since we can let n be very large indeed, this is no limitation whatsoever. Still, as Simpson rightly says, asymptotic math is much easier than finite, so that if we’re willing to close our eyes to the problem of infinitely sized urns, maybe we can make our lives computationally easier.

Population increase

Statistics revolves around the idea of “samples” taken from “populations”. Above, when n was finite, our population was finite, and we could deduce probabilities for the remaining members of the population given we’ve removed so much sample. (Survey statistics is careful about this.)

But if we assume an infinite population, no matter how big a sample we remove, we always have an infinite number left. The deductions we produced above won’t work. But we still want to do probability—without the hard finite population math (and it is harder). So we can try this:

     (3) Pr(1 | Inf) = θ,

where Inf indicates we have an infinite urn and θ is a parameter. The value of this parameter is unknown. What exactly is this θ? It isn’t a probability in any ontological sense, since probabilities don’t exist. It’s not a physical measure as n_1/(n_1+n_0) was, because we don’t know what n_1 and n_0 are except that there are an infinite number of each of them and, anyway, we can’t divide infinities so glibly. (The probability in (1) is not the right hand side; it is the left hand side. The right hand side is just a number!)

The answer is that θ isn’t anything. It’s just a parameter, a placeholder. It’s a blank spot waiting to be filled. We cannot provide any answers to (3) (or questions like those above based on it) until we make some kind of statement about θ. If you have understood this last sentence, you have understood all. We are stuck, the problem is at a dead end. There is nowhere to go. If somebody asks, “Given Inf, what is the probability of a 1?” all you can say is “I do not know” because you understand saying “θ” is saying nothing.

Bayesians of course know they have to make some kind of statement about θ, or the problem stops. But there is no information about θ to be had. In the finite-population case, we were able to deduce the probability because we knew n_1 could equal 0, 1, …, n, with the corresponding adjustments made to n_0, i.e. n, n-1, …, 0. No combination (this can be made more rigorous) was privileged over any other, and the deduction followed. But when the population is infinite, it is not at all clear how to specify the breakdowns of n_1s and n_0s in the infinite urn; indeed, there are an infinite number of ways to do this. Infinities are weird!

The only possible way out of this problem is to do what the serial writer of old did: with a mighty leap, Jack was free of the pit! An ad hoc judgment is made. The Bayesian simply makes up a guess about θ and places it in (3). Or not quite; doing that directly would give us

     (4) Pr(1 | Inf; θ = 0.5) = θ (= 0.5).

Hey, why not? If probability is subjective, which it isn’t, then probability can equal anything you feel. Feelings…whoa-oh-a feelings.

No, what the Bayesian does is invoke outside evidence, call it E, which sounds more or less scientific or mathematical, and information about θ, now called the prior, is given. The problem is then solved, or rather it is solvable. But it’s almost never solved.

The posterior is not the end

Having become so fascinated by θ, the statistician cannot stop thinking of it, and so after some data is taken, he updates his belief about θ and produces the posterior. That’s where we came in: at the end.

This posterior will, given a sufficient sample and some other caveats, look like the frequentist point estimate and its confidence interval. Frequentists are not only big believers in infinity, they insist on it. No probability can be defined in frequentist theory unless infinite samples are available. Never mind. (Frequentism always fails in finite reality.)

You know what happens next? Nothing.

We have the posterior in hand, but so what? Does that say anything about (3)? No. (3) was what we wanted all along, but we forgot about it! In the rush to do the lovely (and it is) math about priors and posteriors we mislaid our question. Instead, we speak solely about the posterior (or point estimate). How embarrassing. (Come back Monday for more on this subject.)

Well, not all Bayesians forget. Some take the posterior and use it to produce the answer to (3), or what turns out to be the modification of (3), and what is called the posterior predictive distribution.

     (5) Pr(1 | Inf; belief about θ) = some number.

Here is the funny part, at least for this problem. If we say, as many Bayesians do say, that θ is equally likely to be any number between 0 and 1 (a “flat” prior), then the posterior predictive distribution is exactly the same as the answer for (1).

That’s looking at it the wrong way around, though. What happens is that if you take (1) (take it mathematically, I mean) and let n go to infinity in a straightforward way, you get the posterior predictive distribution of (3) (but only with a “flat” prior).
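
The correspondence can be checked directly. In this sketch (mine, not from the post; the uniform-over-compositions prior is the finite counterpart of the deduction above) the deduced finite-urn predictive for the next draw is computed for several N and compared with the flat-prior posterior predictive, Laplace’s rule (k+1)/(n+2):

```python
# Finite deduction vs. flat-prior posterior predictive (a sketch).
from fractions import Fraction
from math import comb

def finite_predictive(N, n, k):
    # Deduced Pr(next = 1 | k 1s and n-k 0s removed from an N-ball urn,
    # uniform prior over the N+1 compositions)
    num = den = Fraction(0)
    for n1 in range(k, N - (n - k) + 1):
        w = Fraction(comb(n1, k) * comb(N - n1, n - k), comb(N, n))
        num += w * Fraction(n1 - k, N - n)   # chance next ball is a 1
        den += w
    return num / den

for N in (50, 500, 5000):
    print(N, finite_predictive(N, n=30, k=18))   # 19/32 every time
print(Fraction(18 + 1, 30 + 2))                  # Laplace's rule: 19/32
```

(In this setup the next-draw answer agrees exactly at every finite N, not merely in the limit.)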

So, at least in this case, we needn’t have gone to the bother of assuming an infinite urn, since we had the right answer before. Other problems are more complex, and insufficient attention has been paid to the finite math, so we don’t have answers for every problem. Besides, it’s easier to assume a parameter-based infinite-population model and work out that math.

Influence

Assuming there is an infinite population, not only of 1s and 0s, but for any statistical problem, is what leads to the false belief that “true” values of parameters exist. This is why people will say “X has a distribution”. Since folks believe true values of parameters exist, they want to be careful to guess what they might be. That’s where the frequentist-Bayesian interpretation wars enter. Even Bayesians joust with each other over their differing ad hoc priors.

It should be obvious that, just as assuming a model changes the probability of the observable of interest (like balls in urns), so does changing the prior for a fixed model change the probability of the observable. Of course it does! And should. Because all probability is conditional on the assumptions made; our golden rule. Change the assumptions, change the probability.

There is no sense whatsoever in which a “noninformative” prior can exist. All priors by design convey information. To say the influence of the prior should be unfelt is like saying there should be married bachelors. It makes no logical sense. There isn’t even any sense in which a prior can be “minimally” informative. To be minimally informative would be to keep utterly quiet and say nothing about the parameter.

If there is any sense in which a “correct” prior exists, or a “correct” model for that matter, it is in the finite-deducible sense. We start with an observable that has known finite and discrete measurement qualities, as all real observables do, and we deduce the probabilities from there. We then imagine we have an infinite population, as an augmentation of finite reality, and let the sample go to infinity. This will give an implied prior and posterior and predictive distribution which we can compare against the correct finite sample answer.

But if we had the correct finite sample answer, why use the infinite approximation? Good question. The only answer is computational ease. Good answer, too.

The answer

Even though it might not look it, this little essay is in answer to Simpson. I’m answering the meta-question behind the details of the Bernstein-von Mises theorem, the math of which nobody disputes. As always, it’s the interpretation that matters. In this case, we can invert the BvM theorem and use it to show how far wrong frequentist point estimates are. After all, frequentist theory can be seen as the infinite-approximation method for Bayesian problems, which themselves, when parameters are used, are infinite-population approximations to finite reality. Frequentist methods are therefore a double approximation, which is another reason they tend to produce so much over-certainty.

What I haven’t talked about, and what there isn’t space for, are these so-called infinite dimensional models, where there are an infinity of parameters. I’ll just repeat: infinity is weird.

November 28, 2017 | 6 Comments

Free Data Science Class: Predictive Case Study 1, Part III

You must review: Parts I and II. Not reviewing is like coming to class late and saying “What did I miss?” Note the New & Improved title!

Here are the main points thus far: All probability is conditional on the assumptions made; not all probability is quantifiable or must involve observables; all analysis must revolve on ultimate decisions; unless deduced, all models (AI, ML, statistics) are ad hoc; all continuum-based models are approximations; and the Deadly Sin of Reification lurks.

We are using the data from Uncertainty, so that those bright souls who own the book can follow along. We are interested in predicting the college grade point average of certain individuals at the end of their first year. We spent two sessions defining what we mean by this. We spend more time now on this most crucial question.

This is part of the process most neglected in the headlong rush to get to the computer, a neglect responsible for vast over-certainties.

Now we learned that CGPA is a finite-precision number, a number that belongs to an identifiable set, such as 0, 0.0625, and so on, and we know this because we know the scoring system of grades and we know the possible numbers of classes taken. The finite precision of CGPA can be annoyingly precise. Last time we were out at six or eight decimal places, precision far beyond any decision (except ranking) I can think to make.

To concentrate on this decision I put myself in the mind of a Dean—and immediately began to wonder why all my professors aren’t producing overhead. Laying that aside (but still sharpening my ax) I want to predict the chance any given student will have a CGPA of 0, 1, 2, 3, or 4. These buckets are all I need for the decision at hand. Later, we’ll increase the precision.

Knowing nothing except the grade must be one of these 5 numbers, the probability of a 4 is 1/5. This is the model:

    (1) Pr(CGPA = 4 | grading rules),

where “grading rules” is a proposition defining how CGPAs are calculated, and including information about the level of precision that is of interest to us, and possibly to nobody else; “grading rules” tells us CGPA will be in the buckets 0, 1, 2, 3, 4, for instance.

The numerical probability of 1/5 is deduced on the assumptions made; it is therefore the correct probability—given these assumptions. Notice this list of assumptions does not contain all the many things you may also know about GPAs. Many of these bytes of information will be non-quantified and unquantifiable, but if you take cognisance of any of them, they become part of a new model:

    (2) Pr(CGPA = 4 | grading rules, E),

where E is a compound proposition containing all the semi-formal and informal things (evidence) you know about GPAs, like e.g. grade inflation. This model depends on E, and thus (2) will not likely give quantified or quantifiable answers. Just because our information doesn’t appear in the formal math does not make (2) not a model; or, said another way, our models are often much more than the formal math. If, say, E is only loose notions on the ubiquity of grade inflation, then (2) might equal “More than a 20% chance, I’ll tell you that much.”

To the data

We have made use of no observations so far, which proves, if it already wasn’t obvious, that observations are not needed to make probability judgments (which is why frequentism fails philosophically), and that our models are often more reliant upon intelligence not contained in (direct) observation.

But since this is a statistics-machine learning-artificial intelligence class, let’s bring some numbers in!

Let’s suppose that the only, the sole, the lone observation of past CGPAs was, say, 3. I mean, I have one old observation of CGPA = 3. I want now to compute

    (3) Pr(CGPA = 4 | grading rules, old observation).

Intuitively, we expect (3) to decrease from 1/5 to indicate the increased chance of a new CGPA = 3, because if all we saw was an old 3, there might be something special about 3s. That means we actually have this model, and not (3):

    (4) Pr(CGPA = 4 | grading rules, old observation, loose math notions).

There is nothing in the world wrong with model (4); it is the kind of mental model we all use all the time. Importantly, it is not necessarily inferior to this new model:

    (5) Pr(CGPA = 4 | grading rules, old observation, fixed math notions),

where we move to formally define how all the parts on the right hand side mathematically relate to the left hand side.

How is this formality conducted?

Well, it can be deduced. Since CGPA can belong only to a fixed, finite set (as “grading rules” insists), we can deduce (5). In what sense? There will be so many future values we want to predict; out of (say) 10 new students, how many As, Bs, etc. are we likely to see and with what chance? This is perfectly doable, but it is almost never done.

The beautious (you heard me: beautious) thing about this deduction is that no parameters are required in (5) (nor are any “hidden layers”, nor is any “training” needed). And since no parameters are required, no “priors” or arguments about priors crop up, and there is no need of hypothesis testing, parameter estimates, confidence intervals, or p-values. We simply produce the deduced probabilities. Which is what we wanted all along!

In Uncertainty, I show this deduction when the number of buckets is 2 (here it is 5). For modest n, the result is close to a well-known continuous-parameterized approximation (with “flat prior”), an approximation we’ll use later.

Here (see the book or this link for the derivation) (5) as an approximation works out to be

    (5) Pr(CGPA = 4 | GR, n_3 = 1, fixed math) = (1 + n_4)/(n + 5),

where n_j is the number of js observed in the old data, and n is the number of old data points; thus the probability of a new CGPA = 4 is 1/6; for a new CGPA = 3 it is 2/6; also “fixed math” has a certain meaning we explore next time. Model (5), then, is the answer we have been looking for!
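
In code, the approximation (5) is one line (a sketch of mine; the counts match the running example):

```python
# The approximation (5): Pr(CGPA = j | GR, old data, fixed math)
# = (1 + n_j) / (n + 5). One old observation, a 3, as in the text.
counts = {0: 0, 1: 0, 2: 0, 3: 1, 4: 0}   # n_j for each bucket j
n = sum(counts.values())

for j, n_j in counts.items():
    print(j, (1 + n_j) / (n + 5))
# bucket 4 gets 1/6 ~ 0.167; bucket 3 gets 2/6 ~ 0.333, matching the text
```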

Formally, this is the posterior predictive distribution for a multinomial model with a Dirichlet prior. It is an approximation, valid fully only at “the limit”. As an approximation, for small n, it will exaggerate probabilities, make them sharper than the exact result. (For that exact result for 2 buckets, see the book. If we used the exact result here the probabilities for future CGPAs would with n=1 remain closer to 1/5.)

Now since most extant code and practice revolves around continuous-parameterized approximations, and we can make do with them, we’ll also use them. But we must always keep in mind, and I’ll remind us often, that these are approximations, and that spats about priors and so forth are always distractions. However, as the approximation is part of our right-hand-side assumptions, the models we deduce are still valid models. How to test which models worked best in our decision is a separate problem we’ll come to.

Homework: think about the differences in the models above, and how all are legitimate. Ambitious students can crack open Uncertainty and use it to track down the deduced solution for more than 2 buckets; cf. page 143. Report back to me.