Thursday, February 19, 2009

Blog moved to wordpress.com

The blog for the class has been moved to wordpress.com.

Future postings will be there. Past postings will remain here.

Bill

Wednesday, February 18, 2009

Improvements to the blog

I have learned how to include a modest amount of mathematics in the blog, and I have gone back and done this to all the earlier blog posts.

I may change the location of the blog; I have been pointed to another blog site (Wordpress.com) that has much better LaTeX support. Essentially I will be able to put a LaTeX expression into the blog, and it will automatically be rendered.

If I do decide to make such a change, I will announce it on this blog and on the website.

Tuesday, February 17, 2009

A bit of history: Why we call it a "random variable"

Here's a link to an article in Chance News, explaining how the term "random variable" got fixed in the terminology. I mentioned this in class today.

Stat 295 2/17/09

We discussed problem #3 on problem set 3 and showed that the condition for equality is that f(y|x,θ)/f(y|θ) be independent of θ. We then saw that if (for the hospital example) x = {y,z}, where z is the data from all the other hospitals, then f(y|x,θ) = f(y|z,y,θ) = 1, and the ratio is not independent of θ, so in this case you can't make the substitution. Moreover, since f(y|z,y,θ) = 1, the posterior you get by trying to use y a second time is equal to the prior, so you gain nothing by using the data twice. Probability theory won't let you do that (legitimately).

Jeff talked about the curse of dimensionality. When you get to high-dimensional parameter spaces, you can't use the usual integration methods (e.g., integrating on a grid), because the number of grid points required grows exponentially with the dimension of the space. This is what kept Bayesian methods from wide usage until the introduction of MCMC about 20 years ago. With MCMC, people are now routinely analyzing models with thousands of variables.

New chart set: Introduction to Bayesian Computation.

Start with the Beta-binomial model for overdispersion. We wish to model stomach cancer death rates for 20 cities in Missouri.

Let y_j = the number of deaths in city j and n_j the number of cases, j = 1, 2, ..., 20.

An initial attempt at a model would be y_j ~ bin(n_j, θ), i.e., a common probability of death for all cities. However, there is greater variability among the numbers of deaths than this model can accommodate, because not all cities have the same death rate. This is known as "overdispersion" (a term that Jeff doesn't like).

Instead, use y_j ~ bin(n_j, θ_j) with θ_j ~ beta(α, β). We then wish to integrate each θ_j out of this model and use the result as the likelihood.

The calculation is given in the chart set. We get

f(y|n,α,β) = C(n,y) B(α+y, β+n−y) / B(α,β)

where C(n,y) is the binomial coefficient and B(·,·) is the beta function, i.e., the normalization constant of the beta distribution.

This is the likelihood for α and β, given data n,y.
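To make this concrete, here is a minimal R sketch of that likelihood. The y and n values below are made-up placeholders, not the Missouri data (which, if I recall correctly, ship with Albert's LearnBayes package).

# log of the beta-binomial marginal likelihood f(y|n, alpha, beta),
# summed over the cities; y and n are vectors of deaths and cases
loglik.ab <- function(alpha, beta, y, n) {
  sum(lchoose(n, y) + lbeta(alpha + y, beta + n - y) - lbeta(alpha, beta))
}

# made-up example with two cities
loglik.ab(alpha = 1, beta = 10, y = c(0, 2), n = c(1100, 850))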

Recall that if θ ~ beta(α,β) then E[θ] = α/(α+β) and Var(θ) = αβ/[(α+β)^2 (α+β+1)].

Reparameterize with η=E[θ], K=α+β. K is a precision parameter, related to 1/Var(θ). Indeed, Var(θ)=η(1-η)/(K+1)

α=Kη, β=K(1-η). K>0 and 0<η<1.

For the prior, use independent priors for η and K:

g(η,K) ∝ 1/(η(1−η)) ⋅ 1/(K+1)^2.

The posterior for η and K is

g(η,K|y_1,...,y_20) ∝ Π_{j=1}^{20} [B(Kη+y_j, K(1−η)+n_j−y_j) / B(Kη, K(1−η))] ⋅ 1/(η(1−η)) ⋅ 1/(K+1)^2.

Look at contour plot of the posterior (on charts).

The contour plot runs way off the map, which suggests using a log transform on K and a logit transform on η.

θ_1 = logit(η), θ_2 = log(K). Thus η = F(θ_1), where F, the inverse logit, was defined last time, and K = exp(θ_2).

We need to compute the Jacobian. We have

dη/dθ_1 = F(θ_1)(1−F(θ_1)) = J_η, and dK/dθ_2 = exp(θ_2) = J_K. It turns out that J_η g(η) = 1 in this parameterization, where g(η) is the prior on η. This may have motivated Albert's choice of the prior on η.

The prior on K in this parameterization is exp(θ_2)/(1+exp(θ_2))^2.

Contour plot in these variables looks much better.
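For those who want to experiment, here is a rough R sketch of the transformed log posterior and its contour plot. This is not Albert's code; the y, n values and the grid limits are placeholders/guesses.

# log posterior in (theta_1, theta_2) = (logit(eta), log(K)); the 1/(eta(1-eta))
# prior cancels against the Jacobian J_eta, and the K part contributes
# theta_2 - 2*log(K+1)
logpost <- function(theta1, theta2, y, n) {
  eta <- 1 / (1 + exp(-theta1))
  K   <- exp(theta2)
  sum(lbeta(K * eta + y, K * (1 - eta) + n - y) - lbeta(K * eta, K * (1 - eta))) +
    theta2 - 2 * log(1 + K)
}

# placeholder data and grid: substitute the 20-city values and adjust the limits
y <- c(0, 2, 1, 3); n <- c(1100, 850, 3500, 650)
t1 <- seq(-9, -4, length = 80)
t2 <- seq(0, 12, length = 80)
z  <- outer(t1, t2, Vectorize(function(a, b) logpost(a, b, y, n)))
contour(t1, t2, exp(z - max(z)), xlab = "logit(eta)", ylab = "log(K)")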

Next: approximate this posterior density with a normal distribution. The contour plot looks fairly non-normal, so the fit won't be great; nonetheless, we can get an idea of what's going on from it. We need to know the means, the variances, and the covariance. How do we get these?

Laplace Approximation: Derive an approximation to the posterior that is easy to calculate.

The approximation can be useful when the posterior is unimodal, which is typically true when we have enough data. How much is enough? A rule of thumb (Kass & Raftery, Journal of the American Statistical Association, Vol. 90, No. 430 (Jun., 1995), pp. 773-795) says that we need about 20 times as many data points as parameters for the Laplace approximation to work reasonably well (see p. 778, and note the discussion and cautions).

Here's how to do it. The Taylor series expansion says

h(θ) ≈ h(θ_0) + h′(θ_0)(θ−θ_0) + 0.5 h″(θ_0)(θ−θ_0)^2.

The vector version of this is in the notes; the matrix h″(θ_0) of second derivatives is the Hessian:

h(θ_0) + ∇h(θ_0)^T (θ−θ_0) + 0.5 (θ−θ_0)^T h″(θ_0)(θ−θ_0)

We want θ_0 to be chosen as the posterior mode, so that the linear term vanishes.

We want h(θ) = log[f(y|θ)g(θ)] ≈ h(θ_0) + 0.5 (θ−θ_0)^T h″(θ_0)(θ−θ_0).

When you exponentiate this, the result is a multivariate normal distribution. The mean is the posterior mode, and the variance-covariance matrix is the inverse of the Hessian matrix.

To sum up, θ|y is approximately N(θ_0, −[h″(θ_0)]^(−1)).
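LearnBayes has a function (laplace, if I remember the name correctly) that does this for you. A bare-bones sketch of the idea in R:

# normal (Laplace) approximation: find the mode of the log posterior h and
# use minus the inverse Hessian at the mode as the covariance matrix
laplace.approx <- function(h, start, ...) {
  fit <- optim(start, h, method = "BFGS", control = list(fnscale = -1),
               hessian = TRUE, ...)
  list(mode = fit$par, var = solve(-fit$hessian))
}

# toy check on the log of a N(5, 2^2) density; should give mode 5 and variance 4
h <- function(theta) dnorm(theta, mean = 5, sd = 2, log = TRUE)
laplace.approx(h, start = 0)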

The Laplace approximation is used much less than in the past, because MCMC techniques allow us to sample from the exact posterior distribution to obtain the information we need. It can still be useful in formulating a "proposal distribution," which is used in MCMC sampling.

Thursday, February 12, 2009

Corrected Chart Set #6

The corrected chart set #6 can be found here.

STAT 295 2/12/09

Happy 200th birthday, Charles Darwin!

Today we discussed the problem (#3) regarding the non-independent data. If you can't write f(x,y|θ) = k(x,y) f_y(y|θ) f_x(x|θ), then you cannot treat the sequential problem as if the y variable is independent of x (that is, the calculation given in the analogous problem in problem set #2 does not work). I need to do a bit of research on the k(x,y) piece of this.

Another example: Bioassay experiment. First: transforming parameters to (-∞,∞) is often a good idea. In this case we used a log transform. Similarly, the probabilities are transformed from (0,1) to (-∞,∞) using the logit transformation:

q = logit(p) = log(p/(1−p)),

with q = β_0 + β_1 x, where p = F(q) = (1 + exp(−q))^(−1) is the inverse logit. That is,

log(p/(1−p)) = logit(p) = β_0 + β_1 x.

β_1 is interpreted as the difference in log odds for a 1-unit change in x (the dose).

Logistic regression is very important in statistics. Working statisticians will see it over and over again.

Note that F'(v)=F(v)(1-F(v))

We have animal data from the book, and wish to make inference on the βs.

The likelihood is binomial in p_i (where i indexes the dose group), and the full likelihood is the product over all groups of animals. This is because each animal is in one of two situations (it lived or it died), with probability p (given by the inverse logit) that it lived and (1−p) that it died. So the likelihood is the product of p's and (1−p)'s over all the animals in the study.

We'll use a sampling approach on a discretized (gridded) version of β_0, β_1. But we need an idea of how far out to take the grid: find the mode (the MLE, since the priors are flat), then estimate the standard errors, go out several standard errors in each direction, and use that.

We did this in R using the function glm() (generalized linear models).
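For reference, a sketch of that step. The numbers below are the bioassay data as I recall them from the book; check them against the text before relying on this.

x <- c(-0.86, -0.30, -0.05, 0.73)   # log dose for the four groups
n <- c(5, 5, 5, 5)                  # animals per group
y <- c(0, 1, 3, 5)                  # deaths per group

# logistic regression; with flat priors the MLE is the posterior mode
fit <- glm(cbind(y, n - y) ~ x, family = binomial(link = "logit"))
summary(fit)$coefficients           # estimates and standard errors for beta_0, beta_1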

We recommend looking at functions in Albert's LearnBayes package to see how he did it. You'll learn new things about R, and have a better idea about what Albert is doing.

We looked at the plot (Figure 4.3 of the book) and saw that we've got to go pretty far out (farther than Jeff guessed) in order to get a decent sample.

How do we sample from the distribution? The contour plot gives us an idea. Chop the region up into a lot of little squares. We have something that is proportional to the density, and the height of this surface over a square is roughly proportional to the probability that θ falls in that square. We can use the sample() function to do this (sampling with replacement). We did this, and the sample in Fig. 4.4 is the result. It looks pretty good.
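Here is a schematic of that grid-sampling step in R. A toy log posterior stands in for the bioassay posterior, and the grid limits are arbitrary.

# toy log posterior standing in for the real one
logpost <- function(b0, b1) dnorm(b0, 1, 1, log = TRUE) + dnorm(b1, 8, 3, log = TRUE)

b0.grid <- seq(-3, 5, length = 100)
b1.grid <- seq(-2, 20, length = 100)
grid <- expand.grid(b0 = b0.grid, b1 = b1.grid)

lp   <- mapply(logpost, grid$b0, grid$b1)
prob <- exp(lp - max(lp))
prob <- prob / sum(prob)                       # normalized heights over the grid

idx   <- sample(nrow(grid), 1000, replace = TRUE, prob = prob)
draws <- grid[idx, ]                           # sampled (beta_0, beta_1) pairs
plot(draws$b0, draws$b1)                       # compare with Fig. 4.4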

We want to compute LD-50, the dose at which 50% of the animals die. Thus

Pr(y|x) = 1/(1 + exp(−(β_0 + β_1 x))) = 0.5

1 = exp(−(β_0 + β_1 x_LD50))

0 = β_0 + β_1 x_LD50, so x_LD50 = −β_0/β_1

Now look at the Bayesian analysis. From the sample, compute −β_0/β_1 for each draw (divide each sampled β_0 by the corresponding sampled β_1 and change the sign), and look at the 95% credible interval of those values. We did this, looked at the histogram, and computed the 95% credible interval. (The result is on the log-dose scale.)
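Continuing the grid-sampling sketch above (draws$b0 and draws$b1 hold the posterior samples), the calculation is short:

ld50 <- -draws$b0 / draws$b1        # one LD-50 value per posterior draw (log-dose scale)
hist(ld50, breaks = 40)
quantile(ld50, c(0.025, 0.975))     # 95% credible interval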

We also used the density() function to draw the marginal posterior density of LD-50. This uses a nonparametric kernel estimator to draw a smooth curve. β_1 is the log of the odds ratio for a unit change in dose; to get the odds ratio, exponentiate.

(The nonparametric kernel estimator works this way: At each point of the sample, we draw a peaked, narrow bell-shaped curve with unit area. We then add the curves for the whole sample and divide by the number of samples to get an approximation to the probability density curve.)

Sheila asked about hypothesis testing. We can get one-sided credible intervals (analogous to one-sided confidence intervals) quite simply. For example, if s$y contains the samples of some quantity β, we can compute the probability that β > 0 as mean(s$y > 0).

I pointed out that there is no Bayesian analog to frequentist two-sided hypothesis testing. We'll get to this later, when we discuss hypothesis testing.

Assignment #4

The pdf for the fourth assignment (due next Thursday) will be found here.

Wednesday, February 11, 2009

New chart set #6

The new chart set, on Bayesian Computation, will be found here.

Tuesday, February 10, 2009

STAT 295 2/10/09

I made some remarks on the homework. Won't repeat them here since they will appear on the papers that were turned back.

We talked about Jacobians (named for Carl Gustav Jacob Jacobi, a 19th-century mathematician; he was German, not French as I had misremembered). I gave my reasoning, which relies on the change of variables when doing an integral:

∫ p(u) du = ∫ p(u(v)) (du/dv) dv = ∫ p(v) dv

The idea here is that if u is a variable and we transform to v, the integral has to be unchanged since it is the same probability that we're talking about.

The factor du/dv is the Jacobian. Whenever you change variables in our probability statements, you have to include a Jacobian. So, the above equation says that

p(v)=p(u)(du/dv).

Jeff gave a different (but entirely equivalent) argument. His argument (which is hard to reproduce here, so I hope you got good notes) also starts with integrating, here to compute the probability that θ is less than x. That is an integral from -∞ to x of the density on θ. But if we transform to λ via θ=g(λ), then you get

Pr(θ < x) = Pr(λ < g^(−1)(x)) = ∫_(−∞)^(g^(−1)(x)) p(λ) dλ

To go from the distribution (Pr) to the density (p) you take the derivative with respect to x. When you do this, you use the fundamental theorem of calculus (the derivative of the integral is the function under the integral sign) and the chain rule (you have to multiply by dg^(−1)(x)/dx, which is the Jacobian).
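A quick worked example: suppose θ = g(λ) = exp(λ). Then g^(−1)(x) = log(x) and dg^(−1)(x)/dx = 1/x, so the density of θ at x is p(log x)⋅(1/x), where p is the density of λ. Written the other way around, the density of λ is p(exp(λ))⋅exp(λ), where now p is the density of θ; either way, the extra factor is the Jacobian.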

On the homework for Thursday, Jeff generalized the problem to the following condition: assume that there does not exist a function k(x,y), independent of θ, such that

f_x(x|θ) f_y(y|θ) = k(x,y) f(x,y|θ)

Then you should be able to show that you can't pretend that x and y are independent when you calculate the posterior distribution for θ given x and y. You have to use the full conditional-probability formulation, f(x,y|θ) = f(y|x,θ) f(x|θ).

The corrections to the Albert book are here. You have the most recent (third) printing, in all likelihood, so most of these won't apply.

We then discussed the normal likelihood with both the mean (μ) and the variance (σ^2) unknown.

The charts have the equations. We took a prior that is flat on μ and proportional to 1/σ^2 on σ^2. We'll justify these priors as "noninformative" later in the course. Both priors are improper, but if the posterior is proper there will not be a problem.

The key observation here is that the posterior has a piece that is independent of μ times another piece involving both μ and σ^2. This means that we can factor the posterior as g(σ^2) g(μ|σ^2). We observed that the latter piece is normal, with mean equal to the mean of the y's and with variance σ^2/n, where σ^2 is obtained by sampling. We then found that the marginal distribution of σ^2 is a scaled inverse chi-square distribution with (n−1) degrees of freedom, where n is the number of data points. Jeff discussed several ways of sampling from an inverse chi-square distribution, and they are included in his R code, which will be found here.
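For concreteness, a minimal sketch of that two-step sampling, with a made-up data vector y standing in for the example data:

# posterior sampling for (mu, sigma^2) under the flat-on-mu, 1/sigma^2 prior
y <- rnorm(25, mean = 10, sd = 3)               # placeholder data
n <- length(y); ybar <- mean(y); S <- sum((y - ybar)^2)

nsim   <- 10000
sigma2 <- S / rchisq(nsim, df = n - 1)          # scaled inverse chi-square draws of sigma^2
mu     <- rnorm(nsim, mean = ybar, sd = sqrt(sigma2 / n))   # mu | sigma^2, y

quantile(mu, c(0.025, 0.975))                                    # Bayesian credible interval
ybar + qt(c(0.025, 0.975), df = n - 1) * sqrt(S / (n - 1) / n)   # frequentist t interval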

Jeff then used the code to plot contours of the posterior distribution, plot sample points, and compute quantiles and other things of interest. It is known that the frequentist confidence interval on μ is based on a t distribution with (n−1) degrees of freedom, and Jeff computed that as well as the Bayesian credible interval from the sample. They coincided. This is one example where the credible interval and the confidence interval are the same.

Undergraduate opportunity

The Statistical and Applied Mathematical Sciences Institute (SAMSI), in Research Triangle Park, NC, is hosting a one week undergraduate workshop for college juniors and seniors focused on SAMSI research activities related to the statistical and applied mathematical modeling and analysis of experimental data. During the first day, a summary of research activities in the 2008-2009 programs on Sequential Monte Carlo Methods, and Algebraic Methods in Systems Biology and Statistics will be presented. In days two through five, participants will be involved in a hands-on experience. They will use mathematical and statistical models to analyze experimental data they collect in the CRSC/Math Instructional Research Lab on the NC State University campus.

Tutorials on modeling, mathematical and statistical methodology and on the physical experiments being used will be given. Participants working together in small teams will collect data and analyze it using mathematical and statistical software provided.

REGISTRATION

Applicants should use the on-line application form and also have one letter of recommendation. Full financial support for travel expenses, subsistence and lodging in university housing will be provided for all attendees. Due to space considerations, participation is restricted and will be offered to approximately 18 individuals selected from the applicant pool. Participants are expected to arrive for the workshop on Sunday, May 17, 2009 and remain in continuous attendance until 12:00 pm on Friday, May 22, 2009.

Further details, including information on how to register and where to send letters of recommendation can be found here.

Applications will be considered beginning February 10, 2009 and continuing until April 3, 2009, but registration will likely be closed before that date, as workshop capacity is reached. Successful applicants will be notified of their acceptance as soon as a decision on their application is reached.


[Note: Apply early, if you are interested!!!]

Upon acceptance for the program, individuals must confirm by email within three days their intention to participate (otherwise their place will be given to another applicant). Accepted participants will be advised regarding airfares and will need to purchase plane tickets with a three-week advance fare.

Please direct questions concerning the workshop to ugworkshop200905@samsi.info.

Friday, February 6, 2009

4-up Chart Set 5

I figured out what was keeping me from saving pdf files 4-up. A 4-up version of the corrected Chart Set 5 is here.

STAT 295 2/5/09

First, the assignment for next Thursday can be found here.

Interesting class today. Jeff started out by asking about the homework. We suggest learning how to use the array capabilities of R (arrays of numbers, vectors, matrices). You can multiply an array by a scalar and get an array with the same number of entries. You can add, subtract, or multiply two arrays, or add a constant to an array; the operations take place elementwise. You can use sum() to add up the elements of an array. And so forth. Play around with R in calculator mode, where you enter a formula and look at what's been computed. Facility with these kinds of operations can really speed up a calculation, as well as make for easier-to-read code.
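A tiny illustration of what we mean:

y <- c(2.1, 3.5, 4.0, 1.9)
w <- c(1, 2, 2, 1)

3 * y                 # multiply every element by a scalar
y - mean(y)           # subtract a constant from every element
y * w                 # elementwise product of two vectors
sum(w * y) / sum(w)   # a weighted mean, with no loop in sight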

In #4, the likelihoods are proportional to each other. To show this, you should complete the square in the summation form of the equation, and note that when you do this the sum can be written in terms of the sample mean, while the remaining terms inside the exp() are constant with respect to μ.
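Concretely, Σ(y_i − μ)^2 = Σ(y_i − ȳ)^2 + n(ȳ − μ)^2, so exp(−Σ(y_i − μ)^2/(2σ^2)) = exp(−Σ(y_i − ȳ)^2/(2σ^2)) ⋅ exp(−n(ȳ − μ)^2/(2σ^2)); the first factor does not involve μ, so as a function of μ the two forms of the likelihood are proportional.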

The ESP problem seemed sensitive to the prior...not enough data to overwhelm it.

We suggested that anyone who plans to write scientific papers in their professional life would do well to learn TeX/LaTeX. LaTeX is most useful.

Jeff gave some background on the heart transplant mortality problem. Most of it was "chalk talk" on the blackboard. Typically the study starts by modeling the risk to individual patients using logistic regression. This powerful method allows us to estimate the probability that a patient will die as a function of covariates such as age, disease status, presence or absence of diabetes, and so forth. We would use logistic regression on a large number of patients to get the coefficients that are appropriate for each covariate. Then, when a new patient comes along, we can plug the covariates appropriate for that patient into our logistic model, and predict the probability of death from that.

Then, armed with these probabilities for each patient, we can predict the number of patients that are expected to die in a given hospital by just adding up those probabilities over all patients in the study. This is what Albert calls the exposure. Jeff used 'd' for that quantity.
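Schematically, and with entirely made-up variable names and simulated stand-in data, the exposure calculation looks something like this:

# fit a logistic regression to (simulated, stand-in) historical patient data
set.seed(1)
training <- data.frame(age = rnorm(500, 60, 10), diabetes = rbinom(500, 1, 0.3))
training$died <- rbinom(500, 1, plogis(-6 + 0.08 * training$age + 0.5 * training$diabetes))
fit <- glm(died ~ age + diabetes, family = binomial, data = training)

# predicted death probability for each patient at the hospital under study
newpatients <- data.frame(age = rnorm(12, 62, 8), diabetes = rbinom(12, 1, 0.3))
p.hat <- predict(fit, newdata = newpatients, type = "response")

d <- sum(p.hat)   # the exposure: expected number of deaths at this hospital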

Both Turner and Jeff are using this basic idea in their research...Turner in trauma cases, Jeff in neonatal cases.

In the example, a particular hospital had d=0.066 and had one death.

We model the observed number of deaths with a Poisson distribution:

Pr(y|μ) = μ^y e^(−μ) / y!

where μ=d⋅λ.

Note that the expectation E[Y|λ]=d⋅λ.

With a gamma prior g(λ|α,β) ∝ λ^(α−1) e^(−βλ), we get a gamma posterior, since the gamma distribution is the conjugate prior for the Poisson likelihood. This is a gamma-Poisson model.
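Explicitly, multiplying prior and likelihood gives g(λ|y) ∝ λ^(α−1) e^(−βλ) ⋅ (dλ)^y e^(−dλ) ∝ λ^(α+y−1) e^(−(β+d)λ), which is a gamma density with shape α+y and rate β+d.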

To choose α and β, we imagine we have data on 10 other hospitals. We had a discussion about why we cannot use the hospital that we're studying to decide on the prior. Basically, to do that would be to use the data from that hospital twice, and that's not allowed. Strictly speaking, you cannot do this if you respect the fact that dependent data cannot simply be multiplied together; you have to use the conditional probability formula correctly, and if you do, you'll find that the rules of the probability calculus automatically prevent you from using the data twice. Problem #4 in the new problem set addresses this issue.

Anyway, if z_j is the observed number of deaths at hospital j, and o_j the expected number, then we can write a Poisson likelihood term for each hospital, and assuming the hospitals are independent, the product of these is the likelihood for our estimation. Then a standard prior on λ is 1/λ. [Brief interlude: This is a commonly used prior for scale variables, e.g., lengths, rates, and so forth. In such problems it should not matter if we use a ruler calibrated in inches, for example, instead of centimeters...if we do it consistently we should always get the same result. Mathematically, it is seen as a group-theoretic feature under the group of multiplications by positive real numbers: if c is a constant and x = cy, then dx/x = dy/y regardless of the value of c.]

The prior is improper; that is, its integral from 0 to ∞ diverges at both ends, so it's not a "real" prior since it can't be normalized. However, if we imagine integrating from 1/n to n instead, we can normalize that. The question is then: if you use such a prior and multiply by the likelihood, does the posterior end up being normalizable as n→∞? If so, then it is legitimate to use the improper prior, and you won't get into trouble.

The posterior using the data from these 10 hospitals is on the chart set. This is what we'll use for the prior with the hospital we are studying.

Digression: Neither Jeff nor I would use this method as described. Rather, we would use a hierarchical model where all hospitals, including the one under investigation, are analyzed together. We will return to this subject later.

Jeff finished the discussion by running the R code in the notes. Running it on the hospital with 1 death and exposure d = 0.066, we saw that the 95% credible interval for λ contained 1 near its middle, so there was no evidence that this hospital had an excess death rate. This is so even though 1 is quite a bit larger than 0.066 (by about a factor of 15). However, when we plugged 10 deaths into the calculation, the 95% credible interval did not include 1, so we would say that such a hospital has an elevated risk.
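The interval itself amounts to two quantile lookups. Here is a sketch with placeholder prior parameters; the actual α and β come from the 10-hospital analysis in the chart set, not from the values below.

alpha <- 16; beta <- 15       # placeholders, NOT the chart-set prior values
d <- 0.066                    # exposure for the hospital under study

qgamma(c(0.025, 0.975), shape = alpha + 1,  rate = beta + d)   # with 1 observed death
qgamma(c(0.025, 0.975), shape = alpha + 10, rate = beta + d)   # with 10 observed deaths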

Wednesday, February 4, 2009

STAT 295 2/3/09

I have posted a revised Chart Set #5. Jeff noted some errors and they have been corrected. Unfortunately, since I moved to the latest version of MacOS, I am no longer able to produce 4-up pdf files, so this one (and some of the later ones) will be full size. I apologize for this and will consult with Small Dog to see if this can be fixed.

Jeff asked why the two forms of the likelihood for problem #4 (due 2/5) are equivalent. You should address this question in your turned-in assignment. Note that programming without loops is faster, so you should attempt that calculation without loops.

On Chart Set #4, Chart 24, we had been discussing robustness. We noted that the mean and standard deviation of the posterior distribution do depend (although fairly insensitively) on the prior. In particular, the posterior under the beta prior had a smaller standard deviation, and its mode was moved to the left, towards the peak of the beta prior, relative to where the mode was under the flat prior. This led to the notion of stable estimation: when we have a lot of data or very precise data, and the prior doesn't change much over the region where the likelihood peaks, the results won't be sensitive to the prior.

We considered continuous examples, of which the beta-binomial model is one. Jeff justified the density approximation P(a < Y < b | c−ε < X < c+ε) ≈ ∫_a^b [f(c,y)/g(c)] dy. Thus, we can use ratios of joint to marginal densities in the continuous case, just as we use ratios of joint to marginal distributions in the discrete case.

We saw how increasing the amount of data tightens the posterior around the true value.

We then skipped to Chart Set #5. We will return to Chart Set #4 later.

We discussed the Poisson distribution and motivated it. Many things are well modeled as Poisson events; in addition to the ones on the chart set, examples include requests arriving at google.com, stars per square degree of sky, etc.

We went on to the heart transplant mortality problem from Albert's book. The exposure for each patient is the probability that the particular patient will die in a given time frame after the operation. It depends on things like the patient's age, conditions like diabetes, and so on, and is presumed known from other studies. Then the exposures for each patient are added up over all the patients to estimate the overall exposure (risk) for the particular hospital. It is the expected number of patients that will die. The notes use the letter 'e', but Jeff remarked that it's easy to confuse that with the base of natural logarithms, so he changed it to 'd' in his chalkboard discussion. Then if Y is the random variable representing the observed number of deaths, the likelihood has Y~pois(λd) and λ is the parameter we wish to estimate for the hospital.

A gamma prior is chosen. It is flexible, with two adjustable parameters, and pedagogically convenient because it is the conjugate prior for a Poisson likelihood. However, it is a bad idea to use a prior simply because it makes the calculations easy: modern sampling techniques allow us to use any prior we wish, and if we know a better one, we should use it.

I've corrected Slide #8 on which z and o were transposed. See the chart set published above.

The heart transplant mortality problem will be continued next time.

Monday, February 2, 2009