Wednesday, October 11, 2006

Okay, folks, it's time for a statistics lecture.

There are a lot of people who are hoping to debunk this study.

No, wait. That's not true. There are a lot of people who are hoping to say that this study is bullshit, and who are hoping you believe it's bullshit, because they don't like its conclusions.

So, let's go over a bit of basic statistics, okay?

There's a statistics question that I saw in a stats book that really impressed the heck out of me. There's an election going on, and there's a huge number of people voting. Say, for example, it's like our last Presidential election, with 100,000,000 voters. You pick ten voters at random. They all say they are voting for Candidate A. What are the odds that Candidate A will win?

It's virtually certain. If the race were a dead heat, with approximately 50% of the people voting for Candidate A, the odds that ten perfectly randomly selected voters would all favor Candidate A are one in 1024; a bit worse than one in a thousand.
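For anyone who wants to check that arithmetic, here's a quick sketch in Python (I'm just using it as a calculator here; nothing in it comes from the study):

```python
# Probability that ten randomly chosen voters all favor Candidate A,
# for a few possible true levels of support. At 50/50 it's (1/2)^10.
for support in (0.5, 0.7, 0.9, 0.99):
    print(f"support {support:.0%}: all ten agree with probability {support ** 10:.4g}")
```

At a 50/50 split that probability is about 0.00098, i.e. one in 1024; it only becomes respectable when support is extremely lopsided.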

The key is, we're assuming we picked people at random. We're assuming that every single voter is just as likely to be picked as any other voter. That's nearly impossible to guarantee.

Nevertheless, this is an important point with statistics: with a perfectly chosen sample, you don't necessarily need a lot of data to make strong predictions.

Of course, we never have a perfectly chosen sample, and we rarely obtain such strong results. Nevertheless, with a good, generally random sample, and with a large enough sample size, we can make very strong predictions. These predictions won't always be right, but they constitute very strong evidence... usually, the best evidence we can come up with.

In this study, the researchers chose a solid algorithm for selecting random sections of Iraq in which to talk to people. They ended up collecting the stories of some 13,000 people, learning about household numbers, and the births and deaths from 14 months prior to the invasion through 40 months after it.

In these randomly chosen areas, over 98% of the households participated in the survey; a little under 2% were either absent or refused to participate.

So, basically, the researchers grabbed people living in particular areas; the areas were chosen at random; they had 98% participation. This means that we can be pretty darn confident that we have a truly random sample.

They asked about births, and deaths, and tallied the numbers up. In 87% of the cases when learning about deaths, the researchers asked for a death certificate, and in 501 of the 629 reported deaths, one was present. There might be more death certificates; in about 13% of the cases, the researchers failed to ask. Nevertheless, nearly 4/5ths of the deaths were confirmed. Call it 80%... it's possible that the death rates were over-stated in some cases, but if so, we know that 80% of them, at least, were legitimate.

Keep this in mind: even if people were lying about the deaths, trying to make things look worse than they are, the most they could have done is skew the results by 25%. Basically, instead of over 650,000 additional deaths, we'd be looking at "only" about 520,000 additional deaths.
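The arithmetic there is easy to check (again, Python as a calculator):

```python
# Checking the death-certificate figures quoted above.
print(501 / 629)        # ~0.797: call it 80% of reported deaths confirmed
# Worst case, if every unconfirmed death were bogus:
print(650_000 * 0.80)   # ~520,000 additional deaths would still remain
```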

What about the validity of extending the circumstances of the 12,801 people to the 25 million people in Iraq? Couldn't that be misleading us?

It could, but it's not likely. Because people were picked at random, based upon their locations, and a lot of locations were used, we have no reason to think there was some large group of people whose circumstances were missed. Surely, some people are safer than those in the sample group, but it's just as sure that some people are in more danger. It's certainly possible that these random selections happened to pick all the worst, most dangerous places to live, but the odds of picking 47 of the worst places to live are minuscule. It's much more likely that they picked some of the best, some of the worst, and a lot of the places between the best and worst.

Now, the results: the results of the survey indicate that an average person had about a 4% chance of dying during the 40 months from the invasion through June of 2006. This is compared to about a 1.6% chance of dying if the death rate had been unchanged. That means that the population of Iraq has been reduced by close to 2.5%, or 1/40th, of its total over that time. (A 2.5% excess chance of dying means you expect about 2.5% of the population to die, especially when dealing with millions of people.)
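Spelled out, with the rounded figures above (these are this post's approximations, not the study's exact inputs):

```python
# Excess-death arithmetic with the rounded figures from this post.
population = 25_000_000   # rough population of Iraq
p_died = 0.040            # ~4% chance of dying over the 40 months
p_baseline = 0.016        # ~1.6% if the pre-war death rate had held
print((p_died - p_baseline) * population)   # ~600,000: the same ballpark as the study's figure
```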

I won't bore you by talking about confidence intervals and what they mean. Suffice to say, based upon the strongest evidence we have, we would be pretty darn surprised if any fewer than 400,000 people had died.
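(For those who do want a taste of where a bound like that comes from, here's a toy illustration. The per-cluster rates below are invented for the sake of the example; this is not the study's data, and the actual analysis used a more sophisticated model.)

```python
# Toy bootstrap confidence interval over invented cluster death rates.
import random

random.seed(0)

# Invented per-cluster crude death rates (deaths per 1,000 per year).
clusters = [max(0.0, random.gauss(13.0, 6.0)) for _ in range(47)]

def mean(xs):
    return sum(xs) / len(xs)

# Resample the 47 clusters with replacement many times and look at
# the spread of the recomputed means.
boot = sorted(mean(random.choices(clusters, k=len(clusters)))
              for _ in range(10_000))
lo, hi = boot[250], boot[9_750]   # middle 95% of the bootstrap means
print(f"estimate {mean(clusters):.1f} per 1,000/yr, 95% CI ({lo:.1f}, {hi:.1f})")
```

The width of that interval is what a confidence interval reports: how far the estimate could plausibly sit from the truth, given the sampling.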

Note that even if you multiply that 400,000 by the 80% (80% of investigated deaths had death certificates), it would still be 320,000 deaths. In 2004, there was a great cry of outrage over reports that as many as 100,000 people had died... surely, people said, it couldn't be as bad as that! Well, by now, it seems that it's probably three times as bad... and the statistics they gathered showed that the 100,000 in 2004 was actually pretty close to accurate. (In fact, it now looks a bit low - 112,000 is the expectation.)

I can't even begin to explain how strong this evidence is that there has been a massive number of deaths in Iraq. I mean, I could explain to a fellow math geek or statistician... but such a person would already understand. What it comes down to is this: Either some terribly serious problem will come to light about this study, or hundreds of thousands of people have died as a result of the war in Iraq.

The study is too strong; the methodology is too good, and the numbers are too big. Someone would pretty much have to have falsified data for the results to have come out this way otherwise.

A lot of rightwingers are insisting this is wrong, because, geez, it's terrible news. But it's extremely strong evidence. Unless there is a real problem with it, it is the absolute best estimate we have of war deaths in Iraq.

Anyone who tries to claim this study must be wrong, without presenting some extremely strong evidence showing a real, honest-to-goodness flaw (and it has been peer reviewed, so all the 'easy' flaws would have been spotted), is deliberately choosing to ignore evidence because it might point to a painful truth.

Comments:
``You pick ten voters at random. They all say they are voting for Candidate A. What are the odds that Candidate A will win?"

Well, if those random voters were in San Francisco:
1) There's a pretty good chance you'd have gotten 10 votes for Kerry.
2) Kerry lost.

``So, basically, the researchers grabbed people living in particular areas; the areas were chosen at random; they had 98% participation. This means that we can be pretty darn confident that we have a truly random sample."

Tell that to the San Franciscans.
 
Kevin:

The probability of picking ten voters at random, and having them all be from San Francisco, is ridiculously small.
 
Ah, but you didn't say that, and they didn't pick the Iraqis at random either for the Lancet report. They picked specific locations and then interviewed adjacent households in a cluster. So for any given cluster one can get a skewed result (e.g. San Francisco.)

Putting that aside, however, I'd be more interested in your analysis of Iraq Body Count's (IBC) response to the Lancet report. Particularly, any rationale you may have as to why the Lancet report statistics are more credible or accurate than those of the IBC. Also, in the event you choose to dismiss their response, I would be interested to learn of your expertise in this particular field that qualifies you to do so. (i.e. what qualifies a Database Analyst with a masters degree in mathematics as an expert?)
 
``Ah, but you didn't say that, and they didn't pick the Iraqis at random either for the Lancet report. They picked specific locations and then interviewed adjacent households in a cluster. So for any given cluster one can get a skewed result (e.g. San Francisco.)"

I don't have time to explain experimental design to every person who has a doubt about it. Suffice to say, when they began the study, they broke Iraq into locations such that any given Iraqi seemed equally likely to be selected.

Was every Iraqi equally likely to be selected? Of course not. There's been at least some movement of population since the war began. Some were more likely to be selected than others. With 47 different clusters, covering over 12,000 people, there is such a large sample set that one expects such chance occurrences to even out. Yes, some people who were least likely to be killed were certainly less likely to be selected; and some people who were most likely to be killed were certainly also less likely to be selected. With broad random sampling from a large number of areas, there's no reason to suspect that the overall surveyed population is not representative of Iraq as a whole... and there's strong reason to suspect it *is*.
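Since you bring up clusters: here's a toy simulation of the point. Every number in it is invented; it's not the study's design or data, just an illustration of why many randomly placed clusters tend to even out:

```python
# Toy simulation: a heterogeneous "country" surveyed via 47 random clusters.
import random

random.seed(42)

# 1,000 locations with wildly different true death risks.
locations = [max(0.0, random.gauss(0.04, 0.03)) for _ in range(1000)]
true_rate = sum(locations) / len(locations)

def survey(n_clusters):
    """Average the death risk over n randomly chosen locations."""
    picked = random.sample(locations, n_clusters)
    return sum(picked) / len(picked)

# Run the 47-cluster survey many times and see how far off it gets.
estimates = [survey(47) for _ in range(10_000)]
errors = [abs(e - true_rate) for e in estimates]
print(f"true rate:     {true_rate:.4f}")
print(f"mean estimate: {sum(estimates) / len(estimates):.4f}")
print(f"worst miss in 10,000 surveys: {max(errors):.4f}")
```

A single cluster can certainly be a "San Francisco"; the point is that 47 of them, placed at random, almost never all are.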

``Putting that aside, however, I'd be more interested in your analysis of Iraq Body Count's (IBC) response to the Lancet report. Particularly, any rationale you may have as to why the Lancet report statistics are more credible or accurate than those of the IBC."

I believe that IBC is a perfectly accurate count of all deaths recorded by IBC.

IBC only records violent deaths, of civilians, that are reported by English-speaking news sources. They also survey hospitals and morgues, though I've seen references that suggest that they do this only to match up against their news source reports.

Their methods probably can't catch people who aren't deemed newsworthy by English-speaking news sources, and can't catch people whose death certificates aren't being tracked on a national level.

The study by the Johns Hopkins researchers can find people who have died, whether or not their deaths were considered newsworthy, and regardless of how well death certificates are being tracked.

Now, if you could show that the death certificate tracking in Iraq was top notch, and rarely missed a recorded death, and if a survey of death certificates suggested that IBC was correct, and the study was not, that would raise serious questions about the study.

But since we have no information about the tracking of deaths in Iraq, and we know that 80% of the deaths in the study were certified, it currently seems most likely that death certificates are being issued, but not well tracked at a national level.


As such, I find the study to be more accurate. As to which I find more credible, both are credible sources for what they are trying to measure, but one is measuring reported deaths and the other is measuring actual deaths.

As for IBC's statements about their expectations ("if the report is true, we would expect"), they were the inspiration for a recent blog post pointing out that when reality doesn't match your expectations, it's time to check your expectations, rather than try to deny the reality.

And the reality is that it would be extraordinarily hard to find that big of a population, with that many excess deaths, unless a huge number of excess deaths were really occurring.

So, "why hasn't there been more hospital treatment for injuries?" means "well, let's look at the quality, safety, and accessibility of hospital treatment".

If it turned out that there were good hospitals that were safe and easily accessible in all of Iraq, and that the records were meticulously kept and shared out on request, then you'd raise questions about the report. Until then, the lack of hospital treatment (or lack of proof that such treatment occurred - why do we assume IBC can determine how many people have been treated in which hospitals for which kinds of injuries?) is not a condemnation of the report.

``Also, in the event you choose to dismiss their response, I would be interested to learn of your expertise in this particular field that qualifies you to do so. (i.e. what qualifies a Database Analyst with a masters degree in mathematics as an expert?)"

You'll have to pardon me, but I read this last bit as "if I don't like what you say, I'll ignore it unless your qualifications are so iron-clad that I find myself unable to do so."

I don't work for you, and I have nothing to prove to you. I have responded to your comments in ways that might be useful to any readers of the exchange. But what on earth makes you think that I feel any desire to please a rather unpleasant individual who popped into my blog, who doesn't understand statistics, but feels qualified to call my own statistical knowledge into question?
 
This comment has been removed by a blog administrator.
 
``With regards to the IBC, you obviously didn't read their analysis as they discuss (what they claim to be) a statistically accepted 3:1 injury to death ratio in wars and the ramifications of this figure. They also discuss why the methodology of the Lancet report is wrong."

I'll be glad to read their explanation of why the methodology of the Lancet report is wrong; I hadn't seen that part.

They did discuss the ramifications of that many injuries. Did they mention that many Iraqis are afraid of going to hospitals? That Iraqis have been kidnapped and murdered in hospitals? Did they demonstrate that injury tracking is solid and trustworthy in Iraq?

Here's something that you might understand as a physicist.

If you can't measure a quantity reliably, you can't make valid inferences based upon that measurement. I'd imagine that a physicist would ordinarily understand this... so why don't you?

If we don't know how many injuries there are, we don't know if there are enough injuries to account for the deaths.

And if Iraqis are afraid to go to the hospital, it ends up having a double whammy on reporting. First, there's no record of treatment for the injury. Second, some who would have lived, die. Fewer reported injuries, and fewer injuries, all in one fell swoop.

``I'm a believer in Occam's Razor--the simplest answer is usually the correct one. You seem to go through a lot of explanations and contortions to justify the Lancet report, whereas the IBC Project Team presents a good argument based on (empirical) data."

If you're a believer in Occam's razor, one might assume you actually know what it is.

Before you can judge simplicity, however, you need to take a closer look at the IBC's objections.

The IBC response assumes that death certificates are well tracked. It assumes that injuries are similarly well tracked. It assumes that the injured have no fear of the hospital. It makes these assumptions about a war zone, which calls each of those things into question. Does it provide support for its claims that this tracking is good? No.

Now, let's see... where do we stand.

Kevin pretends that there's a high likelihood of selecting 10 random voters out of 100 million, and having them *all* end up in San Francisco.

Kevin seems to think that you can *get* a good chance of having ten random voters all voting for one candidate (if SF went 90/10 for Kerry, 10 random voters would still only give about a 35% chance of all voting for Kerry)

Kevin doesn't understand clustered sampling, but feels free to complain about it.

Kevin doesn't think about why IBC's complaints might not be accurate, and thinks he knows what Occam's Razor states (and is wrong).

And his valid criticisms are...

Well...

Gee. He doesn't have any.

He's just saying that he's "unimpressed", as if that constitutes an argument.

What virtue is it to be impressive to a person who can't even realize when he's out of his depth?
 
Well, actually as a physicist and a researcher, I deal with statistics and error analysis on a regular basis (and yet, I don't claim to be an expert--just a talented user.) It just sounds to me like you are no more of an expert than I--especially based on the elementary statistics you've discussed up till now. You claimed that unless the person is a math geek or statistician, you wouldn’t be able to explain it--as Richard Feynman stated, ``if you can't explain something to an undergraduate, you really don't understand it." Seriously, I’ve been waiting to hear something befitting someone with an advanced degree but so far, it’s been lacking.

I agree that you don't have to prove anything to me but please remember, you are the one who suggested that you were intellectually up to the task of having a little fun with me--I'm just accepting the challenge. I'm making the assertion that your repeated claims that the Lancet report is the best information we have, offered without a mathematical or scientific analysis, are unconvincing. You claimed that you reviewed the report and found no flaws in their methodology; I'm just asking to hear your analysis of the statistics upon which you based your conclusion. What you've offered so far is rather elementary so let's cut to the chase. You want to discuss statistics and error analysis? I'm game.

I’m sorry that you believe that I’m being an unpleasant individual but as an educated person, I just figured you’d understand that knowledge and expertise in a given field is what gives additional weight to one’s opinion—not vacuous claims. I was honestly inquiring as to why you believe you are capable of offering more than (a liberally biased) opinion on the Lancet report. You may be a big frog in a little pond at the LA but I thought you might welcome the chance to stretch your intellect for a change.
 
Unfortunately, I had deleted my original post while you were formulating your underwhelming response so I’ll leave you with a final response.

``If you can't measure a quantity reliably, you can't make valid inferences based upon that measurement. I'd imagine that a physicist would ordinarily understand this... so why don't you?"

The more inferences one makes, the less reliable the results become, but that's what you don't seem to understand. The first thing one learns in science is when you get an answer, check it against known reality. The IBC says the exact same thing in their response--and amazingly enough, they too call it a reality check. If Johns Hopkins produced a report that said George Bush was going to keep the sun from rising tomorrow with 95% accuracy, I'm sure you'd believe that too.

``And if Iraqis are afraid to go to the hospital, it ends up having a double whammy on reporting."

I don't believe that premise and neither does the IBC, and they cover that in their (thoughtful) analysis--human nature is to try to get help when you're injured or dying. Some may be afraid but not hundreds-of-thousands.

``Kevin pretends that there's a high likelihood of selecting 10 random voters out of 100 million, and having them *all* end up in San Francisco."

Wonderful misstatement; you really aren't too smart, are you?--my point was that if one took 47 random points in the United States (that's the cluster sampling you claim to understand--haven't you read the Lancet report?!?) and at each of those random points interviewed adjacent neighbors, one of these points could easily be San Francisco, or New York, or Philadelphia, or Seattle, or any of a number of high-volume liberal areas, thus skewing the data to a liberal outcome. What's so hard to understand about that? The same with the Iraqi neighborhoods--no demographic data was taken on the respondents, so how can you claim that there is no bias in the responses or in the data collectors, who remained unnamed in the report?

``would still only give about a 35% chance of all voting for Kerry"

Congratulations, you can calculate 0.9^10. That shows you're capable of performing high school statistics. It also shows that 35 times out of 100, you can get all Kerry, which is pretty good odds. I notice you didn't point out that in an area that is evenly split, the chances of that occurring are less than one-tenth of a percent. You'd have to select 1000 statistical ensembles of 10 people (you may want to start using statistical terminology to sound more believable) to reach the statistical probability of it occurring only once. Being that there's a war going on in Iraq and the U.S. is there, I would be willing to bet that there may be a fair number of people that don't much care for the U.S. and would lie to skew the results.

``and thinks he knows what Occam's Razor states (and is wrong)"

Oh, another brilliant response--once again you make a claim and still don't offer a definition. My definition of Occam's Razor--the fewer assumptions an explanation of a phenomenon depends on, the better it is--what's yours? IBC bases their estimates on empirical data, past statistics from other wars, and the experience of having done this in the Balkans and elsewhere (it's all on their site.) They don't have to generate a lot of convoluted explanations as to why the numbers are right even though no one can derive them through empirical means. Don't believe what you see and experience, we're right because we used statistics--now there's a good definition of gullible.

``Kevin doesn't understand clustered sampling, but feels free to complain about it."

Au contraire, I understand how it can lead to very skewed results (see previous.)

So let me recap; longhairedweirdo believes:

He is an expert in the demographic characteristics of the Iraqi respondents in the Lancet report and can state conclusively that the people in the 47 areas chosen are truly random (and the interviewers were also unbiased.)

He believes that hundreds-of-thousands of injured people are afraid of hospitals and will not seek help even if they are injured or dying.

He dismisses (without any analysis whatsoever) the parallel research team (IBC) that has an ongoing tracking process and estimates total deaths based on their research because they disagree with his unproven assertion that the Lancet report is valid.

He has presented only a statistical analysis at a high school level.

He has not discussed the distribution model that would be appropriate for the Lancet report data nor has he even touched on the appropriate error analysis technique other than to state it's right.

He refutes my understanding of cluster sampling while neglecting to offer his own explanation, thus expecting any external observer who happens across this site to believe that he actually does understand it. Someone questions his credentials and his response is to claim he has nothing to prove--sounds just like Ward Churchill.

`` What virtue is it to be impressive to a person who can't even realize when he's out of his depth?”

That's just so many empty words and braggadocio. I can likewise reply: how can one who claims to hold a master's degree in mathematics fail to present a statistical analysis at anything greater than a high school level? If you actually have one, I'm assuming the master's was a consolation prize for a failed Ph.D. attempt? But then again, your vocabulary, analytical skills, and ability to express your arguments seem rather limited so we'll just have to leave it at that.

I see you weren't up to the challenge but at least you can always impress 'em over at the LA.
 
You know, it just occurred to me that you also may have a very short attention span. I didn't just happen upon your site; this is the continuation of an initial post on the Liberal Avenger (before I was banned from posting for being an intelligent conservative--oh the horror!)

My original post, and very basic analysis (it's only high school algebra, so you should be able to follow it), stated that 655,000 people dead in 3.25 years equates to 552 deaths per day. The area of Iraq is just slightly larger than that of CA; however, the actual inhabited portion is significantly less. Assuming an even mortality distribution for the time evaluated, that equates to 552 people dying per day, every day for over 3 years, in an area less than the size of California. I assert, and IBC concurs, that could not go unnoticed for such a long period of time. And this is not the total mortality rate but only the excess since the beginning of the war. So now you're claiming that any reporters in Iraq that happen to be liberal and/or anti-war (and would print this in a heartbeat if they could even half prove it) are incompetent or ignorant of those numbers of deaths? I'll believe that liberals are incompetent or ignorant but that's just me. My thesis is that the numbers quoted by the Lancet report sound highly improbable but you can continue taking comfort by clinging to your fantasy.
 
Kevin, if you wish to be insulting, and have your insults mean anything, you really have to pick someone who thinks you're worthy of respect.

I will answer this:
``The more inferences one makes, the less reliable the results become but that's what you don't seem to understand. The first thing one learns in science is when you get an answer, check it against known reality. The IBC says the exact same thing in their response--and amazingly enough, they too call it a reality check."

If you can't measure a thing, you can't make any inferences from it. Period. You don't know the quantity; how can you draw any inferences?

If you don't know how good the death and injury tracking system is in Iraq, you can't make any inferences based upon their tracking of death and injury.

For example, you can't say "the tracking system would have caught those deaths and injuries" because you simply don't know.

Now, is that simple enough? If not, too bad. I've wasted more than enough time with you.
 