I wrote about Alan Dershowitz’s attempt to have the evidence of domestic abuse thrown out of the O. J. Simpson trial on the grounds that it would be statistically irrelevant. Dershowitz failed to convince the judge, but Johnnie Cochran invoked the same alleged irrelevance in his summation.
Dershowitz argued in his 1996 book on the case that, to quote one of I. J. Good’s papers linked below, the “representative batterer’s probability of murder is about 1/2,000 per year.” But this 0.05% incidence is an order of magnitude greater than the likelihood of a random woman being murdered in any given year in the first half of the 1990s. Expressed as the share of female murder victims in the total female population, the latter was less than 0.005%. (The overall murder rate was less than 10 per 100,000, but women were about a quarter of the victims, so assuming a 50-50 gender split in the general population, the rate was less than 5 per 100,000 for women.)
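Spelling the comparison out in one line (using only the figures above; the 50-50 gender split is, as noted, a simplifying assumption):

$$
\frac{1}{2{,}000} = 0.05\% \quad \text{versus} \quad \frac{0.25 \times 10}{0.5 \times 100{,}000} = \frac{5}{100{,}000} = \frac{1}{20{,}000} = 0.005\%.
$$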
For a Bayesian analyst, that difference does the trick, if I understand I. J. Good’s reasoning from his 1995 and 1996 notes in Nature correctly. Suppose it’s 1993 and there’s a couple in America where the man is a chronic abuser of the woman, and suppose we know that in 1994 the woman will be murdered. What are the odds (that’s P divided by 1-P) that the husband will be the murderer, given that she is being abused and will be killed next year? From Bayes’ theorem, these odds equal the prior odds of his killing her in 1994 (computed before we learn whether she will be murdered), divided by the probability of the battered woman being murdered by someone else in 1994.
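In symbols (my notation, not Good’s): let G be the event that the batterer murders the woman in 1994, and M the event that she is murdered in 1994 by anyone. The odds form of Bayes’ theorem gives

$$
\frac{P(G \mid M)}{P(\neg G \mid M)} = \frac{P(G)}{P(\neg G)} \cdot \frac{P(M \mid G)}{P(M \mid \neg G)},
$$

and since $P(M \mid G) = 1$ (if he kills her, she is murdered), the posterior odds reduce to the prior odds of G divided by $P(M \mid \neg G)$, the probability that the battered woman is murdered by someone else.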
The numerator looks like 1/1,999 and the denominator, 1/20,000. The odds come out to about 10 to 1, roughly a 90% probability. Now we’re at the opposite extreme: this looks damning. But this probabilistic estimate may not and should not be used as proof of guilt. If it is not clear why, consider this example. Suppose the authorities have detained 100 people and can prove that 95 of these 100 have committed murder. In the absence of any additional information, a person randomly selected from the 100 would be guilty of murder with 95% probability. Not what most people would call a fair trial, although some Benthamites might approve.
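A quick numerical check of those figures (a minimal sketch in Python; the two input rates are the ones quoted above, nothing more):

```python
# Good-style odds calculation: how likely is it that the batterer is the killer,
# given that the battered woman is murdered that year?

p_batterer_kills = 1 / 2_000        # Dershowitz's "representative batterer" rate
p_killed_by_other = 5 / 100_000     # background murder rate for women, ~1/20,000

prior_odds = p_batterer_kills / (1 - p_batterer_kills)   # ~1/1,999
posterior_odds = prior_odds / p_killed_by_other          # ~10 to 1
posterior_prob = posterior_odds / (1 + posterior_odds)   # ~0.91

print(f"posterior odds ~ {posterior_odds:.1f} to 1")
print(f"posterior probability ~ {posterior_prob:.0%}")
```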
Can one count on a jury to understand this, even if an expert mathematician is called to the stand to explain? (By the way, I. J. Good was not only a prominent Bayesian but also a member of the Bletchley Park team during WWII.) A mere mention of the 10-to-1 odds favoring the “guilty” hypothesis might have been prejudicial to the defendant, so his team, knowing what to expect from the expert witness, could have convinced the judge to block the testimony.
In 1996, Vincent Bugliosi, the California prosecutor who became famous during the Manson Family trial, published a book on the O. J. Simpson case, Outrage. Perhaps it should have been subtitled “How I would have prosecuted the case.” It seems that Bugliosi understood, at some level, the statistical issues involved:
In responding to Cochran’s argument that the domestic violence by Simpson against Nicole was not relevant to the murder charges, and just because he hit her doesn’t mean he would murder her, don’t you argue to the jury that although most men who beat their wives may indeed not go on to murder them, that that is looking at the statistics the wrong way?
Bugliosi was a bulldog but it seems he dug his teeth into the wrong ankle here:
That even without statistics, common sense will tell you that in those cases where husbands have, in fact, murdered their wives, the overwhelming majority have previously physically abused and battered their wives.
“Without statistics,” common sense tends to go astray when probability is involved: the key is not the cases “where husbands have… murdered their wives” but, according to Leonard Mlodinow:
…the probability that a battered wife who was murdered was murdered by her abuser. According to the Uniform Crime Reports for the United States and its Possessions in 1993… of all the battered women murdered in the United States in 1993, some 90 percent were killed by their abuser.
This statistic, rather than Good’s derivation (which produced the same odds), would have been preferable at the trial for its simplicity, but it would not have solved the problem of the jury misinterpreting the 90% probability as a guilty verdict.
It’s probably worth adding that in the early 1990s, about 25-30% of female murder victims were killed by their partners. With that in mind, the boyfriend or husband should have been the number-one suspect by default. Suspect-centric investigations can go horribly astray, but the cops had a valid reason to zoom in on O. J. from the beginning.