One of my complaints about the recent L.A. Times article on DNA, cold hits, and statistics is that I believe it inadequately portrayed the extent of disagreement over the need for the statistical adjustment discussed in the article (multiplying a random match probability like 1 in 1.1 million by the size of a database like 338,000). I’ll once again show you the image of the front page to remind you how strongly the paper portrayed the adjustment as the product of a wide consensus among leading experts:
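To make concrete what the disputed adjustment actually computes, here is a minimal sketch of the arithmetic, using the two figures cited in the article (1 in 1.1 million and 338,000). This is my own illustration, not the paper's or any court's calculation:

```python
# The NRC II / DNA Advisory Board adjustment discussed in the article:
# multiply the random match probability (RMP) by the number of profiles
# searched, to approximate the chance of a coincidental hit somewhere
# in the database. (Numbers below are the ones quoted in the article.)
random_match_probability = 1 / 1_100_000   # RMP: 1 in 1.1 million
database_size = 338_000                    # profiles searched

adjusted = random_match_probability * database_size
print(f"adjusted figure: {adjusted:.3f} (roughly 1 in {1 / adjusted:.1f})")
```

Run as written, this yields roughly 0.307, i.e. about 1 in 3.3, which appears to be the source of the "1 in 3" figure the article reported. Note that the exact probability of at least one coincidental match, 1 − (1 − p)^n, is nearly identical at these magnitudes; the multiplication is the simple approximation the NRC II recommended. Whether that number answers the question the jury cares about is, of course, the substance of the Donnelly/Balding disagreement.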
Yet, as I have previously noted, the L.A. Times‘s own expert mathematician Keith Devlin of Stanford says that “the relevant scientific community (in this case statisticians) have not yet reached consensus on how best to compute the reliability metric for a cold hit.” And Prof. David Kaye, who was on the 1996 committee that recommended the adjustment, actually told me that the contrary view is more widely accepted:
[The L.A. Times's] description portrays one approach to the issue as if it is the consensus in the scientific literature. It is not. There is disagreement about the need to adjust a random-match probability. Furthermore, if one counts the number of peer-reviewed articles on the subject, the dominant view is that adjustment is not necessary.
Jason Felch, one of the authors of the L.A. Times article, responded to this portion of my complaint, and authorized me to quote him:
That brings us to your second point: that we did not portray the full scientific debate in the article. You are right in saying that a debate persists among statisticians (as it does in most complex scientific questions). The 1996 National Research Council and, after its dissolution, the FBI’s DNA Advisory Board carefully weighed the arguments of the various statistical camps — the Bayesians and frequentists, but also those who favor likelihood ratios or the first NRC’s approach, which defense attorneys are arguing for now and is more conservative than the NRCII’s adjustment. Both NRC and the DAB concluded the RMPxDATABASE approach was best for cold hit cases. In the forensic field, these two bodies are the source of authority on questions of science — the NRCII is referred to as the “bible” of forensic DNA. But their recommendations are not being followed. This is the point we make in the article, while acknowledging there is not unanimity of opinion.
For the courts, the question is: is there enough of a consensus on this issue that a generally accepted practice has emerged? If the answer is no, the law (Kelly-Frye here in California, Daubert in other states) holds that the evidence should not be presented in courts. So there’s a lot at stake in the question. Not surprisingly, many in the field argue that the issue is not a lack of consensus, but a debate over which of several accurate scientific approaches is more appropriate. So far, the courts have agreed. This is what the California Supreme Court will weigh. We are likely to explore some of these complexities in our upcoming coverage of that case.
I’m not convinced that the paper “acknowledg[ed] there is not unanimity of opinion” in a way that was meaningful to readers. The article never even mentioned the Donnelly/Balding approach that Prof. Kaye says constitutes the majority view in the peer-reviewed literature. Readers were told only in passing, deep in the article, that the adjustment discussed in the article “has been widely but not universally embraced by scientists.” As for how the article portrayed general scientific acceptance of the adjustment, I refer you once again to the image of the front page above.
But while I might disagree with Mr. Felch, I thank him for his response.
P.S. I am working on a proposed e-mail to Mr. Felch that questions the article’s assertion that there was a “1 in 3” chance that “the database search had hit upon an innocent person” in selecting Puckett.