Patterico's Pontifications

5/3/2008

Statistical Probability in Cold Hit DNA Cases

Filed under: Crime,Dog Trainer,General — Patterico @ 10:20 pm



The L.A. Times has an interesting article about the application of probability measures to “cold hit” cases made from DNA databases. I find the statistical arguments made in the article to be unconvincing, but due to my lack of training in this area, I remain completely humble about my ability to properly analyze the issue. However, experts have widely divergent opinions on the matter — a fact you’d never learn reading the article.

The article begins by describing a 1970s rape/murder scene. A match was made from badly deteriorated DNA that bore only 5 1/2 of the possible 13 markers available. When all 13 markers are available for a match, the probability of a random person bearing the same profile can run to 1 in a quadrillion — thousands of times the number of people on the planet. Because only 5 1/2 markers were available in this case, the random match probability was a far less impressive 1 in 1.1 million.

This is known as a “random match probability” and the article describes it as the “rarity of a particular DNA profile in the general population.”

At Puckett’s trial earlier this year, the prosecutor told the jury that the chance of such a coincidence was 1 in 1.1 million.

Jurors were not told, however, the statistic that leading scientists consider the most significant: the probability that the database search had hit upon an innocent person.

In Puckett’s case, it was 1 in 3.

The article restates the proposition later on:

In every cold hit case, the panels advised, police and prosecutors should multiply the Random Match Probability (1 in 1.1 million in Puckett’s case) by the number of profiles in the database (338,000). That’s the same as dividing 1.1 million by 338,000.

For Puckett, the result was dramatic: a 1-in-3 chance that the search would link an innocent person to the crime.

It seems to me that the conclusion does not logically follow at all. The formulation simply can’t be right. The suggestion appears to be that the larger the database, the greater the chance is that the hit you receive will be a hit to an innocent person. I think that the larger the database, the greater the probability of getting a hit. Then, once you have the hit, the question becomes: how likely is it that the hit is just a coincidence?
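
To make the arithmetic concrete, here is a rough sketch in Python (my own labels and rounding, not the Times’) of what the quoted formula computes, alongside the chance that a database of 338,000 unrelated, innocent profiles would produce at least one coincidental hit:

# The calculation the article describes, plus the "at least one coincidental hit" probability.
rmp = 1 / 1_100_000      # random match probability for the 5.5-marker profile
n = 338_000              # profiles in the offender database

np_figure = rmp * n                      # "multiply RMP by database size": ~0.31, roughly the article's "1 in 3"
p_at_least_one = 1 - (1 - rmp) ** n      # chance n unrelated innocent profiles yield >= 1 coincidental hit: ~0.26

print(np_figure, p_at_least_one)

Both numbers speak to whether the search will turn something up (my question 1 below), not to what a single hit means once you have it.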

An example makes it simpler.

Let’s say the random match probability for a DNA profile is one in 13.4 billion. In such a case, it seems very unlikely that the hit you get will come back to a different person than the person who left the DNA at the crime scene. Now assume that your database contains all 6.7 billion people on the planet. It’s virtually certain that you will get a hit, of course. But if you got a hit — only one hit — you would intuitively feel certain that you had the right person from that hit.

Yet the logic of the article would seem to say you take 13.4 billion and divide it by the size of the database (6.7 billion), making a 1-in-2 chance (50%) that you have the wrong person (an “innocent person”).

I say hogwash. And I think my example shows why it’s confusing and potentially misleading to use the word “innocent” in these calculations.
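
To put rough numbers on that thought experiment, under the idealized assumptions that the true source is in the planet-wide database, that he always matches, and that matches among everyone else are independent coincidences:

import math

rmp = 1 / 13.4e9        # hypothetical random match probability: 1 in 13.4 billion
n = 6.7e9               # database of every person on the planet

expected_innocent_hits = rmp * (n - 1)                        # ~0.5, the "1 in 2" the article's formula produces
p_any_innocent_hit = 1 - math.exp(-expected_innocent_hits)    # ~0.39 (Poisson approximation)

print(expected_innocent_hits, p_any_innocent_hit)

# The true source is in this database, so he always matches.  If the search returns
# exactly one hit, there were no coincidental matches and the hit must be the source,
# which is a different question from "how likely was the search to dredge up some
# coincidental match along the way?"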

My off-the-cuff reaction — and keep in mind, I have no experience in statistics — is that the people who advocate this approach are measuring the question:

1. What are the chances that a search of this database will turn up a match with the DNA profile?

when the truly relevant question is, instead:

2. What are the chances that any one person whose DNA matches a DNA profile is indeed the person who left the DNA from which the profile is taken?

There is a third, rather silly question whose answer seems obvious, but which I will raise for the purposes of relating to an analogy I will make:

3. Once a match has been made through the database, what is the chance that the person whose DNA provided the match will match the DNA profile?

This last one is obviously almost 100%, the lack of complete certainty owing purely to human error; taking human error out of the equation for a theoretical analysis, it’s a tautology: a match is a match.

It seems to me that this is a useful analogy: everyone knows a coin has a 50/50 chance of coming up heads. If I give you a room that has 10,000 coins that were randomly tossed in the air and have landed on the ground, the chances that at least one of those coins landed heads are very nearly 100% (question 1). But the chances that any one of those coins was going to come up heads before it was tossed are still 50% (question 2).

Now, if I tell you to go find me a coin that has come up heads, then the chances it did come up heads are (absent human error) 100% (question 3). But, the chances that it was going to come up heads before it was tossed are still 50% . . . and always will be, no matter how many coins are in the room. You’re almost certain to find one with heads in a room with a larger database (thousands of coins), but the chances that it was going to come up heads always remain the same.
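
For anyone who wants to check the coin numbers, here is a quick sketch of the three questions for a room of 10,000 fair, independently tossed coins:

n_coins = 10_000
p_heads = 0.5

q1 = 1 - (1 - p_heads) ** n_coins   # Question 1: at least one head somewhere in the room, indistinguishable from 1.0
q2 = p_heads                        # Question 2: a particular coin was going to come up heads: 50%, whatever n_coins is
q3 = 1.0                            # Question 3: a coin picked BECAUSE it shows heads shows heads: true by definition

print(q1, q2, q3)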

Applying the analogy to a DNA database, it seems to me that the size of the database increases your chances of a hit. But the chances that the profile obtained from your hit is a coincidence will always remain the same, and will always be a function of the number of loci and their frequency in the relevant populations.

The L.A. Times article makes it sound as though it’s quite well accepted that jurors are constantly being misled:

Jurors are often told that the odds of a coincidental match are hundreds of thousands of times more remote than they actually are, according to a review of scientific literature and interviews with leading authorities in the field.

. . . .

[B]ecause database searches involve hundreds of thousands or millions of comparisons, experts say using the general-population statistic can be misleading.

The closest you get to an acknowledgement that not everybody agrees is a passing reference to the fact that this assertion “has been widely but not universally embraced by scientists.”

“Not universally” is quite the understatement. Apparently, there is a debate raging about this among statisticians. Law professor David H. Kaye explains that, while many agree with the analysis described in the L.A. Times article, there is a theory out there that the use of the database “actually increases the probative value of the match.” (I have an e-mail in to Professor Kaye to ask him for further comment.)

The argument to which Professor Kaye refers was made in a Michigan Law Review article by Peter Donnelly, Professor of Statistical Science and Head of the Department of Statistics at the University of Oxford, and Richard D. Friedman, a law professor at the University of Michigan. The first page of their law review article is here. An earlier version of the argument was apparently made by Donnelly with David Balding in a paper titled “Evaluating DNA Profile Evidence When the Suspect is Identified Through a Database Search,” according to mathematician Keith Devlin of Stanford.

Devlin appears to agree with the approach described in the L.A. Times article. However, he says:

Personally, I (together with the collective opinion of the NRC II committee) find it hard to accept Donnelly’s argument, but his view does seem to establish quite clearly that the relevant scientific community (in this case statisticians) have not yet reached consensus on how best to compute the reliability metric for a cold hit.

You’d never know that reading the L.A. Times article, which implies that all but the most rabid pro-law enforcement shills agree that jurors are being given bogus statistics.

[UPDATE: For proof as to how conclusively the paper portrays this point of view, look at this image of what appears on the front page of today’s Sunday paper:

[Image: dna-on-front-page.JPG]

Tell me where in that image you see any hint that “the relevant scientific community (in this case statisticians) have not yet reached consensus” as mathematician Devlin states.]

The article wraps up by suggesting that the real probability of a coincidence is not 1 in 1.1 million, but 1 in 3:

In the end, however, jurors said they found the 1-in-1.1-million general-population statistic Merin had emphasized to have been the most “credible” and “conservative.” It was what allowed them to reach a unanimous verdict.

“I don’t think we’d be here if it wasn’t for the DNA,” said Joe Deluca, a 35-year-old martial arts instructor.

Asked whether the jury might have reached a different verdict if it had been given the 1-in-3 number, Deluca didn’t hesitate.

“Of course it would have changed things,” he said. “It would have changed a lot of things.”

By the way, in the case described in the L.A. Times article, there was more than just the cold hit. In addition to the fact that the defendant was a serial rapist who described his rapes as “making love” — the same terminology used by the murderer — the prosecution also showed the following:

[Defendant] Puckett “happened to be in San Francisco in 1972,” Merin told jurors in his opening argument. Merin could not place Puckett in [victim] Sylvester’s neighborhood on the day of the slaying. But Puckett had applied for a job near the medical center where Sylvester worked.

With the court lights dimmed and a photo of Sylvester’s naked body displayed on a screen, Merin argued that Puckett’s 1977 sexual assaults showed an “MO” consistent with Sylvester’s killing.

In each of those crimes, Puckett had posed as a police officer to gain the woman’s trust. The absence of forced entry to Sylvester’s apartment indicated her killer had also used a ruse, Merin said.

Puckett had kidnapped his victims by holding a knife or ice pick to their necks, leaving scratches similar to those found on Sylvester’s neck — what Merin called “his signature.”

I now throw open the matter for discussion.

UPDATE: Radley Balko has posted on this. He agrees with the L.A. Times experts. I have posted some counterarguments in his comments.

UPDATE x2: Follow-up post here with helpful responses from Prof. Kaye.

UPDATE x3: Statistics always opens the possibility of using language that doesn’t describe what’s really going on. For example, in this post I referred to “random match probability” as “in essence, the chance that two unrelated people will share the same genetic markers.” I’m not comfortable that this is right, and have removed the sentence. Random match probability refers to the expected frequency of a set of markers appearing in a population of unrelated individuals. I think it’s best to stick with that definition.

69 Responses to “Statistical Probability in Cold Hit DNA Cases”

  1. The following are comments not scientific analysis:
    1. In the general population of 1972, more than half can be ruled out – female and too young.
    2. Of the database used for the cold case, I would believe a majority of the 338,000 samples could be ruled out, since most were not of an age or alive in 1972. DNA sampling would be mostly recent, since only recently have we had the equipment to do the DNA matching. So if we were going to do simple math comparing the number of samples to the 1.1 million, we would only use the relevant samples.
    3. I would believe that for each marker there are a number of variations in the database, which we can assume are representative of the general population. It would be like a poll of 338,000 for the 300,000,000 in the USA. Each variation would see a different likelihood, i.e. for marker one there may be 14 varieties, but 3 of them could make up half of the samples and one of the others may appear in only 2 out of the 338,000 samples in the database. There may be scientific research outside of a criminal database that gives better probabilities for the markers.
    4. I would like to see a study of the 5 and 1/2 markers and how many matches there are in the database among the 338,000 samples. That would be a better defense than the 1 in 3 argument.

    Sounds like an argument that would be made to a jury from which anybody who would question the math had been weeded out.

    Sounds like an article from the MSM. You have given many examples where your expertise sees the problem right off. I enjoy learning from them.

    EDP (7223ad)

  2. I have a firm rule that when the media do a story with numbers, they’ll get it wrong. But that doesn’t mean that those who disagree have it correct. The math of DNA matching is both complicated and complex, and useful, accurate analogies are likely to be difficult to make.

    If (unlikely to be true) each of the 13 markers is equally and independently possible in the population, if five of them match (five heads, so to speak), the odds would seem to be 1:2^5 = 1:32, and if thirteen match, 1:2^13 = 1:8,192. So where do these figures in the millions and billions come from?

    Because the markers are not equally present, and probably not independently present, and the math is much more complicated. It also depends on who is in the database. If there are only ten people in the database of ten thousand with marker U78, and the suspect matches U78, that tells you something. If a random sample of the public shows that three percent have U78, it shows that the database is not representative of the public, and that the U78 marker is perhaps not as usable as thought; those with U78 are somehow being excluded from the database. (Perhaps U78 self-destructs more frequently than other markers; there doesn’t have to be any grand plot.)

    My primary objection is that this use of DNA essentially turns the principle of “innocent until proven guilty” on its head; the defendant is placed in a position of proving his innocence when most, if not all other evidence is long gone.

    htom (412a17)

  3. The problem is the misapplication of the statistical probabilities.

    If investigation identifies a suspect using normal investigatory techniques, and that single suspect’s DNA is compared using this degraded comparison, where there is a 1 in 1.1 million chance of a random match, then the probative value of the comparison is high. For example, if the DNA is compared to a short list of people who had access to a victim, then a 1 in 1.1 million probability is very useful.

    But if the DNA comparison is used to sort through a DNA database, then the probative value is low. Imagine that the database was the size of the US population, round up to 300 million. Then the DNA comparison to the database produces literally hundreds of matches. And we know that all but one is innocent, do we not?

    SPQR (26be8b)

  4. As is usual in statistical discussions, it is very easy to make critical mistakes. One of the critical mistakes not mentioned above but which should shine through from your analysis is this:

    What is the probability that the actual guilty party is in the database?

    The problem is that in the above case, that question is unknown. You have a 300k people database, but a 1 in 1.1 million match TO ANYONE ON THE PLANET. That 1 in 1.1 million match versus a database of 6.7 billion people, everyone on the planet, would yield something like 6090 matches.

    We have no idea how many of those 6090 possible people on the planet are in the 300,000 sample database, although the above techniques (eliminate women, eliminate children, etc) will help. Could be 1, could be 5, could be ten.

    The probability that the match that they found just by querying the database is the actual perpetrator is neither 1/3 nor 1 in 1.1 million. It can’t really be measured numerically, because the sample, the 300,000-person database, is not a representative sample either. It’s a database of previously sampled criminals, and it will have several people in it who are ‘capable’ of this kind of crime.

    Basically, this is a bad time to use the technique that they used and they really need more evidence.

    luagha (94a0b5)

  5. Huh?

    jimboster (364ef3)

  6. luagha – my question is highly related, in that it’s about the folks in the database:
    don’t they get this database *from* criminals?

    They matched those five and a half markers to the rapist’s DNA profile in California’s DNA database of criminal offenders; because the database holds only the DNA of criminal offenders, you’ve got a higher chance of finding criminals, not average non-criminal people, because folks don’t tend to commit only *one* crime that calls for DNA evidence.

    I found the following article useful:
    http://kennebecjournal.mainetoday.com/news/local/4819671.html

    Foxfier (74f1c8)

  7. If (unlikely to be true) each of the 13 markers is equally and independently possible in the population, if five of them match (five heads, so to speak), the odds would seem to be 1:2^5 = 1:32, and if thirteen match, 1:2^13 = 1:8,192. So where do these figures in the millions and billions come from?

    Because the markers aren’t just yes or no, on or off binary type measurements. Each has multiple possibilities, and when you start multiplying them all, it adds up. Or multiplies up.

    Patterico (4bda0b)

  8. And I second jimboster’s “Huh?”

    I followed you when you said “As is usual in statistical discussions, it is very easy to make critical mistakes.” . . . but then I quickly lost you after that.

    Patterico (4bda0b)

  9. Okay, I’m too tired to figure out why your formulation doesn’t strike me as quite right but you are definitely correct that the line pushed in the LAT is bogus.

    In fact, my first reaction is that the numbers used to get the “1 in 3” should have been divided rather than multiplied (i.e., the RMP of 1:1.1×10^6 should have been divided by the quantity of database entries (338,000) rather than multiplied, giving a cold hit probability of about 2×10^-12 against the general population. Note, however, that the database entries are not composed of the general population, and I’m too tired at the moment to figure what the correction is.)

    EW1(SG) (84e813)

  10. #7 Patterico:

    . . . but then I quickly lost you after that.

    luagha is voicing some of my objections that I am glad he (she?) can think of.

    There are many factors in the scenario you’ve presented that are not accountable statistically (at least, not in any meaningful way that most of us here will understand).

    EW1(SG) (84e813)

  11. I’m going to indulge in a little speculation, both chemical and mathematical. DNA is a complex molecule that contains about 6 billion bases (the nucleotides usually abbreviated A, C, G and T) paired along its double-helix backbone. Each location can have one of the 4 bases. If each marker contained only one such base, the total for 13 markers would be 4 to the 13th power, or about 67 million. Since all 13 markers make odds of around a quadrillion there must be some 25 locations considered (all this presumes an equal chance of a given base at a given site).

    The five and a half markers must contain 10 sites to come up with around a million to one odds.

    I am willing to admit that this exhausts my best guesses in this case. There is one thing of which I’m sure, no reporter in the mainstream press has any idea of either biochemistry or math.

    Ken Hahn (7742d5)

  12. Part of the indeterminability is not having a handle on just *what* is being measured and *what* it is being measured against.

    The measurement against is on the database of known criminals who have committed a type of crime to get themselves in the database. That gives limits as to what one can look for in the database itself: those criminals of the type legally accountable to be recorded for that system. Not all States have this in-place for the exact same crimes and if a felon crosses State lines that makes a database hit harder to get, although it is a suspected federal crime if there is any other evidence of same, thus allowing a broader canvasing. Police departments may cooperate otherwise on a reciprocity basis to help catch such individuals also. The ‘cold hit’, in circumstances of no other evidence linking a crime to anyone, serves two purposes: first it is an attempt to see if any other individuals with a similar MO are in the database and have method/motive/opportunity to commit the crime, and, secondly, it gets investigators thinking about the case, itself, to see if there is evidence at first discounted that should be investigated. A ‘hit’ on a limited database should not be definitive in and of itself, and the rest of the evidence and circumstances must demonstrate more than chance connection.

    There is a lot of leg-work there, but getting a sub-set of criminals with similar MO and closely associated DNA may help spur memories of the case or other circumstances. This will not eliminate new suspects, however, as a ‘copy cat’ may do a crime similar to ones done previously, or a new criminal taking up an associated MO may appear. Genetics is not a ‘get out of investigation free’ card.

    The other problem is the length of the marker sequences. What is being measured for are changes in a relatively well conserved sequence of base pair matchings that have some known variation in the entire population. Those base pairs have not an exponential rate of change but a factorial rate of change. With a single base pair matching for a changed sequence you do not get 50/50, but 4! (1x2x3x4) changes of which a subset of those is a known mutation in the population that varies in that factorial space. From 1/24 for a single base pair that is then shifted downwards by eliminating those mutations that do not show up in the genetic structure.

    When speaking of ‘conserved genes’, these are genes that are necessary to support developmental changes that allow final functioning and survival of the larger scale biological individual. These genes, because of later functioning, tend to change very little, and examining such things as eyesight leads to finding a smaller sub-set of genes that is highly conserved from blue-green algae to modern animals. Although there are some changes within the actual conserved gene set, function is unimpaired and may, actually, offer variability to do other things. It is that set of variations that is looked for in relatively well conserved genes or between sets of conserved genes with a known set of conserved gene areas. Marker genes are those variable sets of genes between highly conserved areas but with known permutations within the general population. From those differences one can see patternings in that sub-set of genetic material via correlations. Say in a ten gene sequence the third base pair of one type (say AT) only shows up *with* a base pair at the seventh (say CT) and no other pairing… thus 3 & 7 have a linkage via genetic inheritance. Thus if pairs 1,5,9 have a correlation that always excludes the known matching of 3&7 you are starting to form a marker: it varies in a known and set way, most likely due to biochemical reasons, at some point. These sequences, conserved by inheriting but not immune to mutation, serve as the broader ‘markers’ for population sub-types which may or may not have correlation to geographic origin, ethnicity, etc. But if you have that gene from one of your parents, you share that genetic heritage. By procreation those gene markers then get 50/50 contribution from the parents… with mitochondrial DNA usually coming from the mother, but cases of fathers having offspring with their mitochondrial DNA are not unehard of.

    That is why the 13 marker genes gets you the incredibly low chance of anyone sharing that exact, same suite of markers: inheriting them from similar parents of similar background with similar genetic make-up for those markers (as opposed to the actual conserved genes) becomes a non-functional question at some point. The chances of the lower number (5) in the population also drops, but so does its presence in the database. It is not necessarily a 1:1 drop based on common markers in types of populations. The presence of those 5 may be over or under-represented in the database due to region, migration and inherited background.

    At best the ‘cold hit’ may serve to winnow things down, but without the higher number of matches it is coincidental and may tend to indicate identification, but without the older method/motive/opportunity a case would be hard put to go forward *just* on genetics: it can help to narrow down the suspect list, but is not confirmation in and of itself.

    ajacksonian (87eccd)

  13. The prosecutor was right, the LAT is wrong.

    The “database” of 338,000 persons is your sample. Which is supposed to be representative of your population. From it, the probability of occurrence of a certain combination of genetic markers in the population is calculated. One in a quadrillion for 13; 1 in 1.1 million for 5.5.

    To go back and divide the probability you derived from the sample by the size of the sample is nonsensical. It’s like taking a hundred people, counting the redheads and finding ten, and then dividing the ten by one hundred to conclude that there is only one.

    (Scientists have methods of testing the representativeness of their samples which involves their size but the LAT’s analysis is not that.)

    nk (1e7806)

  14. The Indian chief calls the weather service and asks what is the long-range forecast for the coming winter. The weatherman says “Cold”. The chief orders his tribe to start collecting firewood.

    A little while later, the chief calls again and asks if there’s been an update. The weatherman says, “Colder than I first had thought”. The chief orders his tribe to collect even more firewood.

    Some more time passes and the chief calls again. The weatherman says, “It looks like one of the coldest winters in years”. The chief finally asks, “Why do you say that?” The weatherman says, “Because the Indians are collecting firewood like crazy”.

    nk (1e7806)

  15. I can see using the DNA approach to identify the most likely candidate in a population.

    The size of the population affects the probability that the doer is in the database.

    The DNA results “rule out” the non-matching portion of the population.

    The greater the number of markers and the greater the population, the better the chance that the test can “rule out” all of the population other than the doer. Eventually, the test becomes essentially mathematically definitive. Until that point is reached, however, DNA and fingerprint tests can only rule out candidates, leaving a smaller and smaller population of possible doers to be further investigated.

    I do not think there is any dispute with the above.

    The problem begins with how the most likely candidate is treated in less definitive cases and this case is a wonderful example.

    In conjunction with the other case elements, I think the DNA element in this case is powerful evidence. If I were a juror, I think I would be convinced.

    jim2 (005d56)

  16. The problem is in the scientific method used. If you set in advance fair and reasonable standards for getting a hit, and you get one hit, then you are reasonably sure the suspect was a match. If you set in advance too lax standards, you get multiple hits and since you are reasonably sure they aren’t all suspects, you know your method is wrong.

    But if you decide in advance that the closest match is the only hit, regardless of how near or far, then you’ve gamed the system.

    In the case of the article, they started with 13 markers and boiled through the samples until they found a hit at 5.

    Imagine getting arrested and the cop says, “Well, your skin is the wrong color, you don’t have a sunburst tattoo around your left eye, and we stopped you driving a Honda instead of a helicopter, but the witness nailed you as an English speaking male human AND they got your brand of wristwatch AND shoes exactly right.”

    dustydog (e715ff)

  17. PPS. The correct way to argue it, however, for both the prosecution and the defense, is like this (the numbers are for illustrative purposes only):

    “There are 300 million people in the United States. That means that the defendant is one of about 300 people in the United States who could have left this DNA sample. There are six billion people in the world. That means that the defendant is one of about 6,000 people in the world who could have left this DNA sample.”

    And you stop right there. You do not go on to argue “the probability that the defendant did not do it is one in 1.1 million” because that is the same circular reasoning as used above. It is also extremely prejudicial, trespasses on the jury’s function to determine reasonable doubt, and the judge will stop you.

    nk (1e7806)

  18. #16 nk:

    And you stop right there.

    I have a feeling that’s going to be very close to the response I get from my criminalist friend when I broach the subject with him.

    He may know a particular suspect is guilty, but that is the jury’s function to decide.

    EW1(SG) (84e813)

  19. EW1(SG),

    As illustrated by this case, all we are doing is narrowing the list of suspects. With 13 markers we can narrow it down to one* and that is overwhelming evidence. When we narrow it down to about three hundred the other “old-fashioned” evidence in the case is what obtains the conviction.

    *For the time being, the sample/database of 338,000 is considered representative and the 13 chosen markers are considered valid. Whether they are is a different question and it is up to the scientific community to keep revisiting it.

    nk (1e7806)

  20. I think this is an example of the specific case where, whenever there’s something in the media that we know something about on a detailed basis, we recognize flaws in it, but somehow fail to apply that to the general case. Having said that, I believe the LA Times is incorrect in the specific details here, but correct that there is an issue.

    I’m going to assume for the sake of argument that the 1 in 1.1 million chance stated is correct, and that those people are evenly distributed throughout the global population. The second statement is definitely suspect, but since there doesn’t appear to be any info to tie it down further, that’s what we’re left with.

    Given those premises, if you were to take DNA samples from a random 338k people there would be about a 1 in 3 chance of a match. But it simply doesn’t say that the person is innocent or guilty, it simply says they matched.

    However, these samples aren’t random. They’re of previous offenders. So presumably that has to be factored in as well.

    Skip (163356)

  21. I’m pretty sure that what the LA Times is saying is correct: that as you increase the size of the database and apply it, the chance that you will get a hit is greater. The 1 in 1.1 million number applies to a single individual selected at random, but as you add more people to your sample size the odds that you will select someone with those markers increase.

    chad (582404)

  22. “Tell me where in that image you see any hint that “the relevant scientific community (in this case statisticians) have not yet reached consensus” as mathematician Devlin states.]”

    is there a problem with science being used to convict someone when the relevant community has not reached a consensus? is that “reasonable doubt”?

    stef (d9c465)

    I’m pretty sure that what the LA Times is saying is correct: that as you increase the size of the database and apply it, the chance that you will get a hit is greater. The 1 in 1.1 million number applies to a single individual selected at random, but as you add more people to your sample size the odds that you will select someone with those markers increase.

    Chad, that’s not all they’re saying. They’re saying that the chances you pick an *innocent* person will increase.

    What I am saying in the post is that the chances of getting a hit will increase. But if you get only one hit, I don’t see how the chances of that one hit being the right one are affected at all by the size of the database.

    Re-read my post, particularly the part where I set off three questions in italics. You will see that you (and the LAT) are discussing question 1, when the jury is interested in question 2.

    At least that’s how it seems to me.

    Patterico (4bda0b)

    is there a problem with science being used to convict someone when the relevant community has not reached a consensus? is that “reasonable doubt”?

    No, because reasonable doubt depends on the quality of all of the evidence, and varies from case to case.

    What we are discussing is a matter of what the jury should be told about what the numbers mean. There are various questions wrapped up in that question: admissibility of evidence, jury instructions, battling experts, and such. But your “it’s reasonable doubt” formulation is so overly simplistic as to be wildly inaccurate.

    In any event, the post is about 1) what the numbers really do mean, and 2) whether the LAT is telling people about the dispute, and accurately characterizing the issues.

    Patterico (4bda0b)

  25. “But your “it’s reasonable doubt” formulation is so overly simplistic as to be wildly inaccurate.”

    It is. But it seems to me that if the science hasn’t reached a consensus, that goes to the quality of the evidence.

    stef (b022b7)

  26. Here is a quote from an article Prof. Kaye sent me:

    Consider two cases. In Case I, the defendant was identified through a trawl, and further investigation produced confirmatory evidence. In Case II, the confirmatory evidence was known at the outset, making the defendant a suspect. The police did not bother to secure a DNA sample from him, however, because they knew that his thirteen-locus STR genotype was already included in the state’s convicted-offender database. To be on the safe side, rather than just compare the crime-scene genotypes to the defendant’s record in the database, they ordered a full search through the database. This search showed that the defendant matched and that no one else did. The only difference between the package of evidence in the two cases is the order in which it was uncovered. In Case I, the police trawled, then “confirmed.” In Case II, they “confirmed,” then trawled. It is hard to see why the evidence in Case I would be any less persuasive than that in Case II.

    Now, granted, evidence you gather when you have a suspect could be the product of assumptions — but logically speaking, that’s not what we’re discussing here. We’re discussing the other half of the equation: the meaning of the “hit” from the database, and what the numbers say about it.

    Patterico (4bda0b)

  27. nk (#16) makes the most sense of any of these comments.

    assistant devil's advocate (acf7ef)

  28. Here is the best paragraph I have found:

    We can approach this question in two steps. First, we consider what the import of the DNA evidence would be if it consisted only of the one match between the defendant’s DNA and the crime-scene sample (because he was the only person tested). Then, we compare the impact of the match when the data from the trawl are added to give the full picture. . . . In the database trawl case . . . [i]f anything, the omitted evidence makes it more probable that the defendant is the source. On reflection, this result is entirely natural. When there is a trawl, the DNA evidence is more complete. It includes not only the fact that the defendant matches, but also the fact that other people were tested and did not match. The more people who are excluded, the more probable it is that any one of the remaining individuals — including the defendant — is the source. Compared to testing only the defendant, trawling therefore increases the probability that the defendant is the source. A database search is more probative than a single-suspect search.

    That’s why, if the random match probability is “1 in x” and x is greater than the population of the Earth, then the chances that you have the right person increase as the size of the database increasingly approaches the population of the Earth. If the database reaches the actual population of the Earth, and you get only one hit, you have achieved near theoretical certainty that the hit is correct. But the logic of the article suggests that the reliability of the hit would *decrease* as the size of the database increases.
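
    Here is one way to see that point in miniature: a toy Bayesian sketch, where the uniform prior over a 300-million-person pool of possible sources is purely my illustrative assumption.

    # Toy model: the true source is one of `pool` people, each equally likely beforehand;
    # a non-source matches the profile with probability `rmp`; the defendant matches.
    rmp = 1 / 1_100_000
    pool = 300_000_000        # illustrative pool of possible sources (an assumption, not a fact from the case)
    db = 338_000              # offender database size

    def p_defendant_is_source(excluded):
        # candidates other than the defendant who have not been ruled out by a non-match
        remaining = pool - 1 - excluded
        return 1 / (1 + remaining * rmp)

    print(p_defendant_is_source(excluded=0))        # single-suspect comparison: ~0.00365
    print(p_defendant_is_source(excluded=db - 1))   # trawl that excluded everyone else in the database: ~0.00366

    On this toy model the trawl’s exclusions nudge the probability up, just as the quoted passage says, though with a pool this large neither number looks anything like “one in 1.1 million.”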

    Patterico (4bda0b)

  29. #16 nk:

    In most cases, the probabilities are in the quadrillions and other numbers far exceeding the Earth’s population. So your argument in comment 16 becomes a little awkward.

    Patterico (4bda0b)

  30. Patterico, in your discussion of #27, you keep adding the hypothetical information that there is only one ‘hit’ as the database is increased. But that changes the statistical issues.

    SPQR (26be8b)

  31. Good questions!

    Suppose we have a database of one. We have a one in a million matcher. It hits. Is it a million to one that the person did it?

    No. This is commonly known as the prosecutor’s fallacy – the one in a million shot is wrong no matter what.

    Patterico is right that the increased size of database isn’t a particularly helpful divisor or even indicator (but see below – it does affect the results), but that’s because we started with a bad assumption.

    Conditional probability and Bayes’ Theorem are vitally important for many purposes. If we do a 99.999% accurate biological test of some type that would *absolutely* ID the murderer of the Black Dahlia, and I test as guilty, what are the chances I am guilty?

    The answer is, “Zero,” because I wasn’t alive. But if we assume that each person in America at the time (150 million? Guessing.) was equally likely to be responsible, and I was one of them, the test would generate lots and lots of false positives. The probative value would be tiny. The chance I was guilty would be very small – one in 1,500. The 99.999% figure is nonsense, just as the one in 1.1 million figure is.

    In multiple testing scenarios, it’s completely trivial to defeat the divisor issue, but I’m afraid you’re mistaken as to its ultimate relevance. Suppose that I have a one in 10K chance to hit. I grab one person to hit. Suppose we have no other inclusionary or exclusionary evidence.

    If person is innocent: Chance to hit: 1 in 10K.
    If person is guilty: Chance to hit: 1 in 1.

    If we took this guy at random, this still wouldn’t mean it was 10K-1 that he was guilty; not even close.

    Let’s suppose I do 20,000 tests on a panel of people. First, let’s assume none of them did it:

    Chance of at least one hit: About 86%. This is 1-(99.99% to the 20K power). (You absolutely cannot divide to get this. That’s…. that’s ridiculous.)

    Chance of exactly one hit: About 27%.

    The chance of exactly one hit if the guilty person is included in the panel is 14%.
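
    For anyone who wants to check those three figures, a quick sketch treating each of the 20,000 comparisons as an independent 1-in-10,000 event:

    p = 1 / 10_000
    n = 20_000

    at_least_one_all_innocent = 1 - (1 - p) ** n              # ~0.86
    exactly_one_all_innocent = n * p * (1 - p) ** (n - 1)     # ~0.27
    # If the guilty person is on the panel he hits for sure, so "exactly one hit"
    # means none of the other 19,999 innocents hit:
    exactly_one_guilty_included = (1 - p) ** (n - 1)          # ~0.14

    print(at_least_one_all_innocent, exactly_one_all_innocent, exactly_one_guilty_included)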

    Again, the relevance of this will be based on what confirmatory evidence you have – but the hit isn’t very helpful in either case. Still, it *is* more likely that you’ll get a *bad* hit with more people in the pool. (This, again, doesn’t mean that when you do get a hit, that it affects the chance that he’s the guilty one.)

    (On a side note, math professors argued over the Monty Hall problem, so the fact that people disagree doesn’t mean much to me.)

    But the biggest thing is the conditional probabilities act on each other. Some of our hits will be people who were dead/living in Canada/have alibis/etc. If you *start* with evidence pointing at the defendant, and then he tests positive, that’s a jackpot. If you get good confirmatory evidence, that’s a jackpot.

    But 1.1 million to one just based on a 1.1 million-to-one match is very bad math. The 1-in-3 is also bad math. Intuitive jumps in probability are often wrong; you need to do the legwork.

    –JRM

    JRM (355c21)

  32. That’s what I was trying to say, JRM.

    SPQR (26be8b)

  33. I’m sorry, SPQR, but I thought I was just applying the basic premise of the question: we have had a database trawl and we have gotten one hit. But I don’t think the statistical question would be any different if you got two.

    For example, in the “1 in 1.1 million” example, you could easily get two hits in a large database. I still think the random match probability for each hit is the same, and is unaffected by the size of the database, or (and the logic makes sense to me) increased *very* slightly by the size of the database because of the number of samples examined and excluded.

    When the “x” in the “1 in x” random match probability is far less than the world population, jurors have to keep in mind that the guy we have in court ain’t the only guy out there with this profile. I think that’s the point nk was making, and it’s a perfectly fair point. And yes, you could have other evidence tying him in, but jurors should consider the chances that you might possibly find circumstantial evidence tying in someone else.

    All fair points.

    But my point is that — one hit, two hits, or five hits — the random match probability (if accurately calculated up front) for any individual sample is the same (or virtually the same) regardless of the size of the database.

    Make sense?

    Patterico (4bda0b)

  34. JRM,

    I’d be interested in a response to the blockquoted material in the previous comments from the law professor’s law review article.

    I still think the 1 in 1.1 million number is meaningful; what isn’t meaningful is the argument: “What are the chances that it just happens to be THAT guy?!?!?!” Well, it’s 1 in 1.1 million, but absent other evidence, that leaves many other possibilities. THAT is the prosecutor’s fallacy, not the 1 in 1.1 million number in the abstract.

    In other words, if the chance that a house is struck by lightning is 1 in a million, then the chances that your house will be struck by lightning are 1 in a million. If you have 5000 houses, the chances that one of your houses is struck is 1 in 200. BUT the chance that *that* house was going to be struck was still 1 in a million.
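
    A quick check of those lightning numbers, assuming independent 1-in-a-million strikes per house:

    p = 1 / 1_000_000
    houses = 5_000

    p_some_house_struck = 1 - (1 - p) ** houses    # ~0.005, i.e. roughly 1 in 200
    p_that_house_struck = p                        # still 1 in a million, no matter how many houses you own

    print(p_some_house_struck, p_that_house_struck)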

    Again, that’s at least the way it seems to me, even after reading your comment.

    Patterico (4bda0b)

  35. Patterico, the probability of a match means that each time you compare the collected DNA with a single instance of database DNA, there is a 1 in 1.1million chance of a match.

    If you compare to more instances, ie., the database is increased, then the amount of expected total matches increases. So a larger database is expected to produce more matches than a smaller one.

    Of course, given the nature of the biological facts, there does not have to be more than one match in a database; the distribution of DNA markers is not necessarily perfectly smooth.

    The problem is how one uses the information, as JRM was illustrating. By using the DNA match itself to sort people out of a database, it is not any longer correct to say that the match itself tells us that there is a 1 in 1.1 million chance against the person being innocent.

    SPQR (26be8b)

  36. Patterico, in #33, the problem is that you started by sorting out only houses that had been struck by lightning.

    SPQR (26be8b)

  37. Conditional probability and Bayes’ Theorem are vitally important for many purposes. If we do a 99.999% accurate biological test of some type that would *absolutely* ID the murderer of the Black Dahlia, and I test as guilty, what are the chances I am guilty?

    The answer is, “Zero,” because I wasn’t alive.

    Indeed, and that’s why I argued in the post that we should leave the word “innocent” out of this discussion. What we’re talking about is the chance that the defendant (the “hit” in the database) actually left the DNA at the scene (the profile in evidence).

    Patterico (4bda0b)

  38. In other words, if you create a hypothetical where you know that the house you want was struck by lightning, so you sort out all houses struck by lightning and then say that the odds that the house you picked is the wrong house is 1 in a million.

    SPQR (26be8b)

  39. Patterico, the probability of a match means that each time you compare the collected DNA with a single instance of database DNA, there is a 1 in 1.1million chance of a match.

    If you compare to more instances, ie., the database is increased, then the amount of expected total matches increases. So a larger database is expected to produce more matches than a smaller one.

    Agreed, and consistent with my post.

    Of course, given the nature of the biological facts, there does not have to be more than one match in a database; the distribution of DNA markers is not necessarily perfectly smooth.

    Agreed.

    The problem is how one uses the information, as JRM was illustrating. By using the DNA match itself to sort people out of a database, it is not any longer correct to say that the match itself tells us that there is a 1 in 1.1 million chance against the person being innocent.

    I can’t even respond to that because of the use of the word “innocent” which throws off the whole discussion. I suggest that you don’t use that word when discussing the statistics because it muddies the water. Innocence and guilt have to do with factors other than a DNA match — the issue is what the match means. So when we’re discussing pure math, using the word just confuses things.

    Patterico (4bda0b)

  40. In other words, if you create a hypothetical where you know that the house you want was struck by lightning, so you sort out all houses struck by lightning and then say that the odds that the house you picked is the wrong house is 1 in a million.

    I don’t know what the “wrong house” means in this example.

    The chance a house will be struck by lightning (bad example because there could be physical reasons a house attracts lightning, like location, materials, height, etc. but ignore all that for now) are the same whether the house has been struck or not.

    If your coin came up heads, the chances it *did* come up heads are 100% (question 3 in the post) but the chances it was going to (before it was tossed) were 50% (question 2).

    Patterico (4bda0b)

  41. It’s a fun debate but I don’t have all day for it, gents.

    Patterico (4bda0b)

  42. Can I post in this one I wonder?

    Levi (76ef55)

  43. I’ll take one more stab at an analogy.

    We’re looking for the green die. There is only one in the world — just as we know that only one person in the world actually left that DNA at the scene.

    A witness says that the green die was in a room, and had come up number 1 after it was thrown.

    There is *a* room full of dice lying on the floor randomly thrown. We don’t know if this is *the* room that has the green die or not, and we are colorblind and can’t tell what color the dice are.

    We look in the room and find a die that came up number 1. Is it the green one? What are the chances?

    Well, that depends on a lot of things.

    I would think the main questions would be: How many dice are there in the world? and How many sides are on each die?

    If there are 6 billion dice in the world, and each one has 1 quadrillion sides, the chances are pretty good that the die you found is the green one. Because the green one came up as a “1” when it could have come up any number from 1 to a quadrillion.

    And I don’t see how it matters how many dice are in the room — except that the more of the 6 billion dice are in there, and the more of them you look at and find came up with a number that is *not* 1, the (very very very slightly) greater the chances that the die you found that *did* come up 1 is the green die. But the chances that it is the right die are still *mostly* a function of the absolute probability of a die coming up “1” anywhere in the world, which is a function of the number of dice and the number of sides per die.

    The number of dice in your room affects how likely you are to find a “1” — but not how likely it is that, when you have found a “1,” it is the green die.

    Or so it seems to me.
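
    Here’s a rough way to put numbers on the dice picture: a sketch that assumes the room is a random sample of the world’s dice and approximates the tiny probabilities by expected counts.

    WORLD = 6_000_000_000              # dice in the world
    SIDES = 10 ** 15                   # one quadrillion sides per die

    def p_found_one_is_green(room_size):
        p_green_in_room = room_size / WORLD        # the green die (which we know shows "1") happens to be here
        exp_other_ones = room_size / SIDES         # expected non-green dice showing "1" in the room
        return p_green_in_room / (p_green_in_room + exp_other_ones)

    for room_size in (1_000, 1_000_000, 6_000_000_000):
        print(room_size, p_found_one_is_green(room_size))
    # The room size cancels out of the ratio: every case prints ~0.999994, a number driven
    # by SIDES (the random match probability) against WORLD (how many dice exist), not by
    # how many dice happen to be in the room.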

    Patterico (4bda0b)

  44. Unless there is a problem with the way that the evidence is collected and stored, the odds of a false match should have a linear growth, not an exponential growth as some have suggested. Still, that deserves serious consideration. Is the state of biometrics software conducive to allowing a normal police department to investigate the possibilities that that would create for them in a national database?

    One of the problems with false positives is that if you have a rate of say… .0001 and that is thrown against 300,000,000 Americans, that’s still 30,000 false positives. Even at .000001, that’s 300 suspects to investigate. I don’t know the statistics from leading biometrics COTS products about their rate of false positives, but I can imagine it would be a real headache for law enforcement, especially if the database is maintained with the same attention to detail that goes into other government databases for crime like sex offender registries.
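
    To put that scaling in one place (adding the article’s 1-in-1.1-million figure for comparison):

    population = 300_000_000
    for rate in (1e-4, 1e-6, 1 / 1_100_000):
        print(rate, rate * population)    # ~30,000, ~300, and ~273 expected false positives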

    MikeT (d15f5c)

  45. Furthermore, with the mobility that we have today, if you have false positives distributed all over the country, how can a cop in Nebraska really know that one of those 300 potential hits from New York or Florida wasn’t in town over the weekend, visiting the victim? I don’t think it would be anywhere as simple as:

    SELECT * FROM dna_registry WHERE genetic_code = :genetic_value AND crime_state = ‘NE’;

    MikeT (d15f5c)

  46. It’s only a real headache for the 30,000 or the 300 or however many are accused; the others are just doing their jobs. I’m not sure what it is for those convicted (whether in court or just in the press) who shouldn’t have been.

    htom (412a17)

  47. First, there’s a 1 out of 3 chance of a false positive from the DNA bank.

    Second, if there are 22M men in the greater SF Bay Area (I pulled that number out of my ass), that makes for about 20 men who also match the DNA profile, in addition to the suspect.

    So there is only about a 4-5% chance, based on DNA evidence alone, that he did it.
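
    In numbers, keeping my invented 22 million and treating every matching man as equally likely on the DNA alone:

    men_in_area = 22_000_000          # the made-up figure from above, not a census number
    rmp = 1 / 1_100_000

    expected_other_matches = men_in_area * rmp                  # ~20 other men expected to share the profile
    p_guilty_on_dna_alone = 1 / (expected_other_matches + 1)    # ~0.048, i.e. the 4-5% above

    print(expected_other_matches, p_guilty_on_dna_alone)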

    On that basis, I absolutely would not convict. Even 2 out of 3 odds isn’t good enough for me, when it comes to guilt beyond a reasonable doubt.

    I do believe the jury should be told all of this. In fact, I think DNA experts, when they testify as “neutral” experts in a case, have an ethical duty to disclose this (but they should use real population numbers, they can’t invent them like I did).

    . . .

    However, the police had more than DNA evidence in this case. Their target was a serial rapist(!) who used the same MO(!). Again, all by itself, that evidence would not be good enough. There are a lot of serial rapists out there, and more than one of them has posed as a police officer.

    But together, and with evidence that he was in SF at the time, it’s a compelling case, well beyond the requirements for “reasonable doubt.” Justice was done in this case, even if the procedure was not completely fair.

    . . .

    The “1 in 1.1 million” statistic is very dangerous in terms of misleading the jury. Jurors tend to hear that and think there is only a 1-in-1.1-million chance that the suspect is innocent, which is absolutely not true. Prosecutors take advantage of jurors’ innumeracy all the time, and it’s despicable. I hope you are always honest with your juries, Patterico.

    Daryl Herbert (4ecd4c)

  48. Part of the problem of this debate, and the article is trying to tie the two statistics together.

    The odds of a DNA match for the 5 markers is 1 in 1.1 mil

    The odds that such a match is in the Database is 1 in 3

    The odds (as has been pointed out) that the killer’s DNA matches himself is 1:1

    Therefore: IF the killer’s DNA is in the database the odds of another hit are 1:3. If the killer’s DNA isn’t in the database the odds of a false hit are still 1:3.

    What this tells us is that a hit in the database stands a good chance of being the killer and is worth checking out. Nothing more.

    Dr T (340565)

  49. First, there’s a 1 out of 3 chance of a false positive from the DNA bank.

    No, that’s not true. There’s a 1:3 chance of any match in the database. The chance of a false positive is still just 1:1.1 million.

    Steverino (6772c8)

  50. Patterico #29,

    In most cases, the probabilities are in the quadrillions and other numbers far exceeding the Earth’s population. So your argument in comment 16 becomes a little awkward.

    Sure. The expert will say that it is impossible, based on a reasonable degree of scientific certainty, for the DNA in evidence to have come from any person other than the defendant. It’s very strong overwhelming evidence. It is proper argument to argue that “based on reasonable degree of scientific certainty the DNA could not be identifying anyone else”.

    I would say that the chance of improper argument comes in a case like this, where the DNA excludes 1,099,999 out of every 1.1 million people from whom the DNA could have come. There are still more than a few of those one-in-1.1-million persons out there. It sounds as though there is only a one-in-1.1-million chance that the defendant is innocent, but that is not so. It’s just that we grabbed one of the 300 or so who could be guilty.

    nk (1e7806)

  51. There is no DNA control group of “not guilty persons” to render such evidence “blind”, as in “double-blind studies”. A “hit” in the “not guilty” database would be “reasonable doubt”. But where one would get such persons to provide the samples of the “not guilty” DNA is as big a question as the premise of this threads commentaries. Should these “Control Samples” of blood be obtained from the dead who were never convicted of any crime, or should the blood be collected from the newborn, who are undoubtedly “innocent”?

    This then begs the questions of just how truly “not guilty” are these dead DNA donors and just how “innocent” will these newborns remain throughout their life? I fear that only the Cheshire Cat has the answers and he’s not telling.

    Of one thing I am sure, If it’s the Los Angeles Times, it’s even money that the story is always only half right.

    C. Norris (7dc1d8)

  52. Would you object if I said, “Ladies and gentlemen of the jury, the odds that my client is guilty are 0.33%”. In a loud voice, “One in three hundred!” 😉

    nk (1e7806)

  53. C. Norris: A “hit” in the “not guilty” database would be “reasonable doubt”.

    No, it wouldn’t. In fact, if we knew everyone in the NG database was NG, we wouldn’t give a damn how many hits there were, because by definition, none of them could be the perp. (They wouldn’t be in the DB if they weren’t innocent.)

    You could make a DB out of all Chinese citizens born after 1990. By definition, they would all be innocent, but there would probably be 1 hit for every 1.1M births. It doesn’t matter how many “hits” you find in that database, that doesn’t change one whit whether or not the suspect is guilty.

    Daryl Herbert (4ecd4c)

  54. Whatever the math, the purpose of every such article like this from the Slimes is to undermine Americans’ faith in their legal or (insert institution here) system.

    Bezmenov on demoralization of America

    Patricia (f56a97)

  55. Patterico,
    A little knowledge can be a dangerous thing. Ever heard that before? It is probably safe to assume that the LA Times reporter who wrote this story is quite ignorant about statistical analysis. However, at this point in time your grasp of the subject is insufficient for you to responsibly use this as a tool to determine a target worthy of prosecution. Your examples, analogies, and explanations are interesting enough, but they have little bearing upon the subject at hand and illustrate far more of what you don’t understand than what you do understand. Math is a funny thing, not really very subjective.

    Amused Observer (168d86)

  56. Would you object if I said, “Ladies and gentlemen of the jury, the odds that my client is guilty are 0.33%”. In a loud voice, “One in three hundred!”

    That’s not an accurate presentation of the evidence. Even though there are about 300 Americans who match, not all of them could have done it.

    First, not all were men who had reached puberty (subtract 160).

    Second, not all of them had the opportunity to strike (geographically, too far away). Even if we assume that 1/4 of the country lived close enough to SF to carry out the attack, that brings you down to 35 men.

    Finally, that’s based on DNA evidence alone. On DNA alone, you could claim something like 1:35 (3% chance). If this was a pure cold hit, saying “3% chance of guilt” would be accurate. But this was a cold hit + other evidence.

    . . .

    The defendant is a man, and 1/2 of the population are men. That doesn’t mean that the chance that defendant is guilty is 1 in 150 million.

    If the prosecution’s case was based solely on the fact that D is an American man, as is the rapist, you could accurately say 1 in 150 million odds of guilt.

    But if there is other evidence as well, you can no longer say that.

    . . .

    If DNA evidence narrows the potential suspects down to 300 Americans, that doesn’t mean the defendant walks. If the other traits defendant shares with the perpetrator are weird enough (being in SF at that time, being a serial rapist, having that MO), and those weird traits belong to far fewer than .33% of the population, then the chance that Defendant is guilty is very good–past reasonable doubt.

    If serial rapists with that MO who live in SF actually make up 10% of the population of America, then the suspect’s odds of being guilty are only 1:30. (300 x 10% is 30, meaning we expect that there are 30 serial rapists with that MO in SF and having those 5 DNA markers.) Luckily for the prosecution (and the rest of us!), 10% of Americans are not serial rapists living in SF.

    . . .

    We can multiply the probability of having 5 DNA markers times the probability of being a serial rapist, because we are sure that those probabilities are independent. Having DNA markers does not make someone more or less likely to be a rapist. (If people w/ those 5 markers were significantly more likely to be a rapist, we would expect more “cold hits” from the 300k person database of sex offenders.)

    Daryl Herbert (4ecd4c)
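
A rough reconstruction of the winnowing arithmetic in comment 56, using the commenter’s own round numbers (all of them illustrative guesses rather than figures from the article):

    # Winnowing the ~300 Americans expected to share the profile (comment 56).
    # All inputs are the commenter's round numbers, not real data.
    matching_profiles_us = 300
    adult_male_matchers  = matching_profiles_us - 160   # drop women and boys -> 140
    near_enough_to_sf    = adult_male_matchers / 4      # assume 1/4 live close enough -> 35

    print(f"Adult male matchers near SF: {near_enough_to_sf:.0f}")
    print(f"On DNA, sex and geography alone: roughly 1 chance in {near_enough_to_sf:.0f}")

    # The deliberately absurd counter-example: if 10% of Americans really were
    # serial rapists with that MO living in SF, 10% of the 300 matchers would
    # remain as plausible alternatives.
    print(f"Alternatives in the 10% scenario: {matching_profiles_us * 0.10:.0f}")   # 30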

  57. That’s not an accurate presentation of the evidence.

    It certainly is not. Neither is “the chances of the defendant being innocent are one in 1.1 million”. That was my point.

    nk (1e7806)

  58. Amused Observer,

    Maybe rather than insulting me, you could give your opinion about what you think the proper way to view the issue is.

    Maybe I can arrange a discussion between you (Expert #1) and the head of the statistics department at Oxford (Expert #2).

    Patterico (4bda0b)

  59. What, Patterico? And make Amused Observer actually show his own command of the subject? Why, how presumptuous of you! (grin)

    I think the basic idea, that the “1 in 1.1 million” chance-of-a-match statistic can be misused, is correct, for the reasons I mentioned earlier. But I do agree that the Los Angeles Times article exaggerates the issue to make the defendant’s guilt look less likely than it actually is.

    SPQR (26be8b)

  60. I’m not sure I’ve followed all this, but one clarification:

    The math for a chance of 1 in 1.1 million is *much* different from the math for 1 in 14 quadrillion. One in 14 quadrillion = it’s him, or a twin, or a triplet, or a clone. End of line.

    One in 1.1 million is more problematic; you must have other evidence for this to help you.

    –JRM

    JRM (355c21)
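
One way to see JRM’s point is to compare the expected number of coincidental matches worldwide at each rarity, assuming roughly 6.7 billion people on the planet (the comparison itself is just an illustration):

    # Expected random matches among ~6.7 billion people at each profile rarity.
    world_population = 6_700_000_000

    for rarity in (1_100_000, 14_000_000_000_000_000):
        expected = world_population / rarity
        print(f"Rarity 1 in {rarity:,}: ~{expected:g} expected random matches worldwide")

    # Roughly 6,000 people on Earth would be expected to match a 1-in-1.1-million
    # profile (so other evidence is essential), versus ~0.0000005 people for
    # 1 in 14 quadrillion (effectively unique on the planet).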

  61. There we agree.

    Patterico (4bda0b)

  62. Patterico, it is seriously misleading to just give the 1 in 1.1 million random match probability in a case like this. I would consider it reversible error. What the jury should be told is debatable, but I don’t think there is any question that it should be more than just the random match probability. I don’t usually cite my credentials but in this case I will note I have a PhD in mathematics and although statistics is not my area of specialization I am confident about what I state above.

    James B. Shearer (fc887e)

  63. Thanks, Patterico, for posting and inviting discussion on this.

    I generally agree with EDP, SPQR, JRM, ajacksonian and James B. Shearer above.

    One issue is whether the database is a statistically valid sample of the entire population. Another is the statistics of the particular marker sequences. Those two are enough for me.

    I’m not a PhD in math or statistics, but I’ve got enough education and experience in related fields, and enough rudimentary background in statistics, to see possible holes in the foundation for the “one in a million” statistic given the jury. I could not have voted to convict if I had known the information that the prosecutor and court withheld from the jury.

    But, being a lawyer, I also realize that few prosecution or defense lawyers or judges have even the understanding that some commenters here have. After all, People v. Collins, 68 Cal.2d 319 (1968) (“Sleepy” McComb dissenting without comment) reversed the trial court for allowing the prosecution’s “expert” witness to conflate (with absolutely no foundation) the relatively simple matters of dependent and independent probabilities.

    The statistical issues here are a touch more complicated than that.

    Occasional Reader (34440c)
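
For what it’s worth, here is a toy illustration of the dependence problem Occasional Reader alludes to: when two traits tend to occur together, multiplying their separate frequencies as though they were independent overstates how rare the combination is. The trait names and numbers below are invented, not taken from People v. Collins:

    # Invented numbers illustrating dependent vs. independent probabilities.
    p_beard    = 0.10   # assumed frequency of trait A
    p_mustache = 0.25   # assumed frequency of trait B

    naive_joint = p_beard * p_mustache        # valid only if A and B are independent
    print(f"Naive 'independent' estimate: {naive_joint:.3f}")     # 0.025

    # Suppose (hypothetically) that 90% of bearded men also have mustaches:
    p_mustache_given_beard = 0.90
    true_joint = p_beard * p_mustache_given_beard   # P(A) * P(B | A)
    print(f"Joint frequency with dependence: {true_joint:.3f}")   # 0.090

    # The naive product makes the combination look about 3.6 times rarer than it is.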

  64. I’m not an expert and it’s been a long time since I studied rudimentary statistics. I do remember enough to recognize the limitations of your grasp of the subject, however. Mathematics is not subjective. An attempt to color a jury’s perception using math you don’t fully understand and analogies that don’t really apply is at very best intellectually dishonest and misleading. Or, more simply put: what Mr. Shearer said.

    Amused Observer (a8b179)

  65. I don’t usually cite my credentials but in this case I will note I have a PhD in mathematics and although statistics is not my area of specialization I am confident about what I state above.

    I don’t usually cite others’ credentials but in this case I will note that Peter Donnelly’s credentials as the head of statistics at Oxford are far more impressive than yours.

    I’m not an expert and it’s been a long time since I studied rudimentary statistics. I do remember enough to recognize the limitations of your grasp of the subject, however. Mathematics is not subjective. An attempt to color a jury’s perception using math you don’t fully understand and analogies that don’t really apply is at very best intellectually dishonest and misleading. Or, more simply put: what Mr. Shearer said.

    More simply put, what I just said. If it’s a battle of credentials, you lose when compared with the head of the statistics department at Oxford. But thanks for playing!

    Patterico (4bda0b)

  66. If that wasn’t blunt enough, Amused Observer: I am amused by your utterly contentless observations.

    Patterico (339671)

  67. My theory:
    I believe we can take the 1 in 1.1M probability of a match as a valid starting point. Then decide how many men are in San Francisco on any given day. Let’s say 5M people, which would be about 1.5M men of sufficient age to be suspects. That means you probably have only 1, 2, or maybe 3 who would match the sample as closely as the test did. As others have said, this isn’t good enough without other evidence.

    Next would be what percentage of the male population of San Francisco are rapists.
    (1 in 1000?)

    Then what percentage of rapists use that MO, including getting into the residence without force. (1 in 20?)

    Bayes’ theorem would then produce the odds that another suspect could match these characteristics as well as the person charged.

    Just crudely multiplying these probabilities gives me a guess of 1 in 6000.

    I think this is beyond a reasonable doubt.

    Ken from Camarillo (245846)
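
Ken’s crude multiplication, written out with his own guesses as inputs (none of them are figures from the case):

    # Reproducing the rough calculation in comment 67; every input is a guess.
    sf_men            = 1_500_000        # adult men in SF on a given day
    random_match_prob = 1 / 1_100_000
    p_rapist          = 1 / 1_000        # fraction of men who are rapists
    p_same_mo         = 1 / 20           # fraction of rapists with that MO

    expected_matchers = sf_men * random_match_prob
    print(f"Expected coincidental DNA matches among SF men: {expected_matchers:.1f}")  # ~1.4

    # Using the pessimistic "maybe 3" other matchers:
    for n_other_matchers in (1, 3):
        expected_alternatives = n_other_matchers * p_rapist * p_same_mo
        print(f"With {n_other_matchers} other matcher(s): about 1 in "
              f"{1 / expected_alternatives:,.0f} that one of them is also a rapist with that MO")
    # The 3-matcher case comes out near 1 in 6,700 -- the comment's rough "1 in 6000".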

  68. Another angle:
    If every rapist known to use that MO is in the DNA database (meaning every other rape of that MO was solved) and only one hit resulted from the database (assuming the second best hit was much inferior), then the probability of someone else being the murderer becomes vanishingly small.

    Ken from Camarillo (245846)

  69. The opinion linked by Justin in his new post explains the “prosecutor’s fallacy” (i.e., that a 1-in-1.1-million chance of a match is not the same thing as a 1-in-1.1-million chance that the defendant is innocent) at pages 4885 et seq., better than I have here.

    Which is why they’re Circuit Judges and I’m not, I guess. 😉

    nk (1e7806)
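
A minimal Bayes sketch of the prosecutor’s fallacy nk is describing: the 1-in-1.1-million figure is P(match | innocent), which is not the same thing as P(innocent | match). The size of the pool of plausible alternative perpetrators below is an invented illustration, not a figure from the case:

    # P(match | innocent) vs. P(innocent | match), via Bayes' rule.
    random_match_prob = 1 / 1_100_000     # P(match | innocent person)
    pool_size         = 1_500_000         # invented pool of plausible perpetrators
    p_guilty_prior    = 1 / pool_size     # each person equally likely a priori (simplification)

    # P(match | guilty) is taken to be 1.
    p_match = p_guilty_prior * 1.0 + (1 - p_guilty_prior) * random_match_prob
    p_guilty_given_match = (p_guilty_prior * 1.0) / p_match

    print(f"P(match | innocent): 1 in {1 / random_match_prob:,.0f}")
    print(f"P(innocent | match): about {1 - p_guilty_given_match:.0%}")
    # With these invented inputs the two numbers are 1 in 1,100,000 and roughly 58% --
    # conflating them is exactly the prosecutor's fallacy.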

