Wednesday, July 09, 2003

Frank on SKAPP (part 2)

Ted Frank's discussion of "false negatives" and "false positives" in the application of Daubert (see my post of 7/07/03) is intriguing on a number of levels.

At the simplest level, Frank's point can be read as one about balance. Like many participants in the debates on scientific evidence, the authors of SKAPP do seem more concerned with one kind of evidentiary mistake (Frank's "false negatives") than with others. An analytical framework that also gave significant weight to the social problems occasioned by questionable scientific evidence, and to the injustices it can cause, might be more useful, credible, and practical. On this level, Frank has registered what seems to me a valid criticism of SKAPP's inaugural report.

Let us grant, for purposes of argument, that Daubert sometimes results in the exclusion of expert testimony that many reputable scientists would recognize as legitimate. The problem is that a similar criticism could be lodged against almost any evidentiary rule of general applicability. Any general evidentiary guidelines, that is, will likely lead to bad results in particular cases. It is possible to eliminate a particular type of bad result by adopting rules at one end of the permissiveness spectrum. For example, the risk of excluding good evidence could be reduced to zero by a rule mandating the admission of all evidence. The trouble is that such a rule would give free rein to the bad evidence, multiplying unjust outcomes. So long as we remain in the business of sorting evidentiary wheat from chaff, we need some credible mechanism for doing the sorting.

It's important to recognize that this criticism does not (or need not) reduce to a charge of bias against SKAPP. It is a substantive critique of SKAPP's report: that it does not propose evidentiary principles that would correct the deficiencies it identifies without (perhaps) introducing other problems that might be equally bad. In fairness to SKAPP, a similar critique could be lodged against no small number of participants in the debate on scientific evidence, and not every contribution to public discourse needs to embody a finished architectonic of reform. SKAPP, that is, has not advertised its report as a comprehensive prescriptive solution to America's evidentiary problems, and perhaps its ambition, so far, has merely been to draw attention to serious deficiencies in Daubert that have not (SKAPP feels) received sufficient notice. As scientists are wont to say, there is a need for further research.

To that end, it may be helpful to look more closely at Frank's "false positive"/"false negative" model, to see where it takes us. Frank's model naturally suggests analogies from the field of public health. But there are at least two ways to frame a public health analogy here. The first would be to conceive of legal injury as a disease, with evidentiary rules constituting a kind of diagnostic tool. This first version of the analogy would measure the efficacy of evidentiary rules by the outcomes they tended to promote. A "false positive" would represent the admission of testimony that promoted a false affirmative diagnosis -- viz., an unwarranted verdict for the proponent. A "false negative" would be the exclusion of evidence that would have facilitated a correct diagnosis -- a deserved verdict in the proponent's favor.

There are problems with this way of framing the analogy, and I doubt it is the analogy that Frank intends. The most obvious difficulty would be the absence of any neutral, reliable, objective criteria to identify what litigation outcomes are desirable. After all, if we all agreed on outcomes, we could skip the trials altogether.

A second way to frame the analogy (and one closer, I suspect, to Frank's intent) would be to think of valid scientific insight as a kind of benign virus, and of evidentiary rules as diagnostic tests for detecting the virus's presence in the testimonial organism. Here a "false negative" would again be the exclusion of scientifically legitimate testimony; a "false positive," the admission of scientifically dubious evidence. But the standards against which the diagnoses were measured would be different. Here the criterion would be whether the relevant testimony could legitimately claim scientific status.
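To lay the two framings out schematically (the gloss is mine, not Frank's):

Under the first framing (legal injury as the disease, with litigation outcomes as the touchstone): a "false positive" is admitted testimony that helps produce an unwarranted verdict for its proponent; a "false negative" is excluded testimony that would have supported a verdict the proponent deserved.

Under the second framing (scientific validity as the thing to be detected): a "false positive" is scientifically dubious testimony that is admitted; a "false negative" is scientifically legitimate testimony that is excluded.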

What some partisans in the "junk science" wars may insufficiently appreciate (and I do not include Frank in this) is that this version of the analogy, too, relies on evaluative criteria that aren't always neutrally, reliably, and objectively defined. Within the community of practicing scientists, there are often heated debates about the methodological legitimacy of different practitioners' approaches and findings, and neither the scientists nor the epistemologists have located the Philosopher's Stone through which such debates might be definitively resolved. A naive view, that is, looks to science for a settled body of findings and results. A more sophisticated view recognizes that although science does involve methodical empirical inquiry, there is no single, univocal "scientific method" that can be applied to any and all sets of observational fact to determine a unique theoretical conclusion. All the same, Frank might say, some theories are better than others, and some are demonstrably false. And Frank would be right. An evidentiary standard that attempts to assess testimony along epistemological lines may be fallible, but it probably can claim more neutrality and objectivity than standards governed by preferences for particular litigation outcomes.

There is, however, another problem with the evidentiary model suggested by the second version of the public health analogy: its circularity. Daubert's point of departure is that scientific testimony should be admissible if and only if it is -- well, scientific. That criterion seems uninformative. We cannot say that a litmus test is a valid measure of alkalinity because the litmus paper turns blue in all and only the cases in which it turns blue. We would need some independent standard against which to measure the litmus paper's performance. Daubert did take some steps in this direction, by commending judicial attention to methods characteristic of good-faith scientific endeavor. The problem some nevertheless perceive is that after ten years, this approach seems to be leading, in practice, to evidentiary outcomes that are difficult to predict, and sometimes harder still to justify by reference to the stated desideratum of promoting testimony founded on good science.

This relates, I believe, to one reason why litigants who believed the science to be on their side might find adverse Daubert rulings harder to swallow than other evidentiary setbacks. If my claim founders because the key evidence is excluded by the hearsay rule, I may find the result unjust, but at least I can understand, perhaps, that there might be good general reasons for excluding certain varieties of testimony even though such testimony may be probative and accurate on particular occasions. That understanding comes harder when testimony is excluded because its own claims to reliability are deemed inherently weak, and the proponent believes those claims to be strong under the very criteria that the exclusionary decision invokes.

To put the point another way, litigants may be likelier to perceive injustice when their claims or defenses are stymied by "false negatives" produced by rules propounded in the name of scientific accuracy and truth. It may also be that rules defended in those terms are especially likely to promote judicial decisions in which policy choices are masked -- but maybe that's another topic for another day.